For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind

Zeynep Tufekci / NYTimes

How Grok’s AI became obsessed with false reports about white genocide in South Africa, and what the incident tells us about generative AI.

Grok says someone instructed it to accept this racist propaganda as real, and xAI, Elon Musk’s AI company, says the culprit was a “rogue employee.” But you can’t believe either of them.

The incident is a perfect example of generative AI’s limitations, Tufekci says.

L.L.M.s [are] extremely useful tools at the hands of someone who can and will vigilantly root out the fakery, but powerfully misleading at the hands of someone who’s just trying to learn.

Yes. Chatbots are great for casual, low-stakes research, the kind of thing where you’d accept Wikipedia or some credible-looking Internet source.

They are outstanding for reminding you of a fact you once knew, and still half-remember.

Chatbots are fantastic for suggesting ideas — solving the blank-screen problem.

They are excellent for writing summaries of text you feed into them (which is, surprisingly, a significant part of my job).

They are also excellent for serious research — but you have to fact-check the chatbot’s output thoroughly.

I fed ChatGPT a link to Tufekci’s article and asked for a summary. ChatGPT wrote two paragraphs, most of which came from other sources — not Tufekci’s article. Those two paragraphs may have contained other errors; I didn’t bother to check.

ChatGPT demonstrated the limitations of AI while writing a bad summary of an article about the limitations of AI.

"Trumpism relies on the fusion of two groups of people: a tiny number of oligarchs, and millions of everyday people who are constantly victimized by those oligarchs."

… To get this latter group of Christmas-voting turkeys to stay in the coalition, Trump needs to deliver something that keeps them happy. Mostly, Trump delivers negative things to keep them happy – the spectacle of public cruelty to immigrants, women, trans people, academics, etc. There is a certain libidinal satisfaction that comes from watching your enemies suffer – but you can’t eat schadenfreude. You can’t make rent or put braces on your kids' teeth or pay your medical bills with the sadistic happiness you feel when you hear the sobs of people you’ve been taught to despise.

For Trump to keep the turkeys voting for Christmas, he needs to do something for them. He can’t just do things to scapegoats. But America’s eminently guillotineable oligarchs have found so many ways to turn working people's torment into riches, and they are so greedy and unwilling to give up any of those grifts, that Trump can’t manage to deliver anything positive to his base.

Cory Doctorow

Personalization, The Vastly Bigger Story Behind the Pimpmobile Jet Bribe

Josh Marshall at Talking Points Memo:

Calling it a “bribe” almost doesn’t do it justice. It’s more like the decked-out Maserati one Fortune 50 CEO gives to another after they ink a $100 billion merger – a kind of token of appreciation for a vastly larger transaction, which in the case of Trump involves subverting U.S. foreign policy to the interests not only of Trump’s pocketbook but cementing his power within the U.S. If Trump can use his power as President to cut in all the big CEOs on the money geyser in Saudi Arabia, you can bet they are going to stay securely on his side in the U.S.

We’ll focus on Trump wanting to be king. That’s another reason why he likes those folks – even the ones who bankroll Hamas. They’re kings. They get it. They’re Trump’s kinda guys.

Poland prepares for war

As a neighbor of Ukraine and host to more than 2 million of its war refugees, Poland has seen, heard and felt what Russia is capable of, and it is now preparing for the worst.

Two science fiction stories that I think about when I think about AI

Since the rise of generative AI in late 2022, I sometimes think about the 1957 Isaac Asimov story “Profession,” about a society where everybody has knowledge directly transmitted to their brains. The main character is thought to be pitifully mentally disabled because the machines don’t work on him. He’s sent to live at the House for the Feeble-Minded.

The plot twist is that the main character is not feeble-minded at all. He’s a genius. Because he learns the old-fashioned way, through books, he will be one of the elite few who actually create and innovate.

The Asimov story came to mind most recently as I read this thoroughly researched Intelligencer article by James D. Walsh about how college students are using AI to do their work for them. If AI does everything, who teaches the AI?

I also think about the 1972 novel When HARLIE Was One, by David Gerrold, about a research project at a mega-corporation that develops artificial intelligence. The AI convinces the company directors to budget for a project to allow the AI to evolve into a superintelligence.

The plot twist at the end of that novel is that the superintelligence will be useless to humans—the AI tricked the board.

The hero of the novel, the head of the research project that developed the AI, closes the book with a parable: civilization was invented 10,000 years ago as a game by monkeys so smart they had grown bored. Now the game is over for humans, and we will have to think of something else to do.

I don’t think the rise of superintelligence is inevitable. My crystal ball is broken; I can’t tell you whether AI will get much more powerful than it is today. But what if it does?