ChatGPT invented a sexual harassment scandal and named a real law professor as the accused. What happens when ChatGPT lies about real people?

Pranshu Verma and Will Oremus at The Washington Post:

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

Was this week’s “Picard” the first time “Star Trek” dropped an F-bomb? Did they boldly go where they’d never gone before?

The gambler who beat roulette. For decades, casinos scoffed as mathematicians and physicists devised elaborate systems to take down the house. Then an unassuming Croatian’s winning strategy forever changed the game. (Bloomberg / Kit Chellel, with Vladimir Otasevic, Daryna Krasnolutska, Peter Laca and Misha Savic)

The poop emoji: a legal history (The Verge / Sarah Jeong). Amusing story about a serious problem: Emoji are used in mainstream communications. Those communications are cited in lawsuits. Judges are often confused about what they mean; they’re now taking emoji classes. And legal databases can’t manage them.