I've been through four recessions in my adult lifetime and came through all of them OK, but the coming bust looks bigger than anything we've seen since the Great Depression.

And the U.S. will have to weather that storm under the most inept and corrupt national government we’ve known.

How should I prepare for that? I have no answers, so we've been continuing along as we always have.

I hope that living in California will provide protection.

This is the kind of thing that keeps me lying awake at 4 am, staring wide-eyed at the ceiling in the dark.

Previously

Here’s something I saw walking the dog, giving a Lincoln Lawyer vibe. I have not seen this car or license plate before. It was parked about a half-dozen houses away from us on our street.

Lately when I think of going to the movies I think of driving across town, parking and paying money to sit in a dark room and watch things on a screen. I have screens at home.

"Workslop" is the result of employees using AI to do shoddy work and pass the work of fixing it on to others

“Workslop: Bad study but an excellent word”, by David Gerard at Pivot To AI:

The word of the day is: “workslop.” There’s a new article in Harvard Business Review: “AI-Generated ‘Workslop’ Is Destroying Productivity.” [HBR]

Workslop is when a coworker sends you some obvious AI-generated trash and you have to spend your time redoing the whole thing. They save time by wasting your time.

Workslop is a result of top-down AI mandates, Gerard says. However, the report identifying the trend is an “unlabeled advertising feature” for enterprise AI, not a real study. The report blames workers, but bad management is the real culprit.

The real (economic) AI apocalypse is nigh

Cory Doctorow: “… a third of the stock market is tied up in seven AI companies that have no way to become profitable and … this is a bubble that’s going to burst and take the whole economy with it …”

I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don’t get it, they will halt the flow of billions of dollars. Anything that can’t go on forever eventually stops.

Cory’s advice to Cornell University, during a visit to lecture there:

I told them that they should be planning to absorb the productive residue that will be left behind after the bubble bursts:

https://locusmag.com/feature/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

Plan for a future where you can buy GPUs for ten cents on the dollar, where there’s a buyer’s market for hiring skilled applied statisticians, and where there’s a ton of extremely promising open source models that have barely been optimized and have vast potential for improvement.

There are plenty of useful things you can do with AI. But AI is (as Princeton’s Arvind Narayanan and Sayash Kapoor, authors of AI Snake Oil, put it) a normal technology:

https://knightcolumbia.org/content/ai-as-normal-technology

That doesn’t mean “nothing to see here, move on.” It means that AI isn’t the bow-wave of “impending superintelligence.” Nor is it going to deliver “humanlike intelligence.”

It’s a grab-bag of useful (sometimes very useful) tools that can sometimes make workers' lives better, when workers get to decide how and when they’re used.

That’s what a big institution should do. But what about individuals? That’s something I’ve been thinking about, and getting nowhere.