The AI Layoffs
For most of the economy, the "AI" recession looks much like other recessions. That's good news for anxious policy-makers; at least they know what to expect. The much bigger hazard will be the financial wildfire that starts in the tech industry itself.
It isn't all that hard to predict the limitations of an emerging technology. What's hard is to anticipate how it will interact with the economy and society.
The reality of the Metaverse was computer screens that you wore on your face. The resolution and frame rate of these screens improved for a few years, much like non-face-attached monitors. However, seeing motion that you can't feel makes you sick. Extending the simulation in a way that eliminates the mismatch is not possible under currently known laws of physics. So all of the marketing in the world can't change the fact that the product makes a lot of people barf. It is an inherent limitation of the technology.
The Metaverse faded out into a handful of niche VR and AR products, none of which are big business. There are some industrial applications, but medical applications are bogged down in the regulatory regime that takes 20 years to license much simpler devices.[1] There isn't even much interest from the game industry at this point. The "makes users barf" problem might have been surmountable had it not been for the "solves no real problem" problem.
Likewise, it was obvious from the beginning that Bitcoin could not replace Visa, let alone the US dollar. It consists of a distributed database scheme with a transaction settlement mechanism that is millions of times slower and more expensive.[2] There is no way to improve or work around this problem while still retaining the properties that make Bitcoin different from Visa. It is an inherent limitation of the technology.
There was never going to be any way to get people to wait half an hour for their payment to clear at Walmart. Neither is there any reason for a normal person doing a large transaction to put up with the bizarre[3] hazards[4] that the dollar system solved a century ago. By any technical analysis, Bitcoin should have disappeared. But it didn't quite. It devolved into a weird bazaar hanging around the fringes of the normal financial system, mostly populated by scammers, and a payment system for large crimes. The reaction of the regulatory system couldn't be predicted from the properties of Bitcoin -- in particular, the twist where the crime and grift were adopted by the president of the United States and his family.
Now we have the large language model. In the beginning there was more uncertainty about what it could do than with Bitcoin or VR goggles. Nonetheless, most of the serious people with real AI expertise who were not being paid to say otherwise knew from the beginning that LLMs would never solve the "hallucination" problem.[5] And that they would never lead to "general intelligence" of any form that we find interesting. These are limitations inherent to the technology.
A machine that can churn out blocks of words but can never reliably answer any question doesn't have much use to a business. So the LLM is yet another big-tech product that exists because big tech would really like it if you would send them more money, and not really any other reason. But as we have seen, being useless doesn't mean it will go away.
Here in the 4th year of the Sam-altmanocene, it looks like there are two universes. About half of LinkedIn consists of people who are unhappy that their environment is overrun with GenAI slop. People trying to hire get slop applications. People looking for work are discouraged by slop postings. Cybersecurity people are dealing with both faster exploits and a torrent of slop problem reports. Government procurement people put out slop RFPs and get slop responses.
The other half is relentlessly pounding the claim that their LLM subscription is changing the world, and that if you don't do the same you will be left behind. This stream is highly dubious,[6] but there are at least some actual people who are GenAI true believers.
We are at an unstable in-between stage where both realities exist. Fundamentally, what's happened is that writing used to be a lot more work than reading, and now it isn't. "Knowledge work" was structured around the assumption that writing served as proof of work. A very general pattern is that many people generate a kind of raw material (a rough draft, a job application, a demo tape) that goes to one person who applies a kind of editorial judgment. This is stable when the reading is fast and the writing is slow. It's breaking down because slop is not good enough to stand on its own, but it is good enough to waste the reviewer's time. This type of adoption has the net effect of shifting burdens onto the people who were already functioning as quality control, either because it's the structure of their job or because they just care more. That's why, much more than VR or Bitcoin, people who don't want to use LLMs actively hate them.
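The arithmetic behind that breakdown can be sketched with toy numbers. Everything here is hypothetical and chosen only to show the shape of the effect: reading cost per submission stays fixed while generation cost collapses, so the reviewer's total burden scales with volume.

```python
# Toy model of the reviewer bottleneck: the cost to read one submission
# is unchanged by GenAI, but the cost to produce one has collapsed.
# All numbers are hypothetical, purely illustrative.

READ_MINUTES = 5  # reviewer minutes per submission (the same before and after)

def reviewer_load(submissions: int) -> int:
    """Total reviewer minutes needed to filter a stack of submissions."""
    return submissions * READ_MINUTES

# Before: a tailored application costs ~2 hours to write, so one job
# opening might draw 40 of them. After: generation costs minutes, so
# volume can plausibly rise by an order of magnitude or more.
before = reviewer_load(40)    # hand-written era: 200 reviewer minutes
after = reviewer_load(1000)   # slop era: 5000 reviewer minutes, a 25x burden
print(before, after)
```

The point of the sketch is that the filter, not the producer, absorbs the entire cost of the new volume: the "proof of work" that used to ration the reviewer's time is gone.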
LLMs used as code generators are at a slightly more sophisticated stage of adoption. The tradeoff is still basically "five hours of debugging can save you an hour of writing code," but strong automatic testing improves the ratio for some people. Code generators have their own long history that gives some clues about how this will proceed. Suffice it to say that from the perspective of Layer Aleph -- which gets called when the complexity is overwhelming, the stakes are high, and the usual playbooks aren't working -- LLMs are exciting, because organizations are digging themselves into unsustainable holes more quickly.
An "edge AI" world in which the common worker phones in their job with a 20-minute ChatGPT session that becomes hours of work for their boss will not last. If the chatbot doesn't add enough value, people will stop using it. If it adds too much, the corporation will take it for itself. There isn't an in-between Goldilocks zone because in fact the zones overlap: thanks to the Dunning-Kruger effect, everyone believes GenAI is more capable of replacing other people's jobs than their own.
And so, we've had a year of CEOs declaring "AI" layoffs that are ostensibly caused by productivity gains that don't exist. No doubt there is a spectrum from CEOs who actually fell for the hype to those who know that the layoff itself is what will boost profits in the short term. Remember that the cycle was jump-started by the big tech companies themselves, who were widely seen as bloated and in need of a correction.[7] In a bad enough job market, most businesses can cut staff and force the survivors to take on more work, which they will, because they are afraid of being the next ones fired. Each layoff makes the job market worse, so it is a self-accelerating cycle. For most industries, the next few years are probably a very normal-looking recession.
In the tech industry itself, the AI boom came from the same place as web3 and the Metaverse. Overextended companies have saturated the attention-selling business, have no other ideas, and are under increasing pressure to meet "line go up" financial demands. They are desperate. Here are two strong pieces of supporting evidence:
- The attention-selling platforms are squeezing so hard for the last few pennies that they are killing themselves. Google search, Facebook, Instagram, and the thing formerly called Twitter are all now bad experiences that are getting worse. More and more of those ad clicks come from bots posting slop to be "seen" by other bots. It's not even a secret anymore that around 10% of Facebook's revenue is skimmed from outright fraud.[8]
- OpenAI gave up the game when it announced that it was pivoting to focus on ads and "erotica." This is an astonishing admission from a company that claims to have invented a machine that can outperform humans across the entire world of work. If you had a machine that did that, why would you rent it at all, when you could just, like, run successful businesses? But what OpenAI is actually doing to monetize it is selling ads and porn, and wheedling people to buy $20 monthly subscriptions. Does that sound more like the next era for humankind, or a retread of the last one?
If, for most of the economy, the "AI" recession continues to look much like other recessions, that's good news for anxious policy-makers. At least they know what to expect. The much bigger hazard will be the financial wildfire that starts in the tech industry itself. With trillions of dollars in leveraged financing, much of it circular, Enron is not a good model because it's too small. The 2008 subprime mortgage meltdown is also too small, but it's closer. The real existential "AI" questions, whether you are an investor or a policy-maker, are: Who will be the Lehman Brothers and Bear Stearns that get zeroed out? Who will be the JP Morgan that takes over the assets for pennies on the dollar, probably assisted by a giant bailout? And when?
https://onezero.medium.com/nfts-arent-as-stupid-as-you-think-bffab89697e3 ↩︎
https://dig.watch/updates/crypto-exchange-40bn-bitcoin-payout-error ↩︎
https://futurism.com/future-society/bitcoin-guthrie-ransom ↩︎
Also me, but I have not claimed to have real AI expertise or to be a serious person. ↩︎
It's mostly paid promotion, and obviously many of the testimonials are themselves GenAI slop. Not to mention: if you are doing so great, why do you care what I do? Why don't you just win by being a better programmer or whatever? ↩︎
https://www.wsj.com/economy/jobs/u-s-companies-are-still-slashing-jobs-to-reverse-pandemic-hiring-boom-abf1b94e ↩︎
https://finance.yahoo.com/news/meta-earned-16-billion-scam-121500409.html ↩︎