Even by the breathless standards of previous technology hype cycles, the generative artificial intelligence enthusiasts have been hyperventilating hard.
Trillion-dollar companies, including Alphabet and Microsoft, declare that AI is the new electricity or fire and are re-engineering their entire businesses around it. Never knowingly outhyped, venture capital investors have been pumping money into the sector, too. Fifty of the most promising generative AI start-ups, identified by CB Insights, have raised more than $19bn in funding since 2019. Of these, 11 now count as unicorns with valuations above $1bn.
Even the sober suits at McKinsey estimate that the technology could add between $2.6tn and $4.4tn of economic value annually across the 63 use cases it analysed, ranging from banking to life sciences. In other words, in very rough terms, generative AI could create a new UK economy every year (the country’s gross domestic product was $3.1tn in 2021).
But what if they are wrong? In a series of provocative posts, the technologist Gary Marcus explores the possibility that we could see a “massive, gut-wrenching correction” in valuations as investors realise generative AI does not work very well and lacks killer business applications. “The revenue isn’t there yet, and might never come,” he writes.
Marcus, a co-founder of the Center for the Advancement of Trustworthy AI who testified before the US Congress this year, has long been sceptical of the intelligence of the neural network models that preceded the latest chatbots, such as OpenAI’s ChatGPT. But he raises some fresh truths about generative AI, too. Take the unreliability of the models themselves. As is now clear to millions of users, one of the technology’s biggest drawbacks is that it hallucinates — or confabulates — facts.
In his earlier book Rebooting AI, Marcus provides a neat example of how this can happen. Some AI models operate as probabilistic machines, predicting answers from patterns of data rather than exhibiting reasoning. A French speaker would instinctively understand Je mange un avocat pour le déjeuner as meaning “I eat an avocado for lunch”. But, in its early iterations, Google Translate rendered it as “I’m going to eat a lawyer for lunch”. In French, the word avocat means both avocado and lawyer. Google Translate picked the most statistically probable translation, rather than one that made sense.
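The mechanism behind that mistranslation can be caricatured in a few lines of code. This is a deliberately simplified sketch with invented probabilities, not how Google Translate actually works: it just shows how a purely statistical system picks the likeliest candidate with no check on whether it makes sense.

```python
# Toy model of a probabilistic translator facing the ambiguous French
# word "avocat" (lawyer or avocado). The probabilities are hypothetical,
# chosen only to illustrate the mechanism described above.

candidate_translations = {
    "I'm going to eat a lawyer for lunch": 0.62,  # statistically likelier in the corpus
    "I eat an avocado for lunch": 0.38,           # the sensible reading, but rarer
}

def most_probable(candidates: dict[str, float]) -> str:
    """Return the highest-probability candidate, with no semantic sanity check."""
    return max(candidates, key=candidates.get)

print(most_probable(candidate_translations))
# Prints the "lawyer" sentence: pure pattern-matching outputs the
# statistically likelier string, even when it is real-world nonsense.
```

A human translator rules out the absurd reading instantly; a system that only ranks strings by frequency cannot.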
The tech companies say they are reducing errors by improving the contextual understanding of their systems (Google Translate renders that French sentence accurately now). But Marcus argues hallucinations will remain a feature, rather than a bug, of generative AI models, unfixable using their current methodology. “There is a fantasy that if you add more data it will work. But you cannot succeed in crushing the problem with data,” he tells me.
For some users, this inbuilt unreliability is a deal-breaker. Craig Martell, the US Department of Defense’s chief AI officer, said last week he would demand a “five 9s” [99.999 per cent] level of accuracy before deploying an AI system. “I cannot have a hallucination that says ‘Oh yeah, put widget A connected to widget B’ — and it blows up,” he said. Many generative AI systems placed too high a “cognitive load” on the user to determine what was right or wrong, he added.
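To put Martell’s threshold in concrete terms, here is some back-of-envelope arithmetic on “N nines” accuracy levels. The helper function is purely illustrative, not any real Department of Defense metric.

```python
# "Five 9s" means 99.999 per cent accuracy. This converts a nines level
# into the number of errors tolerated across a given number of outputs.

def max_errors(nines: int, outputs: int) -> float:
    """Errors tolerated in `outputs` attempts at an N-nines accuracy level."""
    return outputs / 10 ** nines

# Five 9s: a single error allowed per 100,000 outputs.
print(max_errors(5, 100_000))  # -> 1.0
# Two 9s (99 per cent): a thousand errors in those same 100,000 outputs.
print(max_errors(2, 100_000))  # -> 1000.0
```

The gap between those two numbers is the gap between a copywriting assistant and a weapons system.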
Even more concerning is the idea that content produced by generative AI is polluting the data sets on which future systems will be trained, threatening what some have called “model collapse”. By adding more imperfect information and deliberate disinformation to our knowledge base, generative AI systems are producing a further “enshittification” of the internet, to use Cory Doctorow’s evocative term. This means models trained on those data sets will spew out more nonsense, rather than less.
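The “model collapse” dynamic can be simulated in miniature. The following is a toy statistical illustration, not an experiment on real language models: each generation fits a Gaussian to samples drawn from the previous generation’s fit, standing in for a model trained on its predecessor’s output. Finite-sample noise steadily erases the distribution’s diversity.

```python
import random
import statistics

# Generation 0 is the "true" data distribution. Each subsequent
# generation "trains" (fits a Gaussian) on a tiny sample drawn from
# the previous generation's fit, mimicking models trained on
# model-generated data.

random.seed(0)
mean, std = 0.0, 1.0
for generation in range(200):
    samples = [random.gauss(mean, std) for _ in range(5)]  # tiny training set
    mean, std = statistics.mean(samples), statistics.stdev(samples)

# The fitted spread collapses toward zero: later "models" can only
# reproduce an ever-narrower slice of the original data.
print(f"std after 200 generations: {std:.6f}")
```

With a sample of five per generation the collapse is fast and visible; larger samples slow it down but the same downward drift in diversity remains.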
Undaunted, investors typically make three arguments about how to make money out of generative AI. First, even with its imperfections, they say, it can still be a valuable productivity tool, accelerating the industrialisation of efficiency. There are also many uses, ranging from copywriting to call centre operations, where a “two 9s” [99 per cent] level of accuracy is good enough.
Second, investors are betting that some companies can deploy generative AI models to solve narrow, real-world problems. The latest advances in AI allow data to be analysed in real time, says Zuzanna Stamirowska, chief executive of the French start-up Pathway, helping to optimise maritime trade or the performance of aero engines, for instance. “We really focus on business use cases,” she says.
Third, generative AI models will enable the creation of new services and business models as yet unimagined. During the mass electrification of the economy in the late 19th century, companies profited from generating and distributing electricity. But the serious fortunes were made later by using electricity to transform ways of manufacturing things, such as steel, or inventing wholly new products and services, including domestic appliances.
For the moment, it is only the cloud computing providers and chip manufacturers that are really minting money in the generative AI boom. Doubtless, Marcus will also be proved right that much of the corporate money thrown at the technology will be wasted and most start-ups will fail. But who knows what new stuff will be invented and endure? That is why God invented bubbles.