I’ve said it before, but it’s worth saying again. We all know deep down that the current wave of generative AI is a bubble of hype, and some of us fear that it is a dangerous one.

If you doubt that, consider the sight of Sam Altman, entrepreneur superstar, breaking the news to the World Economic Forum in Davos that the AI storm led by his vehicle OpenAI is going to demand so much energy that we will also need power from his fantasy fusion startup Helion.

Davos is exactly the right forum for a statement like that. Once a year the super-rich get together in a ski resort, attended by elected and unelected world leaders, and examine their navels. January is normally a news desert, so the world is treated to their latest ideas about how they can save it, through whatever means happens to be fashionable among the super-rich that year.

Microsoft says get on with it

OpenAI is now funded - let’s be honest, effectively owned - by Microsoft, which is building and bankrolling the vast cloud resources that ChatGPT wolfs down, and is ready to simply bring the whole thing in-house if anyone rocks the boat.

Microsoft is now worth $3 trillion, and CEO Satya Nadella got star treatment at Davos: an on-stage chat with World Economic Forum founder Klaus Schwab, and massive coverage of his thoughts on the technology - essentially, that we have to plow on with doing it, while also setting up regulations to catch any unintended consequences.

While he was saying that, Altman was certainly bigging up the "doing it" agenda, talking of growth in AI so big it needs fusion.

It’s Microsoft that has given Altman's Helion credibility, by signing a power purchase agreement to take 50MW of power from the fusion firm by 2028. Yes, five years to productize and deliver a technology that doesn’t yet work, in a sector where permitting cannot possibly take less than ten years.

Altman is right about the energy demands. If we accept that we “need” generative AI, then generative AI certainly needs energy. Its demands are now so large that data centers devoted to it are having to locate away from the traditional hubs, in places like Iceland and Norway, or at the Edge. The traditional hubs were already swamped by a giant boom; this is another boom on top of that.

And yes, I know that data center electricity use is still only a percentage point or two of overall electricity consumption, but we are pushing at efficiency limits, and the kind of growth now being predicted could plausibly use as much power as the Netherlands within a few years. That prediction, from Digiconomist's Alex de Vries, is based simply on Nvidia’s manufacturing and sales plans.
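The arithmetic behind that kind of estimate is strikingly simple, which is rather the point. Here is a rough sketch in Python of a de Vries-style calculation; the shipment volume and per-server power draw below are illustrative assumptions in the ballpark of his published figures, not his exact inputs.

```python
# Back-of-envelope, de Vries-style estimate of AI server power use.
# Assumed inputs (illustrative, not official figures):
#   ~1.5 million Nvidia-based AI servers shipped per year by 2027,
#   each drawing roughly 6.5 kW (A100-class) to 10 kW (H100-class),
#   running around the clock.

HOURS_PER_YEAR = 8760

servers = 1_500_000          # assumed annual server shipments
kw_low, kw_high = 6.5, 10.0  # assumed power draw per server, in kW

for kw in (kw_low, kw_high):
    twh = servers * kw * HOURS_PER_YEAR / 1e9  # kWh -> TWh
    print(f"{kw:>4} kW per server: ~{twh:.0f} TWh/year")

# Prints roughly 85 and 131 TWh/year. For scale, the Netherlands
# consumes on the order of 110 TWh of electricity per year.
```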

In that context, and in the fawning forum of Davos, a fantasy fusion project fits just right.

It’s only marginally more satirical than Del Complex, the teasing art project which proposed giant AI barges, moored outside territorial waters where the super-rich love to be, performing AI miracles and tinkering with human intelligence. As far as I can tell from Googling, Del Complex wasn’t actually at Davos, but it was there in spirit.

The industry likes to talk about net-zero, just as Davos devotees like to talk about ending excessive salaries and elitism. Actually delivering on either would involve some real changes.

But wait, why?

Beyond the obvious worries about energy, though, there is a real question: what are we doing with AI, and why?

This is a different question from the one behind the legislation Nadella is talking about, which guards against consequences like the bogeyman of artificial general intelligence (AGI). That fear, boosted by the likes of Elon Musk, is actually a paradoxical part of the AI bandwagon.

The real dangers are subtly explored by AI researcher and Cambridge professor of interdisciplinary design Alan Blackwell in a book coming out this year called Moral Codes.

He makes some salient points right from the start. Computers were supposed to automate drudgery, but that hasn’t happened. Too many jobs now are “bullshit jobs” made up of meaningless tasks, while generative AI is being groomed to do creative work, even though what generative AI actually does has been succinctly summed up by AI professor Rodney Brooks: “It just makes up stuff that sounds good.”

The information explosion is creating a surfeit of data, and the corollary of that is an attention deficit. Humans simply don’t have the cognitive capacity to understand and evaluate all the content and data being produced, so we need machines to do it for us: to read all the stuff we can’t and, in a massively unreliable way, tell us about it.

“There are two ways to win the Turing Test,” says Blackwell. “The hard way is to build computers that are more and more intelligent, until we can’t tell them apart from humans. The easy way is to make humans more stupid, until we can’t tell them apart from computers.”

I’m still absorbing the thoughts in Blackwell’s book. At the very least, it’s a set of signposts towards dealing with the moral consequences of AI.

But while that moral work remains undone, I have real doubts about the urgent need to press full steam ahead with the kind of giant AI projects we hear about so often.

Before the how, can we think about why?