Cast your mind back 12 months and, generally speaking, it probably feels like everything has changed. The pace of innovation is far faster than I ever anticipated – driven largely by AI – and it shows no signs of slowing down.

Just a year ago we were predicting that AI would one day be able to generate video on demand. Three months later, the prediction was a reality. Compute density and compute capacity have more than doubled in less than 12 months. We’ve never seen anything quite like this – and it makes predictions both hard and easy.

For one, we might speculate that an innovation will take years, and that the first green shoots will emerge next year (first prediction below, I’m looking at you). But at the current pace, trends we would normally track over the long term may emerge almost in full within a calendar year. So in that respect, predictions are hard… but they’re also somewhat easier now than ever.

Name almost any innovation, and there’s a chance of it happening. With that in mind, I’ll avoid predicting something completely speculative and out of the box like “AI holograms will replace pilots” – and now that I’ve said that, please: no one try to invent it. Instead, I’ll stick to predictions for which I’ve already seen inklings of emergence.


The first green shoots at artificial general intelligence (AGI) will emerge by mid-2025

Unlike generative AI, which needs a prompt before it proceeds, AGI refers to AI that can learn, think, and act – in some cases autonomously. It can teach itself, and in theory undertake tasks it hasn’t been trained on.

This advance is being driven by something akin to a modern-day space race: AI companies such as Google and OpenAI are developing increasingly sophisticated large language models (LLMs), which are essential components in the pursuit of AGI.

We’re beginning to see the emergence of multimodal AI systems, which are capable of processing and generating multiple data types such as text, video, images, and audio. This is seen as a step toward more generalized intelligence, and those first steps are already being taken.

Amazon, for instance, is creating a new multimodal AI system called Olympus that focuses on image and video analysis. The race between Google and OpenAI continues apace in this realm, with OpenAI launching GPT-4o (“omni”) in May and Google responding quickly with its own multimodal AI system, Project Astra.

As these advances continue to emerge, the building blocks of AGI will begin to coalesce and evolve. As OpenAI’s CEO recently stated, from here AGI is “just an engineering problem.”

It’s an engineering problem that is eminently solvable. What’s needed is a re-architecture of the infrastructure that will enable it: data centers customized for AI, with far more bandwidth, are now on the menu for new builds – and those builds are starting as we speak.

Agentic AI workflows to become the AI du jour for enterprises

In the past, chatbots answered questions. Then they became assistants that do certain tasks well, like Copilot. But chatbots remain basic assistants, albeit very effective and efficient ones.

We’re ready to move beyond basic now, and what we’re seeing is an evolution towards a digital co-worker – an agent. Agents are digital co-workers that will help us do research, write a text, and then publish it somewhere.

So you set the goal – let’s say, run research on telco and networking predictions for next year – and an agent does the research, runs it by you, and then pushes the result to wherever it needs to go to be reviewed, edited, and more. You provide it with an outcome, and it chooses the best path to get there.
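The goal-to-outcome loop described above can be sketched in a few lines of Python. The step functions here (`research`, `draft`, `publish`) are hypothetical stand-ins for LLM-backed tools, not any real product API; the point is the shape of the workflow, where the user supplies an outcome and the agent chooses and executes the path.

```python
# Minimal sketch of an agentic workflow. The step implementations are
# hypothetical placeholders standing in for LLM-backed tools.

def research(topic: str) -> str:
    # Stand-in for an LLM-backed research step.
    return f"notes on {topic}"

def draft(notes: str) -> str:
    # Stand-in for a drafting step that turns notes into text.
    return f"draft based on: {notes}"

def publish(text: str) -> str:
    # Stand-in for pushing the result onward for review and editing.
    return f"submitted for review: {text}"

def run_agent(goal: str) -> str:
    """Given an outcome, pick a sequence of steps and run them in order."""
    plan = [research, draft, publish]  # the agent's chosen path to the outcome
    result = goal
    for step in plan:
        result = step(result)
    return result

print(run_agent("telco and networking predictions"))
```

A real agent would choose and re-order the plan dynamically (and loop back on failures) rather than follow a fixed list, but the contract is the same: the human specifies the outcome, the agent owns the steps.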

Right now, chatbots are really an enhanced search engine with creative flair. But agentic AI is the next stage of evolution, and it will be used across enterprises as early as next year. This will require increased network bandwidth and deterministic connectivity, with compute closer to users – but these essentials are already being rolled out as we speak, ensuring agentic AI is firmly on the enterprise agenda in the new year.

Growing concerns over data privacy will lead to more scrutiny of decentralized learning

Amid the AI rush, we’ve been focused on outcomes rather than the practicalities of how the data being generated is accessed and stored. But concerns are emerging. Where does the data go? Does it disappear into a big cloud?

Concerns are being raised in many sectors, particularly in the medical space, where medical records cannot leave state or national borders. But say a healthcare provider wants to analyze MRI images; it needs to train on those images in order to build a larger AI that can actually produce better analytical results.

But how can they possibly do that with a decentralized model where the data can quite easily cross borders?

In the new year, organizations concerned with privacy will start to adopt more centralized models – in effect, ensuring AI training is done specifically and securely in the organization’s home state or country. The models will share only what they learn, so the primary data used for training stays within the required borders.

To circle back to the health sector example, this can be effective for early disease detection, or even genome mapping, which relies on a lot of base data – that base data never leaves the centralized model, yet the lessons gleaned can be shared beyond it.
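The "share what you learn, not the data" pattern described above is essentially federated averaging. A toy sketch, with an illustrative one-parameter model and made-up numbers: two regions each fit y = 3x on data that never leaves them, and only the learned weights are pooled.

```python
# Toy sketch of federated averaging: each region trains on local data that
# never leaves it; only the learned model weights are shared and averaged.
# The model, data, and learning rate are illustrative, not a real system.

def local_train(w, local_data, lr=0.05):
    """One local SGD pass fitting y = w * x, using only this region's data."""
    for x, y in local_data:
        w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

def federated_round(global_w, regions):
    """Each region trains locally; only the learned weights leave the region."""
    local_ws = [local_train(global_w, data) for data in regions]
    return sum(local_ws) / len(local_ws)  # average the learning, not the data

# Two "sovereign" datasets, both following y = 3x, that are never co-mingled.
region_a = [(1.0, 3.0), (2.0, 6.0)]
region_b = [(3.0, 9.0), (4.0, 12.0)]

w = 0.0
for _ in range(20):
    w = federated_round(w, [region_a, region_b])

print(round(w, 2))  # the shared model converges toward the true slope, 3.0
```

Real deployments add secure aggregation and differential privacy on top, since even shared weights can leak information, but the border-respecting structure is the same: raw records stay put, and only the learning travels.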

As a result, you’ll see more sovereign training centers emerge here and abroad, with no connection to one another. We have often seen a move away from silos, but privacy concerns ensure that, at least when it comes to training, silos will re-emerge.

As you can see, predictions aren’t easy – but much like the minds behind the innovation itself, it pays to be bold.