Generative Artificial Intelligence (AI) has the potential to unlock trillions of dollars in value for businesses and radically transform the way we work. The groundbreaking technology has already inserted itself into nearly every sector of the global economy, as well as many aspects of our lives, with people already using AI to query their bank statements and even request medical prescriptions. Current predictions suggest generative AI could automate activities that absorb up to 70 percent of employees’ time today.

But regardless of the application or industry, the impact of generative AI can be most keenly felt in the cloud computing ecosystem.

As companies rush to leverage this technology in their cloud operations, it is essential to first understand the network connectivity requirements – and the risks – before deploying generative AI models safely, securely, and responsibly.

Accessing datasets

By their very definition, large language models (LLMs) are extremely large, so training them requires vast amounts of data and hyper-fast computing – and the larger the dataset, the greater the demand for computing power.

One of the primary connectivity requirements for training generative AI models in public cloud environments is affordable access to datasets at scale, yet the enormous processing power required to train these LLMs is only one part of the jigsaw. Alongside this, companies must also manage the sovereignty, security, and privacy requirements of the data transiting their public cloud.

In 2022, 39 percent of businesses experienced a data breach in their cloud environment. With this in mind, it makes sense to explore the private connectivity products on the market that have been designed specifically for high-performance and AI workloads.

Emerging regulatory trends in the landscape

The maze of regulatory frameworks globally is very complex and subject to change so companies should pay close attention to the key public policies and regulation trends which are rapidly emerging around the AI landscape.

Companies are now required to implement techniques such as data mapping and data loss prevention to make sure they know where all personal data is at all times and can protect it accordingly. This privacy-by-design approach has been adopted not only by the developing mandates of the General Data Protection Regulation (GDPR) in Europe but also by data privacy laws in the United States.

Imagine a multinational New York bank that houses its primary computing capacity on 50 on-premises mainframes. The bank wants to run AI analysis on that data, but it cannot use the public internet to connect to cloud environments because many of its workloads are subject to regulatory constraints. Private connectivity, by contrast, gives it access to generative AI capability within the business's local regulatory frameworks.

Maintaining data sovereignty

As AI legislation continues to expand, the widespread adoption of generative AI technology will likely create long-lasting challenges around data sovereignty. As the world becomes more digitally interconnected, nations are defining and regulating where data can be stored and where the LLMs processing that data can be housed. The only way your company may be able to guarantee that data stays within its sovereign border is to use a form of private connectivity while the data is in transit.

The same applies to training AI models on the public cloud: companies will need some form of private connectivity from their private cloud to the public cloud where they train their models, and then use that same connection to bring the resulting inference models back.

One thing to note is that even though some national laws require certain data to remain within the country’s borders, this does not necessarily make it more secure. For example, if your company uses the public internet to transfer customer data to and from London on a public cloud service, even though it may be traveling within London, somebody can still intercept that data and route it elsewhere around the world.

The importance of latency and network congestion

With the volume of voice and video calls that we experience daily, we have all become latency-sensitive. What some do not realize is that latency is just as critical in people's interactions with AI systems. Likewise, the massive datasets used for training AI models can lead to serious latency issues on the public cloud.

As an example, if you’re chatting with an AI bot that's providing you customer service and latency begins to exceed 10 seconds, the dropout rate accelerates. Using the public internet to connect your customer-facing infrastructure with your inference models therefore risks a degraded online experience, and variable response times could undermine your ability to provide meaningful results.
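To make the point concrete, here is a minimal sketch of enforcing a latency budget around an inference call. The 10-second budget comes from the dropout observation above; the function and endpoint names are hypothetical, and a stand-in function simulates the model call.

```python
import time

# Hypothetical latency budget for a customer-facing chat bot, based on the
# observation that dropout accelerates once responses exceed ~10 seconds.
LATENCY_BUDGET_SECONDS = 10.0

def call_with_latency_check(inference_fn, prompt):
    """Time a call to an inference function and flag budget violations."""
    start = time.monotonic()
    reply = inference_fn(prompt)
    elapsed = time.monotonic() - start
    within_budget = elapsed <= LATENCY_BUDGET_SECONDS
    return reply, elapsed, within_budget

# Stand-in for a real model endpoint: here we simply simulate a network
# round trip plus inference time with a short sleep.
def fake_model(prompt):
    time.sleep(0.05)
    return f"echo: {prompt}"

reply, elapsed, ok = call_with_latency_check(fake_model, "hello")
print(ok)  # True: 0.05 s is well under the 10 s budget
```

In production, the same wrapper could feed a monitoring dashboard, so a drift in response times over a public internet path is caught before customers start dropping off.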

Meanwhile, network congestion could impact your ability to build models on time. The way to overcome this is to provision high-bandwidth links – large pipes – so that you don't encounter congestion when moving your primary datasets to where you're training your language model, especially when transferring fresh data into LLMs, which can otherwise create a backlog.
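A back-of-the-envelope calculation shows why pipe size matters for training timelines. This sketch uses illustrative numbers (a 100 TB corpus, 1 Gbps vs. 100 Gbps links) and assumes the link is fully available, with no congestion or protocol overhead:

```python
# Rough transfer-time estimate for moving a training dataset over a network link.
# All figures are illustrative; real throughput is reduced by congestion,
# protocol overhead, and contention with other traffic.

def transfer_time_hours(dataset_gb: float, link_gbps: float) -> float:
    """Hours to move dataset_gb gigabytes over a link of link_gbps gigabits/s,
    assuming the full link capacity is available."""
    dataset_gigabits = dataset_gb * 8  # bytes -> bits
    return dataset_gigabits / link_gbps / 3600

# A 100 TB training corpus over a shared 1 Gbps internet path
# versus a dedicated 100 Gbps private pipe:
print(round(transfer_time_hours(100_000, 1), 1))    # 222.2 hours (over 9 days)
print(round(transfer_time_hours(100_000, 100), 1))  # 2.2 hours
```

The two-orders-of-magnitude gap is the difference between refreshing training data weekly and refreshing it the same morning.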

The negative consequences of improper AI governance

AI governance is under intense discussion right now because, without it, companies face serious consequences that may result in commercial and reputational damage.

A lack of supervision when implementing generative AI models on the cloud could easily lead to errors and violations, not to mention the potential exposure of customer data and other proprietary information. Simply put, the trustworthiness of generative AI depends on how companies use it. In other words, who gets access to the data, and how is the approval of that access traced?

The untold opportunities of Generative AI

Generative AI is a transformative field, but IT leaders must get their network connectivity right before deploying its applications.

It is essential to define your business needs in relation to your existing cloud architecture because data accessibility is everything when it comes to Generative AI. Instead of navigating the risks of the public cloud, the high-performance flexibility of a Network-as-a-Service (NaaS) platform can provide forward-thinking companies with a first-mover advantage.

For example, a NaaS solution incorporates the emerging network technology that supports the governance requirements of generative AI for both your broader business and the safeguarding of your customers.

The agility of NaaS makes it simpler to adopt AI systems by interconnecting your clouds with a global network infrastructure that delivers fully automated switching and routing on demand.