Generative Artificial Intelligence (GenAI) and applications such as ChatGPT have become the latest tech buzzwords.
Gartner predicts GenAI will have a profound impact on business and society, positioning it at the Peak of Inflated Expectations in its Hype Cycle for Emerging Technologies, 2023, and projecting that it will deliver transformational benefit within two to five years.
Despite being met with excitement by many, AI anxiety is spreading far and wide as people consider the impacts it could have on future jobs – and the potential security implications surrounding data.
You don’t have to look far through today’s headlines to find news surrounding AI – in particular, how the Government plans to tackle its safety implications.
Earlier this year, Prime Minister Rishi Sunak announced the UK’s plans to host the first major global summit, aimed at mitigating the risks of mass AI adoption. Meanwhile, the EU Data Protection Supervisor has created a task force to assess GenAI systems.
With security and data sovereignty remaining key concerns, this raises the question – what is the reality of this cutting-edge technology in today’s business landscape?
GenAI in business
With the right know-how, GenAI promises to create conversational interfaces that engage customers in a more personalized way, leading to improved customer satisfaction and loyalty.
In the future it could also, when combined with other technologies and processes, empower organizations to provide round-the-clock support, greater efficiency, and data-driven insights to improve future products and services.
The truth is – the concept of AI systems generating content predates the specific terminology used to describe it, and various related terms and ideas have been present in AI research for decades.
Organizations have been reaping the benefits of natural language models within the contact center for some time now, optimizing customer experience whilst boosting productivity.
While tools such as ChatGPT certainly represent a step forward, chatbots as a whole have existed for many years, allowing customers to self-serve and find answers to common questions through AI-powered self-service portals.
Although GenAI tools such as these can be used for diverse purposes – including customer support, lead generation, and natural language processing – the context in which they are used must be considered carefully.
For example, using AI to personalize content for customers at scale carries minimal risk. For a high-risk organization, however – an NHS trust dealing with a patient, say, or a local council handling a social care case – relying solely on an AI decision could lead to disaster.
To reap the benefits on offer, it is important to acknowledge that (a) a one-size-fits-all approach to GenAI simply won’t cut it, and (b) humans must stay in the loop for it to be safe and effective.
As IBM’s Ginni Rometty put it, ‘augmented intelligence’ is a more accurate term – rather than replacing people, AI is a technology that augments human intelligence.
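The human-in-the-loop idea can be made concrete with a small routing rule. The sketch below is illustrative only – the `Draft` type, field names, and confidence threshold are assumptions, not any vendor's API – but it shows the principle the article describes: routine, high-confidence replies can be automated, while anything high-risk always goes to a person.

```python
from dataclasses import dataclass

# Hypothetical sketch: names and the threshold value are illustrative.

@dataclass
class Draft:
    text: str
    confidence: float  # model's estimated confidence, 0.0-1.0
    high_risk: bool    # e.g. a healthcare or social-care context

def route_draft(draft: Draft, threshold: float = 0.9) -> str:
    """Decide whether an AI-drafted reply may be sent automatically.

    High-risk cases and low-confidence drafts always go to a person.
    """
    if draft.high_risk or draft.confidence < threshold:
        return "human_review"
    return "auto_send"

# A routine, high-confidence reply can go out automatically...
assert route_draft(Draft("Thanks for your order!", 0.97, high_risk=False)) == "auto_send"
# ...but anything touching a patient or social-care case never does.
assert route_draft(Draft("Your care assessment...", 0.99, high_risk=True)) == "human_review"
```

The key design point is that the risk flag overrides confidence entirely: no model score, however high, can bypass human review in a high-risk context.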
Be aware of the risks
A lack of human oversight also raises wider concerns about the viability of tools such as ChatGPT. AI models are prone to inheriting biases present in their training data, which can lead to biased or discriminatory responses to user inputs.
Misinformation is another key concern when an AI model is not carefully supervised or fine-tuned: a model that does not fully understand the context of a question can produce a confident but inappropriate or inaccurate response.
The biggest risk for organizations, however, is the impact GenAI could have on data. As the hype around AI continues to grow, so does the apprehension surrounding security and privacy.
Last month, Snapchat was issued a preliminary enforcement notice over a potential failure to properly assess the privacy risks of its generative AI chatbot ‘My AI.’ The UK Information Commissioner’s investigation found that Snap had failed to adequately identify and assess the risks to several million ‘My AI’ users in the UK, including children aged 13 to 17.
Businesses need to be cautious about the data they feed into AI models – by whom and where it is processed (i.e., whether it is a controlled environment) – ensuring compliance with relevant data protection laws and regulations to protect customer privacy.
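One common precaution is to strip obvious personal identifiers from text before it leaves a controlled environment. The sketch below is a simplified illustration, not a complete PII scrubber – the patterns and labels are assumptions for the example, and a real deployment would use a vetted redaction tool rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only: real PII detection needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b0\d{3}\s?\d{7}\b"),  # simplified UK-style number
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tags
    before the text is sent to an externally hosted model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com on 0121 4960000 asked about her refund."
print(redact(prompt))
# Customer [EMAIL] on [PHONE] asked about her refund.
```

Redacting before the model call, rather than after, means the sensitive values never reach the third-party processor at all – which is the property data protection regulators care about.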
Whilst existing regulations such as GDPR are already applicable to GenAI systems, additional regulation surrounding AI will be critical to securing its future. Establishing guidelines and a set of standards for protecting public data will facilitate responsible AI development and implementation within the enterprise.
Making AI safe and applicable
Whilst some of the hype surrounding GenAI can be justified, the hype itself has been useful for two reasons. Firstly, the exposure has raised awareness of data sovereignty and privacy concerns when adopting new tools and technologies. Secondly, it has pushed businesses to think about their own AI usage.
Automating intelligently, however, requires businesses to consider what they want to achieve, and how. Making sure AI usage is safe and applicable must remain at the forefront of any transformation journey.
Platform-as-a-Service tools such as low-code platforms – designed to be open and interoperable – can support the implementation of ChatGPT and other GenAI applications. They provide pre-built AI components, integration capabilities, and an easy-to-use visual interface that lets both developers and non-technical users design and deploy AI-powered applications.
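The value of a pre-built component is that it hides the integration details behind one simple interface. The sketch below is hypothetical – the function names are invented and the model call is stubbed out rather than wired to any real service – but it shows the shape of the abstraction such platforms offer.

```python
# Hypothetical sketch of a pre-built "AI component": the model call is a
# stub; a real platform would route it to a governed, compliant endpoint.

def call_model(prompt: str) -> str:
    """Stand-in for a managed GenAI service."""
    return f"(model reply to: {prompt})"

def answer_customer(question: str, *, max_chars: int = 280) -> str:
    """Pre-built component: prompt templating, the model call, and
    output limits are packaged behind one function, so a non-technical
    builder never touches the underlying service."""
    prompt = f"Answer this customer question politely: {question}"
    return call_model(prompt)[:max_chars]

print(answer_customer("Where is my order?"))
```

Because the model endpoint, prompt template, and output policy live inside the component, the platform operator can change or constrain them centrally without breaking the applications built on top.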
For enterprises that implement AI in this manner, there are benefits to be had. Whilst the reality may be smaller than expected (at this stage) – the impact on customer and employee experience can still be significant if implemented effectively.
By adopting platforms that empower the simple and secure integration of AI, businesses can start experimenting with GenAI, taking small but safe steps towards their transformation goals.
More on generative AI
- Generative AI & the future of data centers: Part I - The Models
  A seven-part article on what large language models and the next wave of workloads mean for compute, networking, and data center design
- Generative AI & the future of data centers: Part II - The Players
  Behind generative AI and its impact on the industry
- Generative AI & the future of data centers: Part III - The Supercomputers
  What's left for HPC in the world of generative AI?