In the near future, using advanced AI technologies like ChatGPT, Baidu's ERNIE, or Google's Bard to generate content will become commonplace, just as we use a calculator to perform calculations. Even though everyday use of this technology is still in its early stages, we can expect it to become as common a work tool as a word processor or spreadsheet.

[Image: Tung Nguyen, Pixabay]

But organizations should be mindful of how the implementation of AI chatbots in the workplace could create unease and confusion, and take proactive steps to ensure employee morale is not adversely affected. Any kind of automation can make people nervous and worried that it will take their jobs.

There are other downsides, too, such as the potential to expose organizations to new security threats. AI chatbots ought to facilitate smoother and more intuitive interactions between humans and machines, which would clearly benefit many applications, including workplace communication. However, that promise overlooks the fact that human behaviour isn't always positive: not everyone will use these tools in good faith.

According to a recent BlackBerry report, more than half of IT decision-makers believe there is a high likelihood of a ChatGPT-fuelled cyber attack occurring before the end of the year. Furthermore, 71 percent think that nation-states may already be using the technology against other countries. Is this paranoia? Or should security teams prepare for the chatbot effect?

Chatbot misuse

Boosting phishing scams

Phishing emails try to trick you into giving out sensitive information or installing malicious software. Sometimes these are sent en masse, like a net trawling for anything it can catch; other times they are far more targeted, a practice known as spear phishing. Using chatbots, criminals may have a higher chance of success, as crafting a convincing, personalised spear-phishing email becomes little more difficult than sending out non-targeted ones.

For context, chatbots have safeguards that mean they will refuse to write a phishing email if you ask them to. Hackers, however, are always looking to subvert these safeguards, and this new avenue of attack means that chatbots could make it far more difficult to distinguish legitimate correspondence from malicious content.

Malware attacks

The potential harm of chatbots is not limited to phishing; there is also the possibility of lower-quality code that can easily be exploited. AI assistants such as GitHub's Copilot have made some experts nervous, worried that AI-created code will contain vulnerabilities that experienced coders would catch, and that reliance on these tools will give malware an easier time.

Not only that, but malware creation may be easier. Check Point researchers believe that chatbots could be invaluable tools for automated malware creation, enabling attackers to create malicious software faster than ever before.

According to a report by Recorded Future, ChatGPT has limited programming and technical abilities right now, but these will only get better with time. Like everything in security, this is an arms race—there is a pressing need for automated, potentially AI-driven, security tools and processes that can quickly identify, respond to, and defend against AI-generated malware attacks.
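To make the idea of automated defence concrete, here is a minimal Python sketch that checks files against a blocklist of known-bad SHA-256 hashes. The directory path and the blocklist entry are hypothetical placeholders; a real deployment would pull hashes from a threat-intelligence feed and combine this with behavioural detection.

    import hashlib
    from pathlib import Path

    # Hypothetical blocklist; real entries would come from a
    # threat-intelligence feed, not a hard-coded set.
    KNOWN_BAD_HASHES = {
        "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def scan_directory(root: str) -> list[Path]:
        """Return files whose SHA-256 digest matches a known-bad hash."""
        flagged = []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                flagged.append(path)
        return flagged

    if __name__ == "__main__":
        for suspect in scan_directory("./downloads"):  # example path
            print(f"ALERT: known-bad file detected: {suspect}")

Hash matching only catches malware that has been seen before, which is exactly why the report's call for faster, AI-assisted detection matters.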

Misleading information and exposing sensitive data

The problem with chatbots and other AIs is that they do not truly understand what they are saying. They have no capability to verify whether the information they collect is accurate. Therefore, it is important to be vigilant when consuming the content AI generates and to seek out verified sources for more reliable information.

The algorithm is trained on a large corpus of information and is capable of generating any type of text, including responses to questions. But it does not have the ability to distinguish between true and false statements. Just as we should treat the (incredibly useful) free encyclopaedia, Wikipedia, with a healthy degree of scepticism, we should do the same here.

And since ChatGPT has trouble with context, it could potentially surface confidential information that was never intended to be shared, compromising sensitive data.

Counteracting these risks

Security departments are now in a pickle. Employers and employees must be aware of the potential risks of AI chatbots and must ensure that the use of the technology is adequately regulated and monitored – and that its use to attack businesses is mitigated.

Network detection

To ensure an organization's security, a real-time network monitoring system should be implemented to detect and act on any malicious activity as soon as it occurs. Several key measures work together here: two-factor authentication provides an extra layer of security, keeping software patched ensures that known vulnerabilities are removed, antivirus software helps to detect and block malware, and monitoring network traffic helps to detect malicious activity as it happens.
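As an illustration of the monitoring piece, here is a minimal Python sketch that follows a log file in real time and raises an alert after repeated failed logins from the same address. The log path, log format, and threshold are assumptions made for the example, not a production monitoring design.

    import re
    import time
    from collections import Counter

    LOG_PATH = "/var/log/auth.log"  # assumed SSH auth log location
    THRESHOLD = 5                   # failed attempts before alerting

    FAILED_LOGIN = re.compile(
        r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)"
    )

    def follow(path):
        """Yield new lines appended to a file, like `tail -f`."""
        with open(path) as handle:
            handle.seek(0, 2)  # jump to the end of the file
            while True:
                line = handle.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line

    failures = Counter()
    for line in follow(LOG_PATH):
        match = FAILED_LOGIN.search(line)
        if match:
            ip = match.group(1)
            failures[ip] += 1
            if failures[ip] >= THRESHOLD:
                print(f"ALERT: {failures[ip]} failed logins from {ip}")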

Battle of AIs

Plot twist: it's possible to fight AI with AI. To stay ahead of the curve, it's important to invest in a strong security system that uses the power of AI to identify and mitigate AI threats. New AI tools can also identify patterns in data that a human may not easily recognise, giving them an edge in uncovering potential chatbot-based cyber threats.
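For a flavour of how such pattern-spotting can work, here is a small sketch using scikit-learn's IsolationForest to flag anomalous network sessions. The traffic features and the simulated data are assumptions chosen purely for illustration, not a recommendation of any particular product.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Simulated "normal" traffic:
    # [bytes_sent_kb, requests_per_min, distinct_destinations]
    normal = rng.normal(loc=[200, 30, 5], scale=[50, 8, 2], size=(500, 3))

    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(normal)

    # New observations, including one resembling bulk data exfiltration.
    sessions = np.array([
        [210, 28, 4],      # ordinary session
        [5000, 300, 90],   # suspicious: huge volume, many destinations
    ])
    for session, label in zip(sessions, model.predict(sessions)):
        status = "ANOMALY" if label == -1 else "ok"
        print(status, session)

The model never needs labelled examples of attacks; it simply learns what normal looks like and flags deviations, which is what makes this approach attractive against novel, AI-generated threats.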

Education and upskilling

Knowing how to spot the tell-tale signs of a malicious email or link, such as a strange origin address, suspicious language, or unusual instructions, can help mitigate the risk of employees falling victim to a chatbot-based phishing expedition. CTOs should invest in educating staff to improve their ability to spot these threats. Senior staff should be educated too: they are often impersonated in these attacks, so they need to be approachable when someone wants to confirm instructions that seem unusual, and they are also more frequently targeted themselves.
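To make those tell-tale signs concrete, here is a small Python sketch that scores a message against a few of the heuristics mentioned above. The keyword list and checks are illustrative assumptions; real email security gateways use far richer signals.

    import re

    URGENT_PHRASES = ["urgent", "wire transfer", "gift cards", "act now",
                      "verify your account"]

    def red_flags(sender: str, claimed_org: str, body: str) -> list[str]:
        """Return a list of suspicious traits found in a message."""
        flags = []
        domain = sender.rsplit("@", 1)[-1].lower()
        if claimed_org.lower() not in domain:
            flags.append(f"sender domain '{domain}' does not match "
                         f"claimed organisation '{claimed_org}'")
        lowered = body.lower()
        for phrase in URGENT_PHRASES:
            if phrase in lowered:
                flags.append(f"pressure language: '{phrase}'")
        if re.search(r"https?://\d+\.\d+\.\d+\.\d+", body):
            flags.append("link points at a raw IP address")
        return flags

    # Hypothetical example message impersonating an executive.
    print(red_flags(
        sender="ceo@examp1e-corp.xyz",
        claimed_org="example",
        body="URGENT: wire transfer needed today, act now.",
    ))

No such checklist is foolproof, especially against well-written AI-generated text, which is why training should emphasise verifying unusual requests through a second channel rather than relying on spotting bad grammar.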

Looking ahead

It will take some time before we find out exactly how useful and how much of a threat the new wave of AI chatbots will be. The lesson is that AI is rapidly evolving, and security professionals must keep up. But it is also new for cyber criminals, who will take time to make the most of it.

Every organization will need to understand the technology's potential and limits in order to make the most of it as an opportunity and to take the necessary measures against the security risks it poses.
