Defense team sets up AI-based security for the Cyber Grand Challenge
The US Defense Advanced Research Projects Agency (DARPA) has assembled a small, liquid-cooled data center in just 29 hours ahead of the Cyber Grand Challenge (CGC).
Taking place on August 4, the CGC aims to test whether artificial intelligence security systems can find and patch security holes without human intervention.
A quick data center for quicker bots
The US military body, which commissions advanced research for the Department of Defense, tweeted: “Is that not the handsomest liquid-cooled data center you’ve ever seen?”
While much of DARPA’s work is directly applicable to military uses, the agency has also driven civilian advances such as computer networking and graphical user interfaces, its most notable achievement being its role in the creation of the Internet.
This data center was built for the CGC, which aims to spur innovations for both military and commercial use. The challenge will pit the creations of seven teams of cybersecurity experts against each other.
The seven supercomputers, each a rack of servers, will come loaded with an autonomous security bot from one of the teams. The bots will then be given software from DARPA that none of them has seen before, and each will have to patch the holes in its own system while exploiting the holes in the others’, all without any human help.
DARPA Grand Challenge commentator Visi told Wired: “I’m actually really looking forward to the fact that we expect many emergent properties that maybe even the teams didn’t expect to have occur, where their system is reasoning about a patch and can decide to do it better than we’ve ever done it before.”
Autonomous security bots are an area of intense research: the promise of AI systems that can defend (and potentially attack) networks holds obvious appeal for technology firms in an increasingly connected world.
Google has already turned to machine learning to help protect Android, while Baidu has used deep neural networks to spot malware. Startups such as Deep Instinct and Cylance likewise use neural networks to try to detect malware.
But all of these systems are still at a very early stage, and none can yet compete with a talented individual hacker, let alone a large team such as those engaged in nation-state cyber warfare.