MIT scientists have built an AI that can detect 85% of cyber attacks - but it still needs human help
The "AI2" algorithm, developed by MIT's Computer Science and Artificial Intelligence Lab (CSAIL) and machine learning startup PatternEx, can reportedly detect cyber attacks three times more effectively than today's current systems.
AI2 has been tested on 3.6 billion pieces of data, known as "log lines," generated over a three-month period by millions of users.
To predict attacks, AI2 scans the data and flags suspicious activity by clustering it into meaningful patterns using unsupervised machine learning, according to MIT.
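For readers who want a concrete picture, the snippet below is a minimal sketch of what clustering-based anomaly scoring can look like, not AI2's actual pipeline: the synthetic feature matrix, the use of scikit-learn's KMeans, and the distance-based "strangeness" score are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 8))   # stand-in for real log-derived features

# Group the events into a handful of "normal behaviour" patterns.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(features)

# Distance to the nearest cluster centre acts as a simple strangeness score:
# events that fit none of the learned patterns rank highest.
distances = kmeans.transform(features).min(axis=1)
strangest = np.argsort(distances)[::-1][:200]   # candidates for analyst review
```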
This activity is presented to human analysts, who then confirm which events are cyber attacks worth worrying about. That feedback is incorporated into AI2's models so the system gets better at analysing data in the future.
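As a hedged illustration of that feedback step, and again not AI2's real code, a supervised classifier could be retrained on the events analysts have labelled; the choice of RandomForestClassifier and the synthetic labels here are assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
reviewed = rng.normal(size=(200, 8))        # features of the events analysts inspected
labels = rng.integers(0, 2, size=200)       # 1 = confirmed attack, 0 = benign (synthetic)

# Retrain a supervised detector on the analyst-labelled events so that
# tomorrow's ranking reflects what humans actually flagged as attacks.
detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(reviewed, labels)

new_events = rng.normal(size=(1_000, 8))    # the next batch of log-derived features
attack_probability = detector.predict_proba(new_events)[:, 1]
```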
CSAIL research scientist Kalyan Veeramachaneni said in a statement that AI2 can be thought of as a virtual analyst. "It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly," he said.
"The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions," Veeramachaneni said. "That human-machine interaction creates a beautiful, cascading effect."
On its first day of "training," AI2 identifies the 200 strangest events and hands them to human analysts to review. MIT said that number falls to 30 or 40 within a matter of days as AI2 learns to spot more attacks on its own.
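The shrinking workload can be pictured with a toy loop like the one below; the day-one figure of 200 and the 30-to-40 target come from MIT's description, while the decay schedule and random scores are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
review_budget = 200                       # day-one workload reported by MIT

for day in range(1, 6):
    daily_scores = rng.random(1_000_000)  # stand-in for per-event attack scores
    to_review = np.argsort(daily_scores)[::-1][:review_budget]
    print(f"day {day}: analysts review {len(to_review)} events")
    # As labels accumulate and the supervised model improves, fewer
    # borderline events need a human eye (the 0.6 decay factor is arbitrary).
    review_budget = max(35, int(review_budget * 0.6))
```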
Veeramachaneni presented a paper about the system at last week's IEEE International Conference on Big Data Security in New York City.