May 12, 2016
There is an ongoing war between hackers and information security systems. It usually plays out far from the public eye and happens constantly on servers all over the world. Millions of events take place on computers that could potentially signal malicious activity, and it is the job of carefully designed software to distinguish the good from the bad. For defenders, missing a single malicious event could mean hundreds of millions of dollars in damage to a company.
A recent study from the Massachusetts Institute of Technology (MIT) suggests that the good guys in that fight may have found a significant advantage. The solution is an "analyst-in-the-loop" security model, which incorporates human judgment and intuition into the threat-identification and machine-learning process.
While this is important news, it is an approach that Chris Rothe, CTO and co-founder of Red Canary, an information security firm based in Denver, has been using for several years.
“It is encouraging to see this research, even though it is nothing groundbreaking,” says Rothe. “We have been doing this for almost three years now, and our technology has analyzed billions of events and our analysts have reviewed millions of potential threats. We have fed all of that back into our technology and this continues to make our solution more accurate and efficient.”
The study reaches some remarkable conclusions. For example, when experts are paired with modern machine learning software, the threat detection rate improves to nearly 3.5 times the industry standard, while false positives are reduced by 80%. Combining an expert with a machine learning security solution is valuable for a number of reasons.
For starters, an expert can analyze an event that the software flags as malicious and determine that it is innocuous. In many cases, making that determination requires intuition and perspective, something a computer cannot gain on its own. Having an expert in the loop therefore reduces the number of false positives generated.
Similarly, if the software misses the initial installation of a piece of malware, it may still detect the moment it is exploited. An expert can then trace the attack back to its source and teach the software to detect the problem before further installations occur. This collaborative process means that on any given day, the security system in place is the most up-to-date, most intelligent, and most responsive program available, which is exactly what Rothe is doing with his product:
“We are continually refining our detection technology,” says Rothe. “We adjust or remove criteria that lead to false positives. And we retroactively analyze threats we detect to determine if additional detection criteria would have expedited the detection and confirmation process.”
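The feedback cycle Rothe describes can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about Red Canary's or MIT's actual systems: the event fields, rule names, and refinement policy (drop any rule that produced an analyst-confirmed false positive) are all hypothetical simplifications of the "analysts review flagged events, and their verdicts prune noisy detection criteria" loop.

```python
# Hypothetical analyst-in-the-loop feedback cycle.
# Events, rules, and the refinement policy are illustrative only.

def detect(events, rules):
    """Flag every event matched by at least one active detection rule."""
    return [e for e in events if any(rule(e) for rule in rules.values())]

def refine(rules, events, labels):
    """Drop rules whose matches include analyst-confirmed false positives."""
    return {
        name: rule
        for name, rule in rules.items()
        if not any(labels.get(e["id"]) == "benign" for e in events if rule(e))
    }

events = [
    {"id": 1, "proc": "powershell", "net": True},   # actual threat
    {"id": 2, "proc": "backup.exe", "net": True},   # benign scheduled job
    {"id": 3, "proc": "notepad",    "net": False},
]

rules = {
    "shell_with_network": lambda e: e["proc"] == "powershell" and e["net"],
    "any_network":        lambda e: e["net"],  # too broad: flags backups
}

flagged = detect(events, rules)       # flags events 1 and 2
labels = {1: "threat", 2: "benign"}   # analyst verdicts on the flagged events
rules = refine(rules, events, labels) # "any_network" is removed
```

After the refinement pass, re-running `detect` flags only the confirmed threat: the analyst's verdict has permanently removed the criterion that generated the false positive, which is the essence of the loop.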
A 2015 study by Hewlett Packard and the Ponemon Institute estimated that the average U.S. firm loses $15 million to cyber crime every year. Because this is an average, some companies are barely affected while others suffer enormous damage.
Unfortunately, vulnerabilities are only increasing. As more and more devices become connected to the internet, hackers are given more and more ways to breach security systems. This is particularly worrying, given that threats from hackers are also increasing – ransomware is a very good example of that.
The move to incorporate human experts into security solutions makes sense on a very basic level. Hackers are people, and they think and act like people. They also have the advantage of playing offense. Computer software will always be a step behind, since even the best machine learning software available needs to see malicious behavior at least once before it recognizes it. Humans can use judgment to make that assessment the first time, a point Rothe agrees with:
“Everyone has been chasing ‘silver bullet’ security products for the last decade and research like this should help people wake up and realize that technology cannot replace human expertise and intuition. Technology can significantly enhance a human’s capabilities, and vice versa, but we are not at the point where we can trust AI and machine learning to accurately detect threats.”