Terena Bell
Contributing writer

Why AI-based threat detection hasn’t taken over the market … yet

Feature
16 Jan 2019 | 6 mins

Vendors, analysts and buyers differ on why threat detection AI isn't more widely adopted, but they agree that it needs to provide better insight into its inner workings.


According to Nicole Eagan, CEO of software company Darktrace, only two out of every ten cybersecurity experts typically embrace artificial intelligence (AI) as a key component of threat detection. The others, she explains, tend to be “totally resistant” or agree to “give [AI] a try” but don’t put in the effort required to make the most of the tech post-purchase.

Granted, information security professionals are known to be risk-averse, which can make them reluctant to try out new tech, and for good reason: Protecting the company against risk is the number one job. Yet, in theory, AI has the potential to identify more problems, faster. So why doesn’t every security team use it?

Mike Small, senior analyst at research firm KuppingerCole, believes many actually do; they just might not think of it as AI. Darktrace and competitors like Senseon and SecBI perform threat detection at a higher level than traditional antivirus software. But, Small says, “What they are doing is not, in a sense, completely unique.” At its core, he explains, threat detection AI is a heightened form of behavioral analytics: it looks for patterns in order to identify possible threats and vulnerabilities. The big cybersecurity platforms like Symantec and McAfee already have this general type of technology rolled in.
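
In rough terms, that behavioral approach boils down to learning what normal looks like for each entity and scoring deviations from it. Here is a minimal sketch in Python, with hypothetical hosts and hard-coded event counts standing in for real telemetry, and a z-score cutoff chosen arbitrarily:

```python
from statistics import mean, stdev

# Hypothetical per-host event counts for past hours; in a real deployment
# these would come from logs or network telemetry, not a hard-coded dict.
history = {
    "host-a": [12, 15, 11, 14, 13, 12, 16, 14],
    "host-b": [3, 4, 2, 5, 3, 4, 3, 4],
}
current = {"host-a": 15, "host-b": 41}  # host-b suddenly spikes

def flag_anomalies(history, current, threshold=3.0):
    """Flag hosts whose current activity deviates from their own baseline."""
    alerts = []
    for host, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        z = (current[host] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            alerts.append((host, round(z, 1)))
    return alerts

print(flag_anomalies(history, current))  # [('host-b', 40.5)]
```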

“[Teams] are buying things for the outcome, rather than the technology,” Small explains, and the outcome many get from behavioral matching embedded in larger tools works just fine: Last year, Forbes reported that McAfee makes more than $2.5 billion a year. In comparison, Darktrace’s sales ran upwards of $400 million.

While it isn’t realistic to expect a specialized industry tool to sell as much as a household-name platform, the numbers show that when it comes to threat detection, standalone AI hasn’t taken over the market yet. So what are the real roadblocks? It depends on who you ask. Vendor Eagan, analyst Small, and buyer Eric Gauthier give three completely different answers.

Vendor view: Resistance to change slows threat detection AI adoption

On a December 5 panel at New York conference AI Summit, Eagan pointed to that resistance to change: “The industry of cybersecurity has been around for, let’s say, roughly 30 years” and, as a result, has its share of “very savvy practitioners” used to working with “certain tools and certain methodologies and processes,” she explained.

However, Eagan says, “We found it had less to do with age” than with “open mindedness and curiosity for those who engage.” In other words, those who want to push boundaries and try out new tech will.

Analyst view: Threat detection AI hard to explain

Small says it’s not that easy, claiming the standalone threat detection AI on today’s market simply doesn’t solve today’s information security problems. Take security theater, for example. “[AI is] like a black box,” Small says, and when it comes to whether flagged issues are genuine or false alarms, “all you get out of it is it either gives you the right answer or it gives you the wrong answer. And if it gives you the wrong answer, then you don’t know why.”

Without this why, he continues, cybersecurity professionals have no way to explain their decisions to the press should a breach occur. And in an age of increasing data security litigation, the same limitation makes it harder for security to defend its decisions in a courtroom. “That,” according to Small, “is the limiting factor.”
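
The gap Small describes is easy to picture in code. In this toy sketch (feature names, weights, and the cutoff are all invented), the first scorer returns only a verdict, while the second also reports which signals drove the score, the kind of “why” a team could repeat to the press or a court:

```python
# A toy illustration of the "black box" complaint, with invented feature
# names, weights, and cutoff. The first scorer returns only a verdict;
# the second also reports which signals drove the score.
WEIGHTS = {"failed_logins": 0.5, "bytes_out_mb": 0.02, "new_destinations": 0.3}

def opaque_verdict(event, cutoff=5.0):
    score = sum(WEIGHTS[k] * event[k] for k in WEIGHTS)
    return score > cutoff  # True or False, with no way to ask why

def explained_verdict(event, cutoff=5.0):
    contributions = {k: WEIGHTS[k] * event[k] for k in WEIGHTS}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return score > cutoff, [(k, round(v, 1)) for k, v in ranked]

event = {"failed_logins": 9, "bytes_out_mb": 220, "new_destinations": 4}
print(opaque_verdict(event))     # True -- but why?
print(explained_verdict(event))  # (True, [('failed_logins', 4.5), ('bytes_out_mb', 4.4), ('new_destinations', 1.2)])
```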

Buyer view: Can’t react to extra data from threat detection AI

Gauthier, director of technology and information security officer for HR company Scout, is less worried about publicity and lawsuits. He hasn’t bought the tech because Scout doesn’t have enough manpower to make the most of it. “We’re a smaller company,” he explains. “In the case of some of these quote unquote AI-driven threat protection platforms — or just a lot of the threat intelligence feeds which seem to be popular now — they’re giving you more information, but it’s sort of a second tier of information.” Meanwhile, Gauthier continues, “We barely have the staff to handle those primary threats, which are very actionable and very real.”

What this comes down to, Gauthier says, is cost-benefit. He acknowledges that while threat detection AI does offer more than catch-all platforms, “If I’m going to pay for this extra data, if I can’t take action from it, I’m really not getting value from it.” Monitoring the AI’s output would take a larger workforce than Scout has, making the purchasing decision less about the tech and more about the ability to act on it. “You need to be at a certain scale to afford it,” he says.

This, Eagan agrees, is a problem Darktrace saw coming before the tool was even built, and one her company worked to prevent: “When we founded Darktrace,” she explained, “we said, ‘We’re going to limit our market if we can only sell to people who are going to hire data scientists and figure this stuff out.’” As a result, they told developers, “We need to make it self-learning, self-maintaining, so our customers don’t have to hire any more people. In fact, they want to hire less security people with this — not more.”

Threat detection AI must provide insight, not just information

For that to happen — with Darktrace or any other vendor — Small and Gauthier both say the technology must provide insight. Right now, Gauthier says AI would “give me information but then I’ve got to go figure out what it means.” As long as threat detection limits itself to its current behavioral approach, Small says, it never will.

Instead, Small recommends, AI vendors should ask how to get the tech to the next level: explaining why a threat is likely genuine, a problem he contends only IBM QRadar Advisor is working on right now. “That is a much more complicated thing,” he says: “being able to analyze after the event what all the indicators were, what had happened, whether or not there is still something going on, and has it happened to somebody else and where is the problem.” AI this insightful would be harder to build, he admits, but much easier to sell.
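
The after-the-event analysis Small describes could be framed as a query over collected event data: rebuild the timeline around the alert, extract its indicators, and check whether they appear on other machines. A minimal illustration, with invented events and field names:

```python
from datetime import datetime, timedelta

# A rough sketch of the post-event analysis Small calls for: rebuild the
# timeline around an alert, extract its indicators, and check whether the
# same indicators appear on other machines. Events and fields are invented.
events = [
    {"ts": datetime(2019, 1, 10, 9, 2),  "host": "host-b", "indicator": "dns:evil.example"},
    {"ts": datetime(2019, 1, 10, 9, 5),  "host": "host-b", "indicator": "exe:dropper.bin"},
    {"ts": datetime(2019, 1, 10, 9, 40), "host": "host-c", "indicator": "dns:evil.example"},
]

def investigate(alert_host, alert_ts, window=timedelta(hours=1)):
    timeline = [e for e in events
                if e["host"] == alert_host and abs(e["ts"] - alert_ts) <= window]
    indicators = {e["indicator"] for e in timeline}
    also_seen_on = sorted({e["host"] for e in events
                           if e["indicator"] in indicators and e["host"] != alert_host})
    return {"timeline": timeline, "also_seen_on": also_seen_on}

report = investigate("host-b", datetime(2019, 1, 10, 9, 0))
print(report["also_seen_on"])  # ['host-c'] -- the same indicator has spread
```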
