Putting AI to Practical Use in Cybersecurity

The shortcomings of artificial intelligence tools in the cybersecurity world have drawn a lot of attention. But does the bad press mean that AI isn’t working? Or is AI just getting slammed for failing to meet overinflated expectations?

It’s time to take a hard look at what AI is accomplishing before kicking it to the curb.

Where Cyber AI Is Winning

There’s never been a superhero who hasn’t gone to the dark side or fallen off their pedestal. AI is no different. But if you know where AI performs well, you’ll have a better idea of how to test vendor AI claims.

“Machine learning/AI technologies have been influencing information security for a long time. Spam detection or preventing fraudulent transactions are just two of many examples of successful AI applications in security today,” says Alexandra Murzina, a machine learning engineer and data scientist at cybersecurity firm Positive Technologies.
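As a concrete illustration of the spam-detection example Murzina mentions, here is a minimal sketch of the kind of text classifier such systems build on. The messages, labels, and model choice are toy assumptions for illustration; production filters train on very large labeled corpora and far richer features.

```python
# Minimal spam-classifier sketch: TF-IDF features + naive Bayes.
# Toy data for illustration only; real deployments use large labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice for last month is attached",
    "Meeting moved to 3pm, see updated agenda",
    "You have WON a free prize, click here now!!!",
    "Urgent: verify your account password immediately",
]
labels = ["ham", "ham", "spam", "spam"]

# TF-IDF turns each message into a weighted word-frequency vector;
# naive Bayes learns which terms are associated with spam vs. ham.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Click here to claim your free prize"]))    # expected: 'spam'
print(model.predict(["Agenda attached for tomorrow's meeting"]))  # expected: 'ham'
```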

The seasoned security pros we interviewed for this story praised AI for its successes in tasks such as the following (the list is far from exhaustive):

Backend event processing. AI is performing well here but hasn’t yet been loosed to take care of business on its own. “AI is performing well in backend processing of security events, allowing for automation and speed of use-case development. However, the linkage between the analytics capability and immediate action controlled solely by AI hasn’t matured enough for wide adoption across industries,” says Doug Saylors, partner and cybersecurity co-lead with global technology research and advisory firm ISG.
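As a rough sketch of what that backend automation can look like, the snippet below scores incoming security events and decides which ones to escalate. The fields, thresholds, and scoring rules are hypothetical stand-ins for the analytics a real pipeline would run.

```python
# Hypothetical backend event-triage sketch: enrich, score, and route events.
# Field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    source: str             # e.g. "edr", "firewall", "auth"
    severity: int           # 1 (low) .. 10 (critical) as reported by the sensor
    asset_criticality: int  # 1 .. 5 from the asset inventory
    anomaly_score: float    # 0.0 .. 1.0 from an upstream ML model

def triage(event: SecurityEvent) -> str:
    # Combine sensor severity, asset value, and model output into one score.
    score = event.severity * event.asset_criticality * (0.5 + event.anomaly_score)
    if score >= 30:
        return "escalate_to_analyst"
    if score >= 10:
        return "enrich_and_queue"
    return "log_only"

print(triage(SecurityEvent("edr", severity=8, asset_criticality=5, anomaly_score=0.9)))   # escalate_to_analyst
print(triage(SecurityEvent("auth", severity=2, asset_criticality=1, anomaly_score=0.1)))  # log_only
```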

Super-secret, in-your-face invisible stuff. “AI is playing an integral role in cybersecurity, but that role may be a bit more understated or even invisible than the hype around AI might suggest,” says Fred Cate, professor of law and adjunct professor of informatics and computing at Indiana University.

Cate advises you look around to spot where AI is operating well but quietly, such as biometrics on mobile phones, catching fraudulent charges on a credit card or fraudulent network log-in attempts, or blocking phishing messages on an email service.

Detecting novel malicious code. “An example metric we have is that file-based classifiers built 34 months ago and without any updates are on average able to detect most high-profile malware samples that emerge today,” says Travis Rosiek, chief technology and strategy officer for BluVector, a Comcast-owned cyberthreat detection company.

“Imagine what else security teams could do with less emphasis on pushing and validating malware signature updates on a regular basis across a complex enterprise,” Rosiek adds.
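The sketch below shows the basic idea behind a file-based classifier in general: train a model on simple byte-level statistics rather than on signatures. The features, training data, and labels are synthetic toys, and this is not a description of BluVector's approach.

```python
# Toy file-based malware classifier: byte statistics instead of signatures.
# Synthetic data for illustration only; real classifiers use far richer features
# (imports, section entropy, strings, behavior) and large labeled corpora.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_features(data: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    probs = counts / max(len(data), 1)
    entropy = -np.sum(probs[probs > 0] * np.log2(probs[probs > 0]))
    # Normalized byte histogram plus overall entropy.
    return np.append(probs, entropy)

rng = np.random.default_rng(0)
# Pretend "benign" files are text-like and "malicious" files are packed/high entropy.
benign = [bytes(rng.integers(32, 127, 512, dtype=np.uint8)) for _ in range(50)]
malicious = [bytes(rng.integers(0, 256, 512, dtype=np.uint8)) for _ in range(50)]

X = np.array([byte_features(b) for b in benign + malicious])
y = np.array([0] * 50 + [1] * 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
sample = bytes(rng.integers(0, 256, 512, dtype=np.uint8))
print("malicious probability:", clf.predict_proba([byte_features(sample)])[0][1])
```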

Permission management. Permission management is an obstacle to business users and often a vulnerability. “AI shows its efficacy here through several vendor offerings. When a user attempts an action and is stymied, AI can reason just as a human permission manager might,” says Joel Fulton, CEO of Lucidum, an asset discovery and management platform provider.
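As a hedged illustration of the kind of reasoning Fulton describes (not Lucidum's actual product logic), the sketch below compares a blocked request against what the requester's peer group already does and recommends granting, denying, or escalating. All names, counts, and thresholds are hypothetical.

```python
# Hypothetical permission-decision sketch: compare a blocked request against
# what the requester's peer group already does routinely (e.g. from audit logs).
from collections import Counter

# Toy (role, action) usage frequencies for a peer group of 50 engineers.
peer_activity = Counter({
    ("engineer", "read:source_repo"): 480,
    ("engineer", "deploy:staging"): 120,
    ("engineer", "read:prod_db"): 3,
})

def recommend(role: str, action: str, peer_group_size: int = 50) -> str:
    # Average number of times a peer performs this action.
    rate = peer_activity[(role, action)] / peer_group_size
    if rate >= 2.0:       # peers do this routinely: safe to auto-grant
        return "grant"
    if rate >= 0.05:      # seen occasionally: ask a human permission manager
        return "escalate_to_reviewer"
    return "deny"         # essentially never seen for this role

print(recommend("engineer", "deploy:staging"))  # 'grant' (peers deploy routinely)
print(recommend("engineer", "read:prod_db"))    # 'escalate_to_reviewer' (rare action)
```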

Cyber asset attack surface management (CAASM). These systems identify, track, and monitor all the places in an organization where data is stored, processed, or transmitted. AI can catch and analyze attacks on the fly. This is crucial because “in modern environments, ephemeral cloud assets turn on and off in minutes, work-from-home devices are hidden from view, and data centers are full of dusty corners,” says Rosiek.
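A minimal sketch of the reconciliation step at the heart of CAASM: diff the assets a discovery scan just found against the inventory of record and flag anything unknown. The hostnames are invented, and a real system would pull from cloud APIs, EDR agents, and network scans.

```python
# Minimal CAASM-style reconciliation sketch: flag assets that show up in a
# discovery scan but are missing from the inventory of record, and vice versa.
# All hostnames below are invented for illustration.

inventory_of_record = {"web-01", "web-02", "db-01", "laptop-jsmith"}

def reconcile(discovered: set[str]) -> dict[str, set[str]]:
    return {
        "unknown_assets": discovered - inventory_of_record,  # need investigation
        "missing_assets": inventory_of_record - discovered,  # offline or decommissioned?
    }

# e.g. an ephemeral CI runner and an unmanaged device appear in the latest scan
scan_results = {"web-01", "web-02", "db-01", "ci-runner-7f3a", "iot-cam-unknown"}
print(reconcile(scan_results))
```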

Extended detection and response (XDR). AI is still evolving here, but it’s holding its own. “In what’s being called XDR, AI/ML is just another tool in the toolbox to find anomalies, methods of attack that aren’t caught by traditional defense-in-depth technologies,” says Patrick Orzechowski, vice president and distinguished engineer at managed cybersecurity vendor Deepwatch.
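To illustrate the "another tool in the toolbox" point, here is a small sketch of unsupervised anomaly detection over host telemetry using scikit-learn's IsolationForest. The feature columns and data are synthetic, and a real XDR pipeline would correlate signals across endpoints, identity, network, and cloud.

```python
# Toy anomaly detection over host telemetry with an isolation forest.
# Feature columns and data are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: [processes spawned/min, outbound connections/min, MB sent/min]
normal = rng.normal(loc=[20, 5, 1], scale=[5, 2, 0.5], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious_host = np.array([[150, 90, 400]])  # process storm plus heavy egress
typical_host = np.array([[22, 4, 1.2]])

print(model.predict(suspicious_host))  # -1 = flagged as an outlier
print(model.predict(typical_host))     #  1 = looks like baseline behavior
```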

Anything simple, repetitious, and done at huge scale. Only a fool would claim to be able to protect IoT threat surfaces with grit and a few ordinary tools. “In cybersecurity, this is best reflected in areas such as intrusion detection and network monitoring — it’s fairly safe for administrators to allow AI to discover activity that is an outlier and may be malicious in these cases. Even then, however, I would caution admins to implement manual, human review into their processes,” says Sean O’Brien, founder and lead researcher at Privacy Lab at Yale and CSO at privacy-focused chat company Panquake.
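Following O’Brien’s caution, the sketch below routes anomaly-flagged network events to a human review queue instead of acting on them automatically. The scores and thresholds are hypothetical and would come from a detector such as the one sketched above.

```python
# Hypothetical routing sketch: let the model flag outliers, but keep a human in
# the loop before anything is blocked. Scores and thresholds are invented.

def route(event_id: str, anomaly_score: float) -> str:
    """anomaly_score in [0, 1], where higher means more unusual."""
    if anomaly_score >= 0.95:
        # Even near-certain detections go to an analyst, not straight to a block rule.
        return f"{event_id}: open ticket, page on-call analyst"
    if anomaly_score >= 0.7:
        return f"{event_id}: add to analyst review queue"
    return f"{event_id}: record for baseline tuning"

print(route("netflow-123", 0.97))
print(route("netflow-456", 0.75))
print(route("netflow-789", 0.10))
```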

It’s All in the Implementation

In the final analysis, buyer beware applies to any cybersecurity product touting “AI inside.” But don’t shy away from AI — every cybersecurity team needs that kind of reach and scale to deal with an ever-expanding attack surface.

“So far AI hasn’t been as much of a game-changer as a game-enhancer. But I wouldn’t at all give up on the promise for a bigger impact in the future,” says Cate.

Just don’t expect AI to work without effort from you and your team.

Cyber AI is “very hard,” warns Aaron Sant-Miller, chief data scientist at consulting firm Booz Allen Hamilton, but it is key to building effective defenses.

“It’s very important for organizations to be patient with AI efforts as they identify the required steps to building viable, sustainable, and impactful AI capabilities. This will require additional work from cyber teams as both groups work together to identify use cases, refine how AI can be embedded into existing tools, and provide feedback to AI systems as they begin to make detections. Buy-in is critical and continuous participation is essential to creating impactful, operational cyber AI,” says Sant-Miller.
