Artificial intelligence (AI) in cybersecurity was a popular topic at RSA’s virtual conference this year, with good reason. Many tools rely on AI, using it for incident response, spam and phishing detection, and threat hunting. However, while AI security gets the session titles, a closer look makes clear that machine learning (ML) is what actually does the work. The reason is simple. ML allows for “high-value predictions that can guide better decisions and smart actions in real-time without humans stepping in.”

Yet, for all ML can do to improve intelligence and help AI security do more, it has its flaws. ML, and by extension AI, is only as smart as the people who teach it. If a model is trained on the wrong data or algorithms, it could end up weakening your defenses. Also, threat actors have the same access to AI and ML tools as defenders do. We are starting to see how attackers use ML to launch attacks, as well as how ML itself can serve as an attack vector. Take a look at the benefits and dangers the experts discussed at RSA.

What Machine Learning Cybersecurity Gets Right

When provided the right data set, ML is good at seeing the big picture of the digital landscape you’re trying to defend. That’s according to Jess Garcia, technical lead with One eSecurity, who presented the RSA session ‘Me, My Adversary & AI: Investigating and Hunting with Machine Learning.’

Among the areas where ML is most useful for security are prediction, noise filtering and anomaly detection. “A malicious event tends to be an anomaly,” Garcia says. Defenders can use ML designed to detect anomalies for threat detection and threat hunting.
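
To make that concrete, here is a minimal anomaly-detection sketch in Python. The features, values and choice of an isolation forest are illustrative assumptions, not anything Garcia prescribed; a real deployment would engineer features from actual event logs.

```python
# Minimal anomaly-detection sketch for threat hunting (illustrative only).
# Assumes security events have already been converted to numeric features,
# e.g. kilobytes transferred and login hour per session.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" activity: moderate transfer sizes, business-hours logins.
normal = np.column_stack([
    rng.normal(500, 100, 1000),   # KB transferred
    rng.normal(13, 2, 1000),      # login hour
])

# A handful of suspicious sessions: huge transfers in the middle of the night.
suspicious = np.array([[9000, 3], [8500, 2], [9500, 4]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# -1 flags an anomaly, 1 flags normal behavior.
print(model.predict(suspicious))   # expect [-1 -1 -1]
print(model.predict(normal[:5]))   # mostly [1 1 1 1 1]
```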

The size of the dataset matters when training ML for AI security. As Younghoo Lee, Senior Data Scientist with Sophos, pointed out in the session ‘AI vs AI: Creating Novel Spam and Catching it with Text Generating AI,’ more training data yields better results, and pre-trained language models improve performance on downstream tasks. Lee’s session focused on spam creation and protection, but the advice applies across ML systems used for cybersecurity.
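
As a rough illustration of the training-data point, the sketch below trains a toy spam classifier on progressively larger slices of a synthetic corpus and reports held-out accuracy. The vocabulary, message generator and model choice are all assumptions for demonstration; Lee’s session worked with text-generating language models, not this simplified pipeline.

```python
# Illustrative learning-curve sketch: a spam classifier's accuracy as the
# training set grows. Messages are synthetic stand-ins; a production system
# would also start from a pre-trained language model, per Lee's point.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
spam_vocab = ["free", "winner", "prize", "claim", "urgent", "cash",
              "offer", "click", "now", "account", "today"]
ham_vocab = ["meeting", "report", "schedule", "review", "project",
             "notes", "team", "update", "now", "account", "today"]

def make_messages(vocab, n):
    # Each message is 8 words drawn from an (overlapping) vocabulary.
    return [" ".join(rng.choice(vocab, 8)) for _ in range(n)]

texts = make_messages(spam_vocab, 300) + make_messages(ham_vocab, 300)
labels = [1] * 300 + [0] * 300

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, stratify=labels, random_state=0)

# Accuracy generally climbs as the training set grows.
for n in (20, 100, len(X_train)):
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X_train[:n], y_train[:n])
    print(f"{n:3d} training messages -> accuracy {clf.score(X_test, y_test):.2f}")
```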

When Attackers Target ML and AI Security

In the session ‘Evasion, Poisoning, Extraction, and Inference: The Tools to Defend and Evaluate,’ presenters Beat Buesser, research staff member with IBM Research, and Abigail Goldsteen, research staff member with IBM, shared four different adversarial threats against ML; a code sketch of the first follows the list. Attackers can use:

  • Evasion: Modify an input to influence a model
  • Poisoning: Add a backdoor to training data
  • Extraction: Steal a proprietary model
  • Inference: Learn about private data
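
As a concrete taste of the first category, here is a minimal evasion sketch: a single fast-gradient-sign step nudges a synthetic “malicious” sample across a toy logistic-regression detector’s boundary. The data, model and step size are illustrative assumptions, not material from the session; real attacks iterate and optimize, and toolkits such as IBM’s open-source Adversarial Robustness Toolbox package attacks across all four categories.

```python
# Minimal evasion sketch (illustrative): a fast-gradient-sign perturbation
# flips a logistic-regression "malware detector" on synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Two synthetic numeric features; benign (0) and malicious (1) clusters.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

# Pick a flagged malicious sample that sits closest to the decision boundary.
malicious = X[200:][clf.predict(X[200:]) == 1]
probs = clf.predict_proba(malicious)[:, 1]
x = malicious[np.argmin(probs)][None, :]

# Gradient of the log-loss w.r.t. the input for true label 1, then one
# fast-gradient-sign step; a modest step near the boundary is enough.
w = clf.coef_[0]
p = clf.predict_proba(x)[0, 1]
grad = (p - 1.0) * w
x_adv = x + 1.0 * np.sign(grad)

print("before:", clf.predict(x))      # [1] -> flagged
print("after: ", clf.predict(x_adv))  # [0] -> evades the detector
```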

“We’re seeing an increasing number of these real-world threats,” says Buesser. Threat actors use techniques that distort what the ML knows, and some of these attacks can have life-or-death consequences. In one example, attackers placed stickers on a highway, tricking a self-driving vehicle into swerving into oncoming traffic. In another, attackers tamper with a vulnerable ML system so that its security filters let more phishing emails through.
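
That phishing scenario maps to the poisoning category above. The sketch below shows the idea on a toy Naive Bayes filter: phishing-style messages mislabeled as legitimate are slipped into the training set, and mail matching that pattern is later delivered. The corpus, counts and model are illustrative assumptions, not details from the session.

```python
# Illustrative data-poisoning sketch: mislabeled phishing messages injected
# into training data teach the filter to let similar mail through.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

phish = ["verify your account immediately",
         "your password expires, click here",
         "unusual sign-in detected, confirm now"] * 10
ham = ["meeting moved to noon",
       "status report attached",
       "can you review this doc"] * 10
texts, labels = phish + ham, [1] * len(phish) + [0] * len(ham)

# The mail the attacker ultimately wants delivered.
target = "confirm your account now, unusual sign-in detected"

clean = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
print("clean filter:   ", clean.predict([target]))    # [1] -> blocked

# Poison: the same wording, repeatedly labeled 0 ("legitimate").
poison = [target] * 30
tainted = make_pipeline(CountVectorizer(), MultinomialNB()).fit(
    texts + poison, labels + [0] * len(poison))
print("poisoned filter:", tainted.predict([target]))  # [0] -> delivered
```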

Balancing the Pros and Cons

ML systems designed to augment AI security have become a benefit to security teams. More automation means less burnout and more accurate threat detection and remediation. However, because threat actors see ML as an attack vector, security teams should also map where ML and AI exist across the company or agency, beyond their own tools. Once familiar with those functions, they can learn where potential problems linger and how those weaknesses could become springboards for an attack.

ML and AI security have the potential to change detection and prevention models for the better. But you still need the human touch to ensure ML isn’t causing security problems instead of solving them.
