AI in Cybersecurity: Between Breakthrough and Blind Spots

By Radmila Blazheska | Industry Feature

Artificial Intelligence has worked its way into the core language of cybersecurity. It's used to sell platforms, justify budgets, and reshape the structure of security operations centres (SOCs). For executives, it's the promise of smarter defense. For vendors, it’s a product differentiator. But for IT and cyber professionals, those tasked with actually deploying, maintaining, and securing AI-enabled systems, the hype can obscure the hard questions.

The potential is real. AI can improve threat detection, speed up response, and bring clarity to overwhelming volumes of security data. But as with any powerful tool, success lies in understanding its limits, and what’s required to use it responsibly and effectively.

How AI Actually Works in Security Contexts

In practical terms, most AI in cybersecurity boils down to one of a few common functions:

  • Anomaly Detection: Identifying events or behavior that deviate from established norms, such as unexpected logins, unusual file access patterns, or erratic network activity.
  • Classification: Sorting data such as emails, binaries, or traffic flows into buckets like "benign" or "malicious" using trained models.
  • Prediction: Estimating the likelihood of events like phishing attacks or privilege escalation based on historic indicators.
  • Pattern Recognition at Scale: AI can process logs, telemetry, and alerts faster than any human team, connecting disparate signals across environments.

These capabilities support more advanced tools such as AI-enhanced SIEMs, SOAR platforms, and behavioral analytics systems. Some use supervised machine learning (trained on labelled data); others rely on unsupervised models that learn behavior profiles over time. A growing number of systems integrate Natural Language Processing (NLP) to digest threat intelligence reports, incident narratives, or even dark web chatter.
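
To make the unsupervised case concrete, here is a minimal sketch of anomaly detection over login telemetry using scikit-learn’s IsolationForest. The feature set, values, and thresholds are illustrative assumptions, not a reference to any particular product.

```python
# Minimal sketch: unsupervised anomaly detection over login telemetry.
# Features, values, and the contamination rate are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: hour of day, bytes transferred,
# distinct hosts contacted, failed-auth attempts in the previous hour.
baseline = np.array([
    [9, 120_000, 3, 0],
    [10, 95_000, 2, 1],
    [14, 110_000, 4, 0],
    [16, 130_000, 3, 0],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)  # learn a profile of "normal" behaviour from history

new_events = np.array([
    [11, 105_000, 3, 0],     # close to the baseline profile
    [3, 9_500_000, 40, 12],  # 3 a.m., huge transfer, many hosts, many failures
])
labels = model.predict(new_events)  # -1 = anomaly, 1 = inlier
for event, label in zip(new_events, labels):
    if label == -1:
        print("Flag for triage:", event)
```

In production the baseline would be thousands of events per user or host, refreshed on a schedule, but the shape of the workflow is the same: fit on known-good history, score new activity, route anomalies to a human.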

When tuned well and fed with quality data, these tools can surface threats that traditional rule-based systems might miss. But they also introduce layers of complexity that aren’t always visible from the surface.

The Hard Part: Implementation in Real-World Environments

Deploying AI successfully isn’t just about enabling a module or installing a new platform. It’s about building an environment where the models have something useful to work with, and where their output leads to trusted, actionable outcomes.

One major roadblock is data quality and consistency. AI models don’t magically fix bad data—they amplify it. In cybersecurity environments, data is often siloed across EDRs, firewalls, application logs, and network tools. Formats differ. Labelling is spotty. And privacy concerns may prevent sensitive data from being used at all.
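
One practical response to the format problem is to normalise every source into a common schema before anything reaches a model, so gaps and malformed fields are caught at the edge. The sketch below is illustrative only; the EDR and firewall field names are assumptions, not any vendor’s actual schema.

```python
# Minimal sketch: normalising events from different tools into one schema
# before they reach a model. All field names here are hypothetical.
from datetime import datetime, timezone

COMMON_FIELDS = ("timestamp", "source", "user", "action", "outcome")

def from_edr(raw: dict) -> dict:
    """Map a hypothetical EDR record onto the common schema."""
    return {
        "timestamp": datetime.fromtimestamp(raw["event_time"], tz=timezone.utc).isoformat(),
        "source": "edr",
        "user": raw.get("user_name", "unknown"),
        "action": raw["process_action"],
        "outcome": raw.get("verdict", "unknown"),
    }

def from_firewall(raw: dict) -> dict:
    """Map a hypothetical, already-parsed firewall log line onto the schema."""
    return {
        "timestamp": raw["ts"],          # assume ISO 8601 already
        "source": "firewall",
        "user": raw.get("src_user", "unknown"),
        "action": f"{raw['proto']}:{raw['dst_port']}",
        "outcome": raw["decision"],      # e.g. "allow" / "deny"
    }

# Every record that reaches the model now has the same shape, so missing or
# malformed fields surface here rather than silently skewing the training data.
example = {"event_time": 1718000000, "user_name": "a.smith", "process_action": "create_remote_thread"}
print(from_edr(example))
```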

False positives are the inevitable result when models are trained on poor or narrow datasets. The AI might flag a legitimate admin action as a threat—or worse, fail to catch a stealthy attack because it doesn’t fit the learned pattern. If the SOC team stops trusting the system, its value drops to zero.

Then there’s integration. AI models need data pipelines, context, and some form of operational feedback loop. But many organisations, especially those with older infrastructure, aren’t ready for that. Legacy systems may not generate clean telemetry. APIs may be missing or unreliable. And automation often bumps into fragile environments or human approval workflows that weren’t designed for speed.

Without investment in architecture and interoperability, AI ends up isolated. The models might generate good insights, but they never make it into incident response or policy enforcement.

The Talent Bottleneck and Tool Drift

Another challenge is that many security teams aren’t staffed to support AI properly. It’s not just about knowing how the models work. It’s about understanding how to tune, retrain, interpret, and validate them in a security context.

AI models are not static. They drift. They decay. Threat patterns change, and so does infrastructure. A model trained on 2022 telemetry won’t hold up in 2025 cloud environments without regular updates.
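
Catching that decay usually means comparing what the model scores today against the distribution it was validated on. A minimal sketch, assuming alert scores were archived at training time, might compare the two with a two-sample statistical test:

```python
# Minimal sketch: detect drift by comparing the model's score distribution
# at validation time with recent production scores. Threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

def scores_have_drifted(training_scores, recent_scores, p_threshold=0.01):
    """Two-sample Kolmogorov-Smirnov test between baseline and recent scores."""
    result = ks_2samp(training_scores, recent_scores)
    return result.pvalue < p_threshold

# Example: baseline captured when the model was validated, versus the last
# 24 hours of production traffic (synthetic data here for illustration).
baseline = np.random.default_rng(0).normal(0.20, 0.05, size=5_000)
recent = np.random.default_rng(1).normal(0.35, 0.08, size=5_000)

if scores_have_drifted(baseline, recent):
    print("Score distribution has shifted: schedule retraining and a review.")
```

A failed check does not prove the model is wrong, but it is a cheap, automatable signal that the environment has moved and a human should take a look.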

And yet, most teams don’t have dedicated roles for this kind of model maintenance. According to the UK Cyber Security Skills Report, more than half of organisations lack professionals with AI and cybersecurity experience. Even among well-funded security teams, the skill overlap is thin.

This leads to another common pattern: underused or misused AI tools. A tool gets deployed, alerts start firing, but no one fully trusts or understands it. Over time, it becomes background noise—another platform in the stack that’s more checkbox than capability.

Ethics, Oversight, and Legal Risk

The ethical layer is often underestimated. AI decisions in cybersecurity don’t just affect firewalls; they affect people.

If an AI system misclassifies a user as a threat and locks their account, who’s accountable? What if it only flags certain user behavior patterns that happen to correlate with specific demographics, locations, or job roles?

Under UK GDPR and similar frameworks, automated decisions that affect individuals require transparency, fairness, and accountability. But most commercial AI systems provide very little visibility into how decisions are made. Scores are shown, just not the reasoning behind them.

The Information Commissioner’s Office has repeatedly warned against opaque systems making high-stakes decisions, particularly in contexts involving surveillance or behavioral profiling. In regulated sectors like finance and healthcare, this becomes a compliance risk as well as a security one.

Organisations deploying AI need governance. That includes:

  • Clear documentation of what the AI is doing and why
  • Review mechanisms to audit decisions and override them when necessary
  • Policies for data minimisation and privacy impact assessments
  • Transparent communication with affected users or stakeholders


Without this, even a well-meaning system can cross legal and ethical lines.
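
In practice, the first two points often come down to recording every AI-driven decision in a structured, reviewable form. The sketch below shows one illustrative shape for such a record; the field names and example values are assumptions, not a standard.

```python
# Minimal sketch: a structured, auditable record of an AI-driven decision.
# Field names and example values are illustrative, not a standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    model_name: str
    model_version: str
    input_summary: str           # what the model saw, minimised, no raw PII
    decision: str                # e.g. "require_mfa", "block_account"
    confidence: float
    rationale: str               # the signals that drove the decision
    reviewed_by: str | None = None
    overridden: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    model_name="login-risk-classifier",
    model_version="2024.11",
    input_summary="impossible-travel pattern for user 4821",
    decision="require_mfa",
    confidence=0.87,
    rationale="geo-velocity plus new device fingerprint",
)
print(json.dumps(asdict(record), indent=2))  # append to an immutable audit log
```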

AI Is a New Attack Surface

There’s also a growing recognition that AI systems can be targeted.

Just as traditional software has vulnerabilities, AI models can be manipulated. Adversaries can:

  • Poison the training data, injecting noise or subtle bias that affects model performance
  • Craft adversarial inputs—for instance, malware files that bypass classification by exploiting how the model interprets features
  • Reverse-engineer model outputs to map decision boundaries and game the system


The ENISA AI Threat Landscape Report calls out these risks explicitly, warning that security models themselves must now be part of the attack surface inventory.
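
Even a crude robustness check can make the adversarial-input risk tangible. The sketch below trains a toy classifier and tests whether small feature perturbations flip a "malicious" verdict; the data, feature meanings, and perturbation strategy are all illustrative, not a real evasion technique.

```python
# Minimal sketch: a crude evasion check against a trained classifier. It nudges
# features of a known-malicious sample and reports whether the verdict flips.
# The data, features, and model are toy placeholders for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

# Toy feature space (e.g. entropy, imported-API density, packer likelihood).
X = np.vstack([rng.normal(0.3, 0.1, (200, 3)),   # benign-ish cluster
               rng.normal(0.8, 0.1, (200, 3))])  # malicious-ish cluster
y = np.array([0] * 200 + [1] * 200)
clf = RandomForestClassifier(random_state=0).fit(X, y)

malicious_sample = np.array([0.82, 0.79, 0.85])

def verdict_flips(sample, step=0.05, budget=10):
    """Greedily nudge the features and check whether the 'malicious' label disappears."""
    candidate = sample.copy()
    for _ in range(budget):
        candidate = candidate - step  # small uniform perturbation per round
        if clf.predict(candidate.reshape(1, -1))[0] == 0:
            return True
    return False

print("Evasion found within budget:", verdict_flips(malicious_sample))
```

Real adversarial testing uses far more careful perturbations and constraints, but even this kind of smoke test shows how brittle a decision boundary can be.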

This means that security teams must go beyond standard configuration hardening. They need to:

  • Monitor for unexpected model behavior
  • Validate model inputs for integrity
  • Build threat models that include adversarial AI scenarios


These aren’t typical SOC functions—but they’ll become more common as AI continues to move into core security operations.
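
Input validation, for instance, can start as a thin integrity gate in front of the model. The expected ranges and field names below are assumptions made for illustration:

```python
# Minimal sketch: reject or flag model inputs that fail basic integrity checks.
# The feature names and expected ranges are illustrative assumptions.
EXPECTED_RANGES = {
    "bytes_out": (0, 10**12),
    "failed_logins": (0, 10_000),
    "hour_of_day": (0, 23),
}

def validate_model_input(features: dict) -> list[str]:
    """Return a list of integrity problems; an empty list means the input is usable."""
    problems = []
    for name, (low, high) in EXPECTED_RANGES.items():
        if name not in features:
            problems.append(f"missing feature: {name}")
        elif not (low <= features[name] <= high):
            problems.append(f"out-of-range value for {name}: {features[name]}")
    return problems

event = {"bytes_out": -50, "failed_logins": 2, "hour_of_day": 14}
issues = validate_model_input(event)
if issues:
    print("Hold the event and alert on pipeline integrity:", issues)
```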

Building AI Into Cybersecurity the Right Way


AI can be a powerful force multiplier—but only when deployed with care, context, and clarity. Here’s what that looks like in practice:

Start with the right use cases.
Focus on areas where AI improves visibility, reduces alert fatigue, or speeds up triage—not where the cost of error is high or human context is essential.

Clean your data first.
Before deploying AI, make sure the logs, metadata, and threat telemetry it relies on are high quality and complete. Bad input guarantees bad output.

Treat AI like any critical system.
That means version control, patching, monitoring, incident response plans, and audit logs, not just an install wizard.

Explain the outputs.
Your team should be able to justify and trace any decision made by the AI, especially if that decision leads to a security action.
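
Where the model type allows it, one way to do that is to report which features actually drive its decisions. The sketch below uses permutation importance from scikit-learn on a toy classifier; the model, data, and feature names are placeholders, and other explainability techniques may suit other model types better.

```python
# Minimal sketch: rank which features drive a classifier's decisions so an
# analyst can trace an alert to something explainable. Data and names are toys.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
feature_names = ["geo_velocity", "new_device", "failed_logins", "hour_of_day"]

X = rng.normal(size=(500, 4))
# Synthetic labels driven mostly by the first and third features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500) > 0.5).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name:>14}: {importance:.3f}")
```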

Pilot, measure, refine.
Don’t jump straight to enterprise-wide deployment. Start small. Run tests. Track performance. Adjust and improve before scaling.

Train the humans.
Build confidence by equipping your team with the knowledge to evaluate, fine-tune, and challenge the AI. The best security outcomes still come from human-machine collaboration.

Final Thought

AI isn’t a silver bullet. It’s not going to replace seasoned analysts or solve deeply rooted infrastructure issues. But it can enhance visibility, streamline response, and help teams do more with less, if used thoughtfully.

For Scottish organisations—and those beyond—it’s not about chasing the trend. It’s about understanding the technology, preparing for its risks, and using it as a tool, not a crutch. The decisions made today about how AI is introduced into security environments will shape resilience for years to come.

Because in the end, AI doesn’t protect systems. People do, with the right tools, the right data, and the right mindset.

Sources

  • National Cyber Security Centre (NCSC): www.ncsc.gov.uk
  • UK Cyber Security Skills Report, 2023
  • ENISA, AI Threat Landscape Report, 2024
  • Information Commissioner’s Office, AI Auditing Framework
  • World Economic Forum, Global Cybersecurity Outlook, 2024
  • MITRE ATT&CK Framework: attack.mitre.org
  • The Data Lab Scotland: www.thedatalab.com
