Four common misconceptions about using AI in security operations

Written by Eoin Hinchy, Co-founder & CEO, Tines

Published on May 15, 2025

At this stage in AI's evolution, we’ve all heard the big promises - and overpromises - from vendors. But what about the people on the front lines of security operations? How are real practitioners feeling about using AI in their day-to-day work?

In a recent webinar with guest speaker Allie Mellen, Principal Analyst at Forrester Research, we dug into how AI is actually being adopted in the SOC - what’s working, what’s not, and what's getting lost in the noise.

According to our Voice of Security 2025 research, most security leaders are feeling positive about AI’s potential, especially when it comes to helping defenders move faster. But some have been swept up in what Allie calls “The Blob” - a growing fear that AI will supercharge attacks and render analysts obsolete.

As with most things in security, the reality is more nuanced. So, in this post, we’re going to unpack four of the most common misconceptions about AI in the SOC.

1. AI will fully replace analysts 

The reality: human SecOps teams will be supported by AI.

Job security is one concern that’s top of mind for a lot of practitioners. Rather than eliminating security analysts altogether, AI will automate repetitive processes and optimize existing workflows, giving practitioners more time to:

  • Write detections

  • Investigate security incidents

  • Focus on proactive threat-hunting

  • Specialize in areas they’re interested in

This shift can also help solve some of the organizational challenges that have long plagued the traditional three-tier SOC model.

There’s a lot of talk about how AI will eventually evolve to be human-like, and we’ll have to interact with it accordingly. That’s not what the technology does.

  • Allie Mellen, Principal Analyst, Forrester Research

2. AI is most beneficial for entry-level personnel 

The reality: AI is most beneficial for experienced practitioners who know what they’re doing and want to move faster.

Hallucinations and incorrect answers are a known problem with AI, which is why validation is so critical. But successfully confirming that an AI-generated answer is correct means knowing the answer to the question you’re asking.

While an experienced practitioner can simply confirm an answer and move on, a junior analyst is likely using a chatbot because they don’t know the answer. They may act on incorrect information, or leave a thumbs-up on a wrong answer, which is then used to train the AI.
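To make that concrete, here’s a minimal sketch (in Python, with hypothetical names throughout) of the pattern this implies: only act on an AI-generated verdict when it can be confirmed against a source of truth, and route everything else to a human analyst.

```python
# Hypothetical sketch: gate an AI-suggested verdict behind a deterministic check.
# KNOWN_MALICIOUS and the function name are illustrative, not a real product API.

KNOWN_MALICIOUS: set[str] = {
    "44d88612fea8a8f36de82e1278abb02f",  # MD5 of the EICAR test file
}

def validate_ai_verdict(file_hash: str, ai_verdict: str) -> str:
    """Accept the AI's verdict only when it agrees with a trusted source."""
    trusted_verdict = "malicious" if file_hash in KNOWN_MALICIOUS else "unknown"

    if trusted_verdict == "unknown":
        # No ground truth available: a human reviews rather than acting on
        # (or thumbs-upping) the AI's answer.
        return "escalate_to_analyst"
    if ai_verdict == trusted_verdict:
        return "auto_close"  # AI agreed with ground truth; safe to proceed
    return "escalate_to_analyst"  # disagreement is a signal worth a human look

print(validate_ai_verdict("44d88612fea8a8f36de82e1278abb02f", "malicious"))  # auto_close
```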

When talking about how AI will benefit experienced practitioners, we also need to debunk the myth that AI, in its current form, can act as a junior-level assistant, handling processes like alert response and threat intel enrichment from start to finish.

AI won’t fully automate SecOps anytime soon - and likely not in our lifetimes. Right now, one of the best use cases for AI in the SOC is automating the tedious, time-intensive stuff, like writing incident reports. This may not be very exciting, but delegating the right work to AI can save a lot of time and heartache.
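For example, here’s a minimal sketch of that report-drafting use case. It assumes the OpenAI Python SDK, but any LLM API would work the same way, and the alert data and model choice are illustrative.

```python
# Draft an incident report from structured alert data for an analyst to review.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

alert = {
    "id": "INC-1042",
    "source": "EDR",
    "host": "fin-laptop-07",
    "summary": "Credential-dumping tool detected and quarantined",
    "actions_taken": ["host isolated", "credentials rotated"],
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model you've approved
    messages=[
        {"role": "system", "content": "Draft a concise incident report for analyst review."},
        {"role": "user", "content": str(alert)},
    ],
)

# The draft is a starting point, not a finished report: an analyst reviews
# and edits it before it goes anywhere.
print(response.choices[0].message.content)
```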

3. AI provides full environment visibility 

The reality: AI lacks full data access and contextual knowledge.

Analysts spend a lot of their time adding context to accurately prioritize and process alerts. AI alone doesn’t have this capability.

Before full environment visibility becomes possible with LLMs, Allie says we need to solve two complex challenges (the first is illustrated in the sketch after this list):

  1. Collecting all the necessary data in one place, on a continuous, automatic basis.

  2. Integrating all security tools in the tech stack and getting them to work together.
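To illustrate why the first challenge is hard, here’s a simplified sketch of the normalization it implies: every tool reports the same event in a different shape, so the data has to be mapped into one schema before an LLM (or anything else) can reason over the whole environment. The field names are representative, not tied to any specific product.

```python
# Normalize events from different tools into a single schema.
from dataclasses import dataclass

@dataclass
class NormalizedEvent:
    source_tool: str
    host: str
    event_type: str
    severity: str

def from_edr(raw: dict) -> NormalizedEvent:
    # EDR payloads use one set of field names...
    return NormalizedEvent("edr", raw["device_name"], raw["detection"], raw["sev"])

def from_siem(raw: dict) -> NormalizedEvent:
    # ...while SIEM alerts use another, for the same underlying host.
    return NormalizedEvent("siem", raw["hostname"], raw["rule_name"], raw["priority"])

events = [
    from_edr({"device_name": "web-01", "detection": "malware", "sev": "high"}),
    from_siem({"hostname": "web-01", "rule_name": "impossible_travel", "priority": "medium"}),
]
print(events)
```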

With the exception of products like Tines Workbench, LLMs don’t have access to the real-time, proprietary data businesses use to make critical security decisions.

I want to push back on the idea that we want to use AI for environment visibility. Is asking a chatbot about the vulnerabilities in your environment really the best way to get that information?

  • Allie Mellen, Principal Analyst, Forrester Research

4. Chatbots and copilots are the best way to introduce AI to the SOC 

The reality: using lots of chatbots can create more silos, and chat often isn’t the easiest way to interpret information.

So many practitioners, myself included, have spent the last decade working to break down silos between various technologies and dashboards. All the individual AI copilots out there could make the silo issue worse instead of better.

Chatbots can be useful for certain things, but security runs on well-structured workflows around SOAR, SIEM, EDR, and other operations. Historically, talking to chatbots hasn’t been part of those workflows, and that’s unlikely to change in the short term.
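A more workflow-native pattern looks something like the hypothetical sketch below: the LLM is one bounded step inside an automated pipeline, and its output lands in the tools analysts already use rather than in a separate chat window. The function names stand in for real SIEM, LLM, and ticketing integrations.

```python
# Hypothetical pipeline: enrich an alert, summarize it with an LLM, file a ticket.

def enrich(alert: dict) -> dict:
    alert["asset_owner"] = "finance-team"  # e.g., looked up from a CMDB
    return alert

def summarize_with_llm(alert: dict) -> str:
    # In a real workflow this step calls an LLM API; stubbed for the sketch.
    return f"{alert['title']} on {alert['host']} (owner: {alert['asset_owner']})"

def create_ticket(summary: str) -> None:
    print(f"Ticket created: {summary}")

def handle_alert(alert: dict) -> None:
    create_ticket(summarize_with_llm(enrich(alert)))

handle_alert({"title": "Suspicious login", "host": "vpn-gw-02"})
```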

There’s a reason the internet isn’t just pages of generated paragraphs, which is all a chatbot can give you - because it’s not the easiest way for humans to interpret information.

  • Allie Mellen, Principal Analyst, Forrester Research
