In our recent webinar, Is the Future of the SOC Autonomous?, guest speaker Allie Mellen, Principal Analyst at Forrester Research, joined Tines CEO Eoin Hinchy to explore the future of SOC automation. The conversation sparked more questions than we had time to answer — so here, Allie follows up on a few we couldn’t address live.
Q: If genAI does the reporting, what is the value added for the SOC analyst?
A: Handling the reporting is one of the main value adds for the SOC analyst. Most security analysts don't want to spend time writing up the report after an incident, and doing so can take an hour or more. Generative AI (genAI) can produce this report for the analyst quickly. GenAI is also being used to interpret potentially malicious scripts, which saves the analyst time, and for language translation, say from English to Japanese or vice versa, which helps global teams coordinate and converse in the language each member is most comfortable with.
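To make the reporting use case concrete, here is a minimal sketch of how a tool might assemble structured case data into a report-drafting prompt. The `Incident` fields and the `call_llm()` stub are illustrative assumptions rather than any specific product's API; in practice you would swap the stub for your LLM provider's SDK, and the draft would still go back to the analyst for review.

```python
# Minimal sketch: drafting a post-incident report with a genAI model.
# The Incident fields and call_llm() stub are illustrative assumptions,
# not any vendor's API; replace call_llm() with your provider's SDK call.

from dataclasses import dataclass


@dataclass
class Incident:
    incident_id: str
    severity: str
    summary: str
    timeline: list[str]        # analyst-recorded events, in order
    actions_taken: list[str]   # containment / remediation steps


def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to the LLM of your choice."""
    return "<model-generated report draft>"


def draft_report(incident: Incident) -> str:
    # Assemble the structured case data into a single reporting prompt.
    prompt = (
        "Write a concise post-incident report for the following case.\n"
        f"ID: {incident.incident_id}\nSeverity: {incident.severity}\n"
        f"Summary: {incident.summary}\n"
        "Timeline:\n" + "\n".join(f"- {e}" for e in incident.timeline) + "\n"
        "Actions taken:\n" + "\n".join(f"- {a}" for a in incident.actions_taken)
    )
    return call_llm(prompt)
```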
Q: Does genAI need to be agentic for it to be effective at doing SOC-type work?
A: It depends on the type of work being done. Generative AI does not necessarily need to be agentic to do things like automated triage, but it does need to be task-oriented and agent-based, meaning a single agent focused on a single job.
Forrester is seeing many implementations focus on one agent doing one specific task very well as opposed to being a jack-of-all-trades, which is helping to improve accuracy.
Retrieval-augmented generation (RAG) is especially helpful when you want to pull in organization-specific context, for example, incident response procedures that should inform the AI's output for a specific action. Agentic AI is when multiple agents interact with one another independently. We have yet to see this in production, and despite the marketing messages, it is unlikely to be implemented for at least another year due to trust and safety concerns.
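As a rough illustration of the RAG pattern described here, the sketch below grounds a model's recommendation in organization-specific incident response procedures. Production systems typically retrieve by embedding similarity over a vector store; naive keyword overlap is used here purely to keep the example self-contained, and the procedure text is invented for illustration.

```python
# Minimal RAG sketch: ground a model's recommendation in org-specific
# incident response (IR) procedures. Real systems use embedding similarity
# over a vector store; keyword overlap keeps this example self-contained.
# All procedure text below is made up for illustration.

IR_PROCEDURES = {
    "phishing": "Quarantine the message, reset the user's credentials, and notify the user.",
    "ransomware": "Isolate the host, preserve forensics, and engage the IR retainer.",
    "credential stuffing": "Force password resets and enforce MFA for affected accounts.",
}


def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank procedures by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        IR_PROCEDURES.items(),
        key=lambda kv: len(terms & set(f"{kv[0]} {kv[1]}".lower().split())),
        reverse=True,
    )
    return [f"{name}: {text}" for name, text in ranked[:k]]


def build_prompt(alert: str) -> str:
    """Augment the prompt with the retrieved, organization-specific procedure."""
    context = "\n".join(retrieve(alert))
    return (
        "Using ONLY the procedures below, recommend next steps for this alert.\n"
        f"Procedures:\n{context}\n\nAlert: {alert}"
    )


print(build_prompt("User in finance reported a suspected phishing email"))
```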
Q: How do you generally feel about using genAI for critical activities? Writing reports is fine, but could using it to lead an investigation and find evidence lead to false negatives or false positives?
A: This is an excellent point. Every single thing that AI does needs to be validated by the analyst. At the end of the day, the analyst is responsible for the actions they take, not the AI, so it's critical that the analyst fully understands what the AI has done. While it can be useful for a generative AI agent to find evidence or triage an alert, the analyst must review the results of that analysis and confirm it is accurate, just as they would if any other person had done the work.
Q: The “autonomous SOC” at the moment focuses on initial triage, but what comes next? Are there other areas where you have seen these tools develop more depth beyond the methods mentioned?
A: The next level of generative AI support is the task-specific, agent-focused approach. Users should not need to build out prompts unless they are building specific workflows or want a specific query; ideally, the result is already presented in the platform. Task-specific agents can do this by automating things like initial triage of alerts or some aspects of investigation.
The next step is to build out these agents with a level of accuracy and with the appropriate explainability such that we can be confident in the output.
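As a loose sketch of this single-purpose pattern, the example below is an agent that does exactly one thing: triage an alert and return its verdict together with the evidence behind it, so an analyst can validate the output, echoing the earlier answer on validation. The scoring rules are a deterministic stand-in for the model's judgment; the fields and thresholds are illustrative assumptions, not a product spec.

```python
# Minimal sketch of a single-task triage agent: one narrow job, with an
# explainable result an analyst can validate. Rule-based scoring stands in
# for the model's judgment; fields and thresholds are illustrative.

from dataclasses import dataclass, field


@dataclass
class TriageResult:
    alert_id: str
    verdict: str                        # "escalate" | "suppress"
    rationale: list[str] = field(default_factory=list)
    needs_analyst_review: bool = True   # AI output is never auto-final


def triage(alert: dict) -> TriageResult:
    """Single-purpose triage agent: score one alert, explain the score."""
    rationale = []
    score = 0
    if alert.get("asset_criticality") == "high":
        score += 2
        rationale.append("alert touches a high-criticality asset")
    if alert.get("matches_known_ioc"):
        score += 3
        rationale.append("indicator matches a threat-intel IOC feed")
    verdict = "escalate" if score >= 3 else "suppress"
    return TriageResult(alert["id"], verdict, rationale)


result = triage({"id": "A-1042", "asset_criticality": "high", "matches_known_ioc": True})
print(result.verdict, result.rationale)
```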
Q: Do you believe it's realistic to have 24/7 analyst SOC shifts without an L1-to-L3 structure? Is it economically feasible?
A: Forrester sees many teams succeeding without the L1-to-L3 structure, for a few reasons. First, by making sure every analyst takes every alert from start to finish, analysts quickly learn on the job what they need to do and why. This takes their skills to the next level, which can only happen through hands-on work. It also builds up your bench of talent, so that if a more veteran team member leaves, other analysts can step in to fill the void without needing to be trained up in the moment. It also enables more experienced analysts to mentor newer staff, which benefits both: newer analysts learn faster, and veterans expand their contribution and take on a leadership role on the team. Lastly, unfortunately, the reality of many SOCs is that analysts build up their skills and then leave to pursue other opportunities.
Instead of building a SOC to maintain the status quo, we recommend building a SOC for the reality of security operations today: one that is meant to train and uplevel.
This will also aid with retention, especially when combined with prioritizing analyst experience.
Q: As a Security GRC Analyst at a SaaS company, I think not all changes in SOC or international standards should be equated with more work or training; some actually make certain controls easier to follow and implement. There is a need for better management on a wider scale, rather than diving into one change and making it bigger than it is.
A: I think there may be some confusion on the topic here. When we say SOC, we are not talking about compliance frameworks like SOC 2 or SOX; we are talking about the security operations center (SOC). Security operations is unfortunately still a very manual practice, and because of that, automation is fundamental. Training is also fundamental, but it is sorely lacking for security analysts in the security operations center. According to data from Microsoft, it takes 40 hours of user training to start using Security Copilot effectively. That is a significant amount of time to spend just to learn how to use a tool, and we haven't seen the ROI to justify it just yet.
Watch the full webinar on-demand.