CISOs are concerned about new AI pressures - what can they do about it?

Written by Eoin Hinchy, Co-founder & CEO, Tines

Published on August 12, 2024

The pressure on security teams has never been greater. With an ever-evolving threat landscape, resource constraints, and now the rapid adoption of artificial intelligence (AI) technologies, Chief Information Security Officers (CISOs) are facing unprecedented challenges. 

This was one of the clear takeaways from our recent report CISO perspectives: separating the reality of AI from the hype, in which 53 CISOs shared their opinions and experiences of AI’s impact on their security operations.

94% of CISOs expressed concern that AI will increase pressure on their teams. 

While AI is clearly here to stay and will only become a bigger part of business operations, there are practical steps CISOs can take now to manage the additional demands it places on their teams.

Before we get there, let's take a closer look at why AI is causing CISOs so much concern.

AI for security teams: big promises and even bigger demands 

AI is revolutionizing many aspects of cybersecurity, but it's also adding a new layer of complexity and risk for security teams. 

Rapid AI adoption by attackers has left security teams scrambling to catch up, while the wider business often rolls out AI with only cursory vetting of security concerns. In our survey, for example, one respondent said they are seeing AI implemented without clear ownership or decision-making processes for how it's used.

CISOs are also struggling with a shortage of security professionals with AI expertise, putting additional strain on existing teams.

At the same time, AI systems are expanding organizations' attack surface by introducing new vulnerabilities that must be fully understood and managed to reduce risk. Without the right personnel in place, meeting both the cybersecurity and regulatory requirements this creates is proving challenging.

And, as mentioned above, security teams are not the only ones using AI. Cybercriminals are adopting it to automate their own malicious activity and launch more sophisticated attacks, putting further strain on security teams to anticipate and counter new threats.

Taken together, these demands mean AI is adding to the pressure on security teams that are already under incredible strain. In our Voice of the SOC report, 63% of security professionals reported some form of burnout, 81% said their workloads had increased over the previous year, and 50% said their security teams are understaffed.

While there are no quick fixes for the causes of security team stress, we wanted to share some of the strategies that have proved effective for security leaders and members of the community. Let’s dive in.

4 strategies to help ease AI stress on security teams 

Though AI is yet another source of pressure on security teams, there are steps CISOs can take to temper the strain that growing AI adoption places on security professionals.

1. Invest in AI literacy and skills among the team 

  • Encourage and support certifications in AI and machine learning for team members

  • Foster a culture of continuous learning by forming an AI subject matter expert (SME) group

  • Share resources or create an automated news feed on how AI is being used for cyberattacks 
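
As a rough illustration of that last bullet, here is a minimal Python sketch of an automated news feed, assuming the third-party feedparser library and two hypothetical RSS feed URLs; the keyword list is illustrative, and the same idea could just as easily be built in a no-code workflow.

```python
# Minimal sketch: surface recent news items that mention AI-driven attack techniques.
# Feed URLs and keywords below are illustrative placeholders, not recommendations.
import re
import feedparser  # third-party: pip install feedparser

FEEDS = [
    "https://example.com/security-news.rss",  # hypothetical feed URL
    "https://example.org/threat-intel.rss",   # hypothetical feed URL
]
AI_TERMS = re.compile(r"\b(ai|llm|deepfake|machine learning)\b", re.IGNORECASE)

def ai_threat_headlines() -> list[tuple[str, str]]:
    """Return (title, link) pairs whose titles mention an AI-related term."""
    hits = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            title = entry.get("title", "")
            if AI_TERMS.search(title):
                hits.append((title, entry.get("link", "")))
    return hits

if __name__ == "__main__":
    for title, link in ai_threat_headlines():
        print(f"- {title}\n  {link}")
```

Run on a schedule, the output can be posted to a team channel so everyone sees the same picture of how attackers are using AI.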

2. Invest in secure AI-enhanced workflow automation 

  • Use AI-enhanced workflow automation to streamline operations

  • Automate routine tasks to free up analyst time for more complex, strategic work

  • Ensure human oversight to verify the accuracy of AI outputs
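
To make the last two bullets concrete, here is a minimal Python sketch of a routine triage step that runs end to end only when confidence is high, with an analyst approving everything else; lookup_reputation and block_indicator are hypothetical stand-ins for whatever enrichment and response tooling your team already uses.

```python
# Minimal sketch: automate routine enrichment, keep a human in the loop for enforcement.
# Both helper functions are hypothetical placeholders for your own integrations.

def lookup_reputation(indicator: str) -> dict:
    """Placeholder for an automated enrichment step (threat intel lookup, AI summary, etc.)."""
    return {"indicator": indicator, "verdict": "suspicious", "confidence": 0.72}

def block_indicator(indicator: str) -> None:
    """Placeholder for the response step (firewall rule, EDR block, ticket, etc.)."""
    print(f"Blocked {indicator}")

def triage(indicator: str, auto_threshold: float = 0.95) -> None:
    result = lookup_reputation(indicator)   # routine work handled automatically
    if result["confidence"] >= auto_threshold:
        block_indicator(indicator)          # only very high-confidence verdicts are fully automated
    else:
        # Lower-confidence (or AI-generated) verdicts are reviewed by an analyst first.
        answer = input(f"{result} - block this indicator? [y/N] ")
        if answer.strip().lower() == "y":
            block_indicator(indicator)

if __name__ == "__main__":
    triage("203.0.113.42")  # documentation IP used purely as an example
```

The threshold is the design choice that matters: it determines which decisions the AI makes alone and which always get human eyes.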

3. Tweak vendor due diligence guidelines to include AI 

  • Update evaluation criteria for security vendors to include AI-specific considerations

  • Avoid the demoware trap – test first in low-risk use cases and incrementally build trust in new AI tools

  • Clearly define your goals and create a scorecard for assessing AI vendors - our checklist is a great place to start
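
To illustrate the scorecard idea, the snippet below weights a handful of AI-specific criteria and produces a single comparable score per vendor; the criteria, weights, and example scores are illustrative placeholders that your own goals and checklist should replace.

```python
# Minimal sketch: a weighted scorecard for comparing AI vendors.
# Criteria, weights, and example scores are illustrative only.

CRITERIA = {                        # weight of each criterion (sums to 1.0)
    "data_handling": 0.30,          # retention, training opt-out, data residency
    "model_transparency": 0.20,
    "security_certifications": 0.20,
    "human_oversight_controls": 0.15,
    "integration_effort": 0.15,
}

def score_vendor(scores: dict[str, int]) -> float:
    """Combine per-criterion 1-5 scores into one weighted score."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

vendors = {
    "Vendor A": {"data_handling": 4, "model_transparency": 3, "security_certifications": 5,
                 "human_oversight_controls": 4, "integration_effort": 3},
    "Vendor B": {"data_handling": 5, "model_transparency": 2, "security_certifications": 3,
                 "human_oversight_controls": 5, "integration_effort": 4},
}

for name, scores in sorted(vendors.items(), key=lambda kv: -score_vendor(kv[1])):
    print(f"{name}: {score_vendor(scores):.2f} / 5")
```

Scoring every vendor against the same weighted criteria keeps the evaluation from drifting toward whichever demo looked best.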

4. Remain laser-focused on data privacy 

  • Implement strong guardrails for AI tools to reduce the risk of false outputs and improper data usage (see the sketch after this list)

  • Understand how AI tools handle your data - ideally, keep it within your own infrastructure and prevent it from being used to train models

  • Ensure compliance with relevant data protection regulations
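
As a rough sketch of the guardrail mentioned in the first bullet, the snippet below redacts obvious PII before a prompt ever leaves your environment; the regex patterns are deliberately simple and send_to_model is a hypothetical placeholder for your actual AI integration.

```python
# Minimal sketch: strip obvious PII before text is sent to an external AI tool.
# Patterns are illustrative and far from exhaustive; send_to_model is a placeholder.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labelled placeholder so context survives but the data does not."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_model(prompt: str) -> str:
    """Hypothetical placeholder for your AI integration; only call it with redacted input."""
    return f"(model response to: {prompt!r})"

if __name__ == "__main__":
    raw = "User jane.doe@example.com reported a suspicious login from 198.51.100.7"
    print(send_to_model(redact(raw)))
```

A guardrail like this sits alongside, not instead of, contractual controls such as opting out of model training and keeping data processing within approved regions.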

Ensuring AI benefits outweigh risks for security teams 

While AI undoubtedly adds new pressures to the already challenging world of cybersecurity, it also offers tremendous opportunities for enhancing security operations. 

By taking proactive steps to address AI-related concerns, CISOs can position their teams to benefit from AI while managing the associated risks. Remember, the goal isn't to eliminate pressure entirely but to build resilience and adaptability in the face of evolving challenges.

By investing in AI literacy, embracing secure automation, updating vendor evaluation processes, and prioritizing data privacy, security teams can turn the AI challenge into an opportunity for growth and innovation. As we navigate this new landscape, staying informed, adaptable, and focused on core security principles is key.

How are leading CISOs approaching AI? Read the full results of our survey in our report.