A step-by-step guide for IT and security leaders
Co-founder and CEO, Tines
AI has the potential to revolutionize how businesses operate, but its impact on the enterprise has been underwhelming. Years into the latest wave of AI innovation, very few teams have truly transformed their operations through AI. McKinsey recently reported that just 10% of companies they surveyed had successfully implemented generative AI at scale for any use case.
When we explore why AI has fallen short of expectations, the same challenges surface time and time again. AI adoption is blocked by a small, consistent group of issues - misaligned priorities, underwhelming tools, lack of relevant skills, inflexible (or non-existent) AI policies, evolving regulations, and - most significantly - security and privacy risks.
The good news? For proactive IT and security leaders, security and privacy don’t have to be obstacles. Tackling these specific security challenges head-on, with a security-first approach to implementing AI, can actually help us unlock the technology's full potential.
Balancing innovation and security can feel like threading a very small needle, and it’s easy to see why skepticism and AI hype fatigue persist.
AI undoubtedly introduces complex security challenges, particularly when it comes to safeguarding organizational and customer data. But restricting AI’s access to tools and systems isn’t a perfect solution either. AI’s impact depends on access to proprietary data and the ability to perform tasks on our team’s behalf. While we must tread carefully, we should avoid limiting AI to the extent that it fails to deliver value.
When it comes to adopting AI securely, there’s no one-size-fits-all framework or solution. Every organization’s needs and objectives are different.
At Tines, we’re fortunate to work with some of those exceptional teams that have already achieved game-changing results with AI. Their experiences and successes formed the inspiration behind this guide.
These organizations have succeeded by investing in solutions with robust privacy and security guardrails, ensuring integration with evolving tech stacks, embracing AI alongside workflow orchestration and automation, and involving security and IT in managing AI priorities and risks across the enterprise.
With this guide, we hope to help IT and security leaders make measurable progress toward secure AI adoption. Whether you’re facing a lack of AI governance, struggling to define priorities, or navigating a sea of hyperbolic vendor claims, this guide offers actionable strategies to help you keep moving forward.
You’ll find step-by-step guidance to help you overcome blockers, accurately assess your AI needs, and securely deploy solutions that drive real business value. And we’re not just sharing our perspective - this content is shaped by insights from top CIOs and CISOs, offering advice rooted in real-world experience.
We really hope this guide brings you closer to realizing AI’s value for your organization.
Chapter 1
High potential, limited impact
AI’s impact on the enterprise has been underwhelming so far due to rigid products that fail to connect data across technology stacks and extensive security and privacy concerns.
Pressure to adopt AI from business leaders and employees is growing, but progress is slow due to complex challenges like regulatory uncertainty and fragmented decision-making.
Leaders remain cautious about exposing sensitive data or increasing attack surfaces, while evolving regulations make compliance a challenge.
Unauthorized tools and "shadow AI" introduce security blind spots as employees unknowingly share sensitive data in seemingly harmless prompts, which LLMs can collect and store.
AI enables attackers to move faster, increasing pressure on security teams to respond. Advanced processes are needed to keep pace with emerging threats.
While AI is portrayed as transformative for attackers, most threats still rely on familiar tactics like phishing. Overhyping AI’s role can distract already stretched teams from addressing proven vulnerabilities.
Despite the hurdles, some organizations are beginning to unlock the potential of AI in targeted ways. Next, we’ll take a look at an example from the world of security operations.
Security engineers Hela Lucas from Samsara and Kieran Walsh from Ekco share how, in just a few months, they’ve experienced the benefits of using secure and private native AI features within Tines’ workflow orchestration and automation platform.
“AI-powered spam filters have drastically improved the quality of life for our on-call team by reducing false escalations.”
– Hela Lucas, Security Operations Engineer, Samsara
“We use AI to translate technical information into readable language, speeding up response times and ensuring stakeholders understand the issues.”
– Hela Lucas, Security Operations Engineer, Samsara
“AI-generated ticket summaries save us so much time - around 15 minutes per case.”
– Kieran Walsh, SOC Engineer, Ekco
“AI helps provide additional context and explanations in a more human-friendly way, enabling us to focus on more complex tasks.”
– Kieran Walsh, SOC Engineer, Ekco
“AI has already made things a lot quicker. The time to onboard and integrate a new toolset into our analyst’s tech stack is now just five minutes.”
– Kieran Walsh, SOC Engineer, Ekco
Recent surveys by Salesforce and Tines highlight the most pressing blockers for IT and security leaders.
Shared challenges, distinct priorities: While CIOs and CISOs share common concerns about security, privacy, and skills gaps in AI adoption, their priorities diverge in key areas. CIOs struggle more with identifying high-value use cases and proving ROI, whereas CISOs are more preoccupied with tackling misaligned priorities and addressing the limitations of inflexible technologies.
Chapter 2
Before security and IT leaders can begin their AI adoption journey, they need to establish which stakeholders will be involved and what policies and processes need to be considered. They need to decide who has ultimate authority and how decisions will be made.
Given that 77% of CISOs cite "policies and perceptions" as a blocker to AI adoption, ensuring alignment across business units through clear charters and protocols is crucial.
“It’s so important that the CISO gets a seat at the table, in terms of creating an AI Center of Excellence, especially when we're thinking about the increased security risks. It's not just about the bad actors knocking on our door, but it is also securing all layers of the LLM, as well as the automation behind it - think holistic aspect of the AI lifecycle, not silos.”
- Gina Yaccone, Regional and Advisory CISO, Trace3
As AI becomes increasingly integrated into business operations, understanding and mitigating potential risks is crucial to protecting organizational assets, maintaining regulatory compliance, and ensuring quality AI model data.
These risks reflect the challenges faced by technology leaders. As we saw in a previous section, CIOs cite issues such as "security and privacy threats" (57%) and "lack of trusted data" (52%), while CISOs emphasize "data privacy" (66%) and "misaligned priorities" (51%). Aligning leadership perspectives early can help expedite the process of identifying and addressing these risks effectively.
Organizations must navigate a complex web of emerging AI regulations that vary across industries and geographies. This involves understanding legal frameworks such as:
AI systems are fundamentally data-driven, making robust data protection critical. Leaders must protect against:
The "black box" nature of many AI models presents significant ethical and operational risks. These risks include:
Comprehensive risk management requires a holistic and proactive approach. Security and IT teams should:
Effective AI risk management demands clear organizational guidelines. Leaders should consider:
“I’ve come across a couple of organizations that are fascinating in their ability to prescribe what AI models certain business units can use. For example, in HR, you use this model to help support your decision-making, and if you need a new one, you can go through our selection process. So you have a policy that team members will feel informed by, enables them with the right technology for the right use case, and helps them feel secure and confident while they use it.”
- Matt Hillary, VP, Security, and CISO, Drata
Organizations must develop dependable frameworks that protect both tech assets and organizational integrity while maximizing the transformative potential of AI technologies. Let’s explore key best practices for achieving this balance.
Begin with controlled, low-risk AI implementations that allow for gradual learning and refinement. Pilot projects enable organizations to identify potential vulnerabilities, assess performance, and develop sophisticated governance mechanisms without exposing critical systems to widespread risk.
Equip your team with prompt engineering expertise to enhance the reliability and predictability of AI systems. Well-designed prompts can dramatically reduce unintended outputs, mitigate potential security risks, and optimize AI performance across a wide range of applications.
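One practical way to make prompts reliable is to encode the constraints once, in a reusable template, rather than rewriting them ad hoc for each call. The sketch below illustrates the idea for a hypothetical SOC summarization task; the template text and function names are ours, not part of any specific product.

```python
# A minimal sketch of a reusable, constrained prompt template.
# The rules travel with every call, so outputs stay predictable.

PROMPT_TEMPLATE = """You are a SOC assistant. Summarize the alert below for a non-technical stakeholder.

Rules:
- Use only information present in the alert; if a detail is missing, say "unknown".
- Respond in at most three sentences.
- Do not include raw indicators (IPs, hashes, domains) in the summary.

Alert:
{alert_text}
"""

def build_prompt(alert_text: str) -> str:
    """Render the template so the same constraints apply to every request."""
    return PROMPT_TEMPLATE.format(alert_text=alert_text.strip())
```

Centralizing prompts like this also makes them reviewable assets: security teams can audit and version the template just like any other configuration.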
Maintain critical human oversight in AI decision-making processes. Human judgment provides essential context, ethical grounding, and nuanced understanding that AI cannot independently replicate, serving as a crucial safeguard against potential errors or biased outcomes.
Create robust technical and procedural constraints that guide AI behavior within predetermined boundaries. These guardrails include essential measures to prevent AI-generated "hallucinations" or false outputs, ensure compliance with data privacy regulations, and mitigate legal risks associated with AI decisions and actions.
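A common form of technical guardrail is validating model output against an allow-list before any action is taken, falling back to a human when validation fails. This is a simplified sketch under assumed conventions (JSON output, an illustrative action allow-list), not a prescription for any particular platform.

```python
import json

# Illustrative allow-list; a real deployment would derive this from policy.
ALLOWED_ACTIONS = {"close_ticket", "escalate", "request_more_info"}

def validate_ai_action(raw_output: str) -> dict:
    """Parse the model's JSON output and reject anything outside the allow-list.

    Raises ValueError so the calling workflow can route the case to a
    human reviewer instead of acting on an unverified output.
    """
    try:
        action = json.loads(raw_output)
    except json.JSONDecodeError as exc:
        raise ValueError("AI output was not valid JSON") from exc
    if action.get("name") not in ALLOWED_ACTIONS:
        raise ValueError(f"Action {action.get('name')!r} is not permitted")
    return action
```

The key design choice is that the guardrail fails closed: anything unexpected stops the automation rather than letting a hallucinated action through.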
Develop real-time monitoring systems that track AI performance, detect anomalies, and enable immediate intervention. Advanced monitoring can identify potential security breaches, performance degradation, or unexpected behavioral patterns.
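Even a simple rolling failure-rate check can surface performance degradation early. The class below is a minimal sketch of that idea; the window size and threshold are illustrative defaults, not recommendations.

```python
from collections import deque

class AIHealthMonitor:
    """Track recent AI task outcomes and flag when the failure rate spikes."""

    def __init__(self, window: int = 100, max_failure_rate: float = 0.1):
        # deque(maxlen=...) keeps only the most recent `window` outcomes.
        self.outcomes = deque(maxlen=window)
        self.max_failure_rate = max_failure_rate

    def record(self, success: bool) -> None:
        self.outcomes.append(success)

    def is_degraded(self) -> bool:
        """True when failures in the recent window exceed the threshold."""
        if not self.outcomes:
            return False
        failures = self.outcomes.count(False)
        return failures / len(self.outcomes) > self.max_failure_rate
```

In practice such a signal would feed an alerting pipeline, so that a spike in failed or rejected outputs triggers human review before bad results accumulate.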
Implement advanced cybersecurity measures specifically designed for AI systems, including encryption of training data and model parameters, secure access controls, and regular AI security audits.
Create organization-wide AI literacy and security awareness programs. Educate employees about potential risks, ethical considerations, and best practices for responsible AI interaction and management.
Design AI systems and governance frameworks that can quickly adapt to emerging technologies, regulatory changes, and evolving security landscapes. Build in architectural flexibility so workflows can evolve without disruption.
“I think it's important to push your vendors on their AI philosophy, especially when you think about AI for security operations. Most of the foundational models, if you read through their documentation, say, ‘Hey, be careful, our AI can and will hallucinate.’ The problem is that AI won't tell you if it's hallucinating, it'll just do it. So you need to have a good story around falling back to a human or sanity-checking the output of an AI. If your vendor doesn't have a good answer around that, that's going to be a little bit of a red flag.”
- Matt Muller, Field CISO, Tines
Organizations must approach AI implementation with a clear understanding of their strategic objectives and a well-defined framework for measuring success. This systematic approach ensures that AI investments align with business goals to deliver measurable improvements.
Before implementing AI solutions, organizations should:
Create specific, measurable objectives across multiple timeframes:
6-month milestones
12-month milestones
24-month milestones
Monitor success through diverse metrics:
Quantitative Measures
Qualitative Indicators
“The most important thing to accomplish in the early stages of adoption is to reach agreement with business leaders about how to measure AI success. IT leaders may focus on things like time savings or productivity improvements, whereas business leaders may be far more concerned about reducing time to purchase or increasing customer promoter scores.”
- Mark Settle, Seven-time CIO
Sorting through an increasingly complex AI vendor landscape is no easy task, but vendors can be broken down into three basic categories:
Assessing AI tools requires a methodical approach, and, ideally, a standardized scorecard to determine the best fit for a given use case. Consider these attributes when evaluating new AI tools:
Trust is crucial. Seek AI tools with robust security measures, preferably within your infrastructure. Investigate data handling, training processes, and alignment with your company's AI policy and compliance regulations.
Avoid purchasing flashy tools without clear purpose. Define specific business challenges, compare unique benefits over personal AI solutions like ChatGPT, and carefully assess pricing models. At the very least, AI should provide meaningful assistance in automating routine tasks.
Recognize that no AI tool is 100% perfect. Understand expected accuracy rates, potential false positives, and bias reduction efforts. Maintain human oversight and consider trialing the product or speaking with existing customers.
Evaluate AI response times during live demos. Slow AI can hinder team productivity, so ensure the tool provides timely, efficient outputs.
Choose tools that can grow with your organization. Request metrics on data input volume, output capacity, error rates, and performance under expanding team needs.
Assess user-friendliness and prompt engineering requirements. Seek tools that are intuitive to all users. Using AI should enhance process effectiveness rather than add unnecessary complexity.
Look for tools offering multiple models for different task complexities. The ability to switch between models can significantly impact ROI and performance.
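The attributes above can be rolled into a standardized scorecard so that every candidate tool is judged the same way. Below is a minimal sketch; the attribute names mirror the criteria in this section, but the weights are purely illustrative and should be set by each organization.

```python
# Illustrative weights (must sum to 1.0); tune these to your priorities.
WEIGHTS = {
    "security": 0.30,      # trust, data handling, compliance
    "value": 0.25,         # fit to a defined business challenge
    "accuracy": 0.20,      # error rates, bias mitigation
    "speed": 0.10,         # response times in live demos
    "scalability": 0.10,   # performance as usage grows
    "usability": 0.05,     # intuitive for all users
}

def score_vendor(ratings: dict) -> float:
    """Combine 1-5 ratings per attribute into one weighted score out of 5."""
    return sum(WEIGHTS[attr] * ratings[attr] for attr in WEIGHTS)
```

Scoring every vendor against the same rubric keeps demos comparable and makes it easier to justify the final choice to stakeholders.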
Before you contact any vendors, establish the driving force behind your search. Are you looking for a tool that seamlessly integrates with your workflows and helps your teams work faster? Reduces barriers to entry? Set measurable goals and use those goals to create a wish list to use in interactions with vendors.
It’s not hard to create an impressive demo of a less-than-impressive AI tool. Demos are your biggest opportunity to learn how the tool operates in real time, so ask the vendor to demo some prompts that your team would actually use. If something seems too good to be true, trust your instincts and challenge the vendor with additional questions or specific requests that demonstrate the tool's power under real-world conditions.
That magical AI functionality that demoed so well? Time to put it through its paces! If you decide to progress to a proof of concept (POC), be sure to choose a high-impact workflow that closely mimics the types of tasks you want to automate – a good vendor will be excited by the challenge!
As you narrow down your options, consider the pricing model, not just the price tag. Is there a flat fee for AI capabilities or does the cost increase with usage? If so, find out what limits apply and how the incurred costs will be tracked, so there are no surprises.
“I see a desire for more transparency in the AI space. From a product perspective, it's about being explicit and letting customers use what works best and what's approved by them and helps their environment. It’s not about [vendors] dictating to customers what needs to be there.”
- Mandy Andress, CISO, Elastic
A well-designed AI pilot program serves as a critical first step in understanding the potential benefits, risks, and operational implications of AI tools while maintaining robust security protocols. Your pilot program should prioritize organizational learning, tech assessment, and risk management.
A controlled setup allows organizations to learn and refine their AI approach before wider implementation. This approach provides a low-risk environment to:
Select a specific, well-defined use case that aligns with organizational goals.
Relevant, high-quality data is crucial for AI success. Conduct a thorough data audit to verify:
Before turning on AI technology, be sure to carry out security due diligence. Some key checklist items include:
Implement robust monitoring mechanisms to track AI tool performance, accuracy, and potential security anomalies.
Create multiple channels for stakeholders to provide insights, concerns, and observations. Develop anonymous and transparent feedback mechanisms, and include perspectives from:
Recognize that initial implementations will have limitations and potential failures.
Maintain clear records of your pilot program.
A well-executed AI pilot program is more than a technological experiment - it's a strategic approach to understanding and safely integrating AI into your organization.
“It's critical to not place 'all your eggs in one basket' in experimenting with initial use cases. You want to spread your bets across a cross-section of opportunities and not simply hope for the best with one or two initial prototyping experiments. Business leaders may be enthralled by AI at the moment but experience has shown that their enthusiasm for new technology quickly evaporates if it fails to produce immediate tangible results.”
- Mark Settle, Seven-time CIO
If you’ve completed steps one through seven, don’t overlook the importance of reporting on the results of your pilot program.
Use the priorities, goals, and KPIs established in Step 4 to assess the results and share the report with stakeholders. Based on the outcomes, start the process again at Step 7 to pilot a new use case or AI tool. Repeat the cycle and iterate as necessary.
Conclusion
AI has the potential to revolutionize how enterprises operate. But successful AI implementation is not always as seamless as some vendors might claim. And AI is undeniably complicating the process of protecting an organization’s data, mitigating risk, and ensuring compliance with evolving regulations.
By adopting a disciplined approach to AI governance and security practices, organizations can not only realize AI’s potential but also enhance their overall security posture in the process.
Organizations that are thoughtful in establishing AI leadership, policies, and security protocols will be able to thrive without creating new risks, helping the enterprise adopt AI technologies with speed and confidence.
At Tines, we’re committed to delivering meaningful AI applications that build on the robust capabilities of workflow orchestration and automation. This includes native AI features that are secure and private by design yet powerful enough to drive meaningful business outcomes, and products like Workbench - a Tines-powered AI chat interface where you can take action and access proprietary data in real time, privately and securely.
This approach ensures AI becomes an IT leader’s trusted ally rather than a source of new challenges.
In our conversations with Elastic’s CISO Mandy Andress, she reflected, “I want to be able to look back five, eight, 10 years from now and look at this as the dark ages of security. How did we even operate? We do things so differently now. How did we even succeed with how we did things back then?”
This is the future state we’re excited about - a future where AI has proven its value, seamlessly integrated across all business units as an indispensable tool for enhancing team effectiveness.
Achieving this outcome begins with the steps your organization takes today, tomorrow, and over the coming year - steps that position AI as a catalyst for innovation while maintaining the highest standards of security and trust.
Learn more about AI in Tines and Tines Workbench at tines.com/ai.