MCP security is non-negotiable for AI-driven organizations

Written by Thomas Kinsella, Co-founder and CCO, Tines

Published on August 13, 2025

Model Context Protocol (MCP) is gaining traction because it lets LLMs interact with live systems and enhance their context by retrieving and managing relevant real-time information. On their own, LLMs can’t query Salesforce, trigger an Okta password reset, or fetch context from your SIEM. MCP bridges that gap by connecting AI models to real-world APIs, powering AI applications like retrieval-augmented generation and multi-step agent workflows.

MCP servers are fast to deploy. Teams can spin up a server, wrap a few endpoints, and start pulling live data with ease. This quick setup is why security and operations teams are already testing MCP to enrich alerts, fetch records, and search systems.
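To make that concrete, here is a rough sketch of how little it takes to wrap an internal endpoint as an MCP tool. It assumes the official Python SDK (the "mcp" package) and a hypothetical ticket API, so treat it as an illustration rather than a reference implementation:

```python
# A rough sketch of wrapping one internal endpoint as an MCP tool, assuming the
# official Python SDK ("mcp" package) and the "requests" library are installed.
# The ticket API URL and token variable are hypothetical placeholders.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-lookup")  # the server name shown to connecting clients

@mcp.tool()
def get_ticket(ticket_id: str) -> str:
    """Fetch a ticket record from an internal API by its ID."""
    resp = requests.get(
        f"https://tickets.example.internal/api/tickets/{ticket_id}",
        headers={"Authorization": f"Bearer {os.environ['TICKET_API_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # Default stdio transport: the client launches this process and speaks
    # JSON-RPC over stdin/stdout.
    mcp.run()
```

A handful of lines like these puts a live system within reach of an LLM, which is exactly why the controls discussed below matter.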

Convenience alone isn’t enough. Many MCP servers lack scoped access, strong authentication, and full visibility.

That’s why we’ve released the MCP action, allowing you to create an MCP server through the Tines storyboard. You get the full security, flexibility and control of the Tines platform, including scoped access by team or role, visibility into every action, and human approvals for critical steps. 

This builds on our launch of Workbench last year, which gave teams a secure way to connect AI assistants to real-time and proprietary information. Now that same security and control extends to any client that supports MCP. 

Why MCP security matters  

Researchers from Backslash Security discovered hundreds of MCP servers exposed on the web due to misconfigurations. Many lacked authentication, exposing sensitive data and enabling attacks like remote code execution, context poisoning, and command injection.

As MCP adoption grows, companies must limit data access, enforce strict tenant isolation, maintain detailed logs, and ensure manual oversight. These steps are critical to securing AI workflows built on MCP technology.

As AI connections multiply, the risks become impossible to ignore. Our Field CISO Matt Muller put it well on LinkedIn:

MCP reminds me of the smart home craze. We were promised convenience, instead we got fridges that perform DDoS attacks and front doors that don’t work when the power is out. Without a serious commitment to cybersecurity in the MCP ecosystem, we risk ending up in the same place – only now in an AI-native way.

Where MCP convenience turns risky 

MCP servers are great for quickly connecting LLMs to live tools. But what makes them easy to use also creates security risks that, especially at scale, could damage customer trust. Here’s where the problems start:

Too much access, not enough control 

MCP servers typically have a single configuration. Every user connecting to that server gets the same access, whether they need five permissions or fifty. There’s no granular control by role or team.

Furthermore, authentication is still not standardized. OAuth only became a recommendation in March 2025; it is still not required, and most clients don’t support it.

Support is uneven: common MCP clients like Cursor support OAuth, while others, like Claude Desktop, work with local servers instead, and Claude AI only supports custom OAuth for customers using Claude for Work. OpenAI’s MCP client in ChatGPT does not support OAuth at all.
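For illustration, here is roughly what an OAuth-protected call to a remote MCP server looks like over the Streamable HTTP transport. The server URL, tool name, and token are placeholders, and the initialization handshake is omitted for brevity:

```python
# Hedged sketch of a JSON-RPC "tools/call" request to a remote MCP server,
# carrying an OAuth bearer token. URL, tool name, and token are placeholders;
# the initialize/session handshake is omitted to keep the example short.
import requests

MCP_URL = "https://mcp.example.com/mcp"  # hypothetical remote MCP server
ACCESS_TOKEN = "eyJ..."                  # obtained out of band via an OAuth flow

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_ticket", "arguments": {"ticket_id": "INC-1234"}},
}

resp = requests.post(
    MCP_URL,
    json=payload,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json, text/event-stream",
    },
    timeout=15,
)

if resp.status_code == 401:
    # A server that enforces OAuth refuses the call outright; many current
    # servers and clients still skip this step entirely.
    print("Unauthorized: token missing, expired, or out of scope")
else:
    resp.raise_for_status()
    print(resp.text)
```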

No clear audit trail and weak authentication 

MCP servers authenticate at the client level, not per user. This means you can’t track who performed specific actions, creating a black box. Credentials are often shared, or amount to little more than knowing a URL, with no multi-factor authentication or detailed logging. While OAuth is the ideal solution, many MCP servers have yet to adopt it.

At Tines, we currently improve security by enabling scoped access through rotating credentials, team assignments, or personal teams for individuals, and roles can also be used to grant access to specific teams. Every request made to an MCP server in Tines also emits an event with the details of the request and response, allowing for a full audit trail.
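To picture what that per-request trail looks like, here is a minimal sketch of wrapping tool execution in audit events. The field names and the emit_event destination are illustrative, not a Tines API:

```python
# Illustrative sketch: every tool call emits request and response events so
# each action is traceable to a user, source IP, and timestamp. Field names
# and the emit_event() destination are assumptions, not a Tines API.
import json
import time
import uuid

def emit_event(event: dict) -> None:
    # In practice this would ship to your SIEM or log pipeline; printing keeps
    # the sketch self-contained.
    print(json.dumps(event))

def audited_call(user: str, source_ip: str, tool_name: str, arguments: dict, tool_fn):
    request_id = str(uuid.uuid4())
    started = time.time()
    emit_event({
        "id": request_id, "type": "mcp.tool_call.request",
        "user": user, "source_ip": source_ip,
        "tool": tool_name, "arguments": arguments,
    })
    result = tool_fn(**arguments)
    emit_event({
        "id": request_id, "type": "mcp.tool_call.response",
        "tool": tool_name,
        "duration_ms": round((time.time() - started) * 1000),
        "result_preview": str(result)[:200],
    })
    return result

# Example: audit a hypothetical lookup tool.
audited_call("alice@example.com", "203.0.113.7", "lookup_user",
             {"email": "bob@example.com"},
             lambda email: {"email": email, "status": "active"})
```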

Lack of context for your environment 

Generic MCP servers don’t understand your custom fields or workflows, making them either too limited or too broad. They don’t adapt to your data model or custom Salesforce fields, for example. You either maintain endless custom configurations or settle for generic responses that miss the mark.

Distribution is messy 

Sharing servers or clients across teams adds complexity and risk. Fixing issues often means rebuilding and redistributing configurations. That’s not how enterprise software works.

What all teams need 

Deploying LLMs intelligently requires more than access to tools; it requires trust, control, and traceability. Building secure AI workflows requires a platform designed with security at its core.

When you look to build your MCP servers, basic requirements should include:

Scoped access and granular control  

People rarely work in a single tool. Typically they move across multiple systems, and each role needs access to specific actions, not everything. Instead of giving blanket access to a single MCP server, we recommend creating multiple servers and assigning each to the teams or individuals who need them. This ensures you only grant the access that’s needed.
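One way to picture this, using the same Python SDK as the earlier sketch, is to stand up separate servers that each expose only the tools a given team needs. The server names and tool bodies are placeholders; in Tines, the equivalent is separate MCP actions assigned by team or role:

```python
# Hypothetical sketch: two narrowly scoped servers instead of one broad one.
# Each team connects its client only to its own server, so a support user can
# never see or call the SIEM search tool, and vice versa.
from mcp.server.fastmcp import FastMCP

support_mcp = FastMCP("support-tools")
secops_mcp = FastMCP("secops-tools")

@support_mcp.tool()
def lookup_customer(email: str) -> str:
    """Read-only CRM lookup -- the only capability the support team needs."""
    return f"customer record for {email}"  # placeholder body

@secops_mcp.tool()
def search_siem(query: str) -> str:
    """SIEM search, exposed only on the security operations server."""
    return f"results for {query}"  # placeholder body

if __name__ == "__main__":
    # In practice each server would run as its own process; one is started
    # here purely to keep the sketch runnable.
    support_mcp.run()
```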

Full control and visibility  

There is no black box. You can see exactly who did what and when. Every action is traceable back to a user and IP address. You get full context around every action and a reliable audit trail.

Human approvals for high-risk steps 

For critical actions, a human has to step in. For example, if someone wants to isolate a host or lock an account, it is important to have a confirmation step to verify that the correct action is being taken. Whether this happens depends on the MCP client in use. Some clients, like Claude Desktop, allow confirmations to be turned off after the first time they are invoked, elevating the risk of unwanted actions taking place.
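One way to keep that safeguard out of the client’s hands is to enforce it on the server side. Here is a simple sketch, with a placeholder approval mechanism standing in for whatever confirmation flow you actually use:

```python
# Sketch of a server-side approval gate for destructive actions. The
# request_approval() mechanism (a prompt here; a Slack message or ticket in
# practice) and the isolate_host() body are hypothetical placeholders.
def request_approval(action: str, target: str) -> bool:
    """Ask a human to approve the action and block until they answer."""
    answer = input(f"Approve '{action}' on '{target}'? [y/N] ")
    return answer.strip().lower() == "y"

def isolate_host(hostname: str) -> str:
    if not request_approval("isolate_host", hostname):
        return f"Action declined: {hostname} was not isolated."
    # ... call the EDR API here ...
    return f"{hostname} isolated."

print(isolate_host("laptop-0042"))
```

Because the gate lives on the server, even a client that has switched off its own confirmations can’t skip it.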

Workflows that span tools  

Some workflows require multiple steps and multiple tools. If those steps are well defined and repeatable, deterministic workflows will handle them faster and cheaper compared to an agent or copilot, with no surprises and no hallucinations.

With deterministic workflows, you can chain those steps together and run them the same way every time.
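A toy example of what that chaining looks like, with hypothetical step functions standing in for real API calls:

```python
# Sketch of a deterministic workflow: the same fixed steps run in the same
# order every time, with no model deciding what happens next. The three step
# functions are placeholders for real SIEM, CMDB, and ITSM calls.
def enrich_alert(alert_id: str) -> dict:
    return {"alert_id": alert_id, "severity": "high"}   # e.g. SIEM lookup

def fetch_owner(alert: dict) -> dict:
    return {**alert, "owner": "oncall-secops"}          # e.g. CMDB lookup

def open_ticket(alert: dict) -> str:
    return f"TICKET-42 opened for {alert['alert_id']}"  # e.g. ITSM API call

def triage_workflow(alert_id: str) -> str:
    # Each output feeds the next step; there is no branching left to an LLM.
    return open_ticket(fetch_owner(enrich_alert(alert_id)))

print(triage_workflow("ALERT-1001"))
```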

Copilots and agents still have a role to play when the path isn’t clear or when steps change based on the situation. In many cases, a Tines Story can act as one of an agent’s skills. The agent chooses the right story, runs it, and returns the result. This matches the right tool to the job.
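Sketching that pattern with the same Python SDK as before: the deterministic workflow is exposed as a single MCP tool the agent can pick up as a skill, while the steps inside it stay fixed. The names are illustrative, and the workflow body is a stand-in for the chain above:

```python
# Hypothetical sketch: expose a fixed workflow as one MCP tool, so an agent
# chooses *whether* to run it but never improvises the steps inside it.
from mcp.server.fastmcp import FastMCP

def triage_workflow(alert_id: str) -> str:
    # Stand-in for the deterministic chain from the previous sketch.
    return f"TICKET-42 opened for {alert_id}"

mcp = FastMCP("triage-skills")

@mcp.tool()
def triage_alert(alert_id: str) -> str:
    """Run the fixed triage workflow for one alert and return the result."""
    return triage_workflow(alert_id)

if __name__ == "__main__":
    mcp.run()
```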

Centralized updates 

You don’t need to rewrite configuration files for every team. Update a single action once, and that change applies automatically across every team and system in your organization. This eliminates busywork, ensures consistency, reduces errors, and makes scaling AI-connected workflows much easier.

The Tines platform delivers the essential capabilities you need to protect your data, empower your teams, and maintain compliance at scale.

Scaling AI-connected workflows safely 

MCP servers are opening new doors for AI. They make it easy to connect models to live systems and extend their capabilities beyond static prompts. But ease of use alone does not make something ready. Without guardrails, the same power that makes MCP so appealing can become a liability.

At Tines, we’ve built our platform with a security-first approach because fast-moving teams shouldn't have to choose between innovation and control. 

Most MCP setups today don’t have the security, visibility, or control to protect sensitive systems at scale. As AI workflows evolve, the teams who win will be those who scale fast, stay secure, and build trust into every action. Learn how to harness MCP capabilities securely within the Tines platform.
