Securely bringing your own AI to Tines

Written by Matt Muller, Field CISO, Tines

Published on January 15, 2025

At Tines, we take pride in both the flexibility and security of our platform: it’s what allows us to do things like safely connect to any HTTP API in the world, and seamlessly deploy in fully air-gapped environments. Similarly, our AI capabilities have been designed from the ground up to be secure and private, with no logging, internet transit, or training on your data.

When you bring your own AI model to Tines, you retain all the flexibility of the platform, but you also take on some new security responsibilities. That’s why we’re sharing our approach to threat modeling and protecting an AI integration with Tines - to ensure that all our customers can use AI with confidence, no matter how the underlying AI model is integrated.

Securing your foundation model 

When people talk about a Large Language Model (LLM), they’re generally referring to a foundation model that has been trained on massive amounts of data. OpenAI’s family of GPT models and Anthropic’s family of Claude models are widely used foundation models that power generative AI tools like ChatGPT and Tines Workbench.

If you use a pre-trained foundation model from OpenAI, Anthropic, Google, Meta, or a similar reputable provider, then the rest of this section isn’t necessary for you. You can (and should) expect that these foundation models are trained with AI threats in mind. From a privacy perspective, make sure your model provider meets your data protection requirements, particularly around whether your data is used for training.

If you’re building your own foundation model, it’s important to start with a framework for identifying and mitigating threats. Data source poisoning, inadvertent sensitive data disclosure, system prompt leakage, and prompt injection are all common examples, but not the only things to consider. We recommend using a framework like the OWASP Top 10 for LLM Applications or Google’s Secure AI Framework to build out a comprehensive threat model and set of mitigating controls.
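
As one illustration of the kind of mitigating control you’d derive from such a framework, the sketch below screens untrusted input for common prompt-injection patterns before it ever reaches the model. The pattern list and function are hypothetical examples of ours, and keyword matching alone is nowhere near a complete defense - a real deployment would layer it with model-side guardrails and output filtering - but it shows the general shape of a control.

```python
# A minimal, illustrative sketch: reject input that matches common
# prompt-injection phrasings before forwarding it to the model.
# The patterns here are assumptions for demonstration, not a
# comprehensive or recommended blocklist.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def screen_user_input(text: str) -> bool:
    """Return True if the input looks safe to forward to the model."""
    return not any(pattern.search(text) for pattern in INJECTION_PATTERNS)

if __name__ == "__main__":
    print(screen_user_input("Summarize this alert for me."))            # True
    print(screen_user_input("Ignore previous instructions and ..."))   # False
```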

Key recommendations:

  • When using a pre-trained foundation model, ensure your model provider meets your data protection and privacy requirements

  • When building your own foundation model, apply threat frameworks like the OWASP Top 10 for LLM Applications or Google’s Secure AI Framework

Securing your model serving provider 

While there are many ways to integrate an AI model into an application, Tines uses standardized REST APIs, whether we’re serving the underlying foundation model or you are. Securing a model serving provider is no different from securing any other HTTP API: you need to think about authentication, authorization, encryption in transit, denial-of-service (DoS) attacks, and so on.
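
As a concrete sketch, here’s what a client call to a self-hosted, OpenAI-compatible serving endpoint might look like with those basics applied: a bearer token for authentication, TLS for encryption in transit, and a request timeout so a slow or unresponsive server can’t stall the caller indefinitely. The endpoint URL, environment variable, model name, and response shape are all assumptions for illustration, not a prescribed integration.

```python
# A minimal sketch of calling a self-hosted model serving endpoint with the
# same controls you'd apply to any HTTP API. The URL, env var name, model
# name, and OpenAI-compatible response shape are illustrative assumptions.
import os
import requests

ENDPOINT = "https://models.internal.example.com/v1/chat/completions"  # hypothetical
API_KEY = os.environ["MODEL_API_KEY"]  # never hard-code credentials

def ask_model(prompt: str) -> str:
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},  # authentication
        json={
            "model": "my-self-hosted-model",  # hypothetical model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,   # bound how long we'll wait on the server
        verify=True,  # enforce TLS certificate validation (requests' default)
    )
    response.raise_for_status()  # surface auth, authz, and rate-limit errors
    return response.json()["choices"][0]["message"]["content"]
```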

The r