What is AI?
Artificial intelligence (AI) is technology that allows computers to perform tasks that typically require human intelligence. This includes things like understanding language, recognizing patterns, making decisions, and solving problems.
You've probably interacted with AI more than you realize. When your email filters out spam, when a streaming service recommends a show you might like, or when your phone's voice assistant answers a question, that's most likely AI at work.
At its core, AI is about teaching computers to learn from data and make predictions or decisions based on what they've learned. Instead of programming a computer with explicit instructions for every possible scenario, AI systems are trained on examples. The system learns patterns from those examples and applies that learning to new situations.
In the context of Tines, AI helps you orchestrate and automate tasks that would be difficult or time-consuming to handle with traditional programming. It can analyze unstructured data, make intelligent decisions, and generate human-readable content, all within your workflows.
How do LLMs work?
Large language models (LLMs) are a specific type of AI that specializes in understanding and generating human language.
An LLM is a sophisticated system that has been trained on enormous amounts of text from books, articles, websites, and other sources. During training, the model learns the statistical relationships between words, phrases, and concepts. It learns which words tend to appear together, how sentences are structured, and how ideas connect.
When you give an LLM a prompt (like an instruction or question), it processes that input and predicts what should come next. It's essentially asking itself, “Based on everything I've learned, what's the most appropriate response to this input?” The model generates text one piece at a time, with each new word or phrase influenced by what came before.
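The "predict what comes next, one piece at a time" idea can be illustrated with a toy model. This is not how a real LLM works internally (real models use neural networks over billions of parameters), but it shows the same loop: look at the text so far, pick the most likely continuation, repeat. The word-pair counts below are invented for illustration.

```python
# Toy next-word predictor: counts of which word tends to follow which.
# (Invented numbers, standing in for patterns a real model learns from text.)
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def next_word(word):
    """Pick the most likely follower of `word` (greedy choice)."""
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(start, max_words=5):
    """Generate text one word at a time, each choice shaped by the last."""
    words = [start]
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # "the cat sat down"
```

A real LLM does the same kind of thing at vastly greater scale, conditioning each prediction on the entire prompt rather than just the previous word.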
In Tines, LLMs power features like the AI Agent action and Automatic mode in Event Transform. They help you process data, make decisions, take action, and generate content without requiring you to write complex code.
Understand tokens and context windows
To work effectively with LLMs, it helps to understand how they actually "read" text. LLMs don't process language the way you do. When you read a sentence, you see whole words and understand their meaning. LLMs break text down into smaller units called tokens.
A token can be a whole word, part of a word, or even just a few characters. Common words like "the" or "is" are usually single tokens. Longer or less common words might be split into multiple tokens. For example, "automation" might become two tokens: "auto" and "mation." Numbers, punctuation, and spaces also count as tokens.
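To make the splitting concrete, here is a toy tokenizer that greedily matches the longest piece it knows. The vocabulary below is made up for illustration; real tokenizers learn their vocabularies from data and use more sophisticated algorithms (such as byte-pair encoding), but the effect is the same: common words stay whole, rarer words break into pieces.

```python
# Tiny made-up vocabulary of known "pieces".
VOCAB = {"the", "is", "auto", "mation", "token", "s", " ", "."}

def tokenize(text):
    """Greedy longest-match tokenization (illustrative only)."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match at position i first.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("automation"))  # ['auto', 'mation'] - one word, two tokens
```

Notice that spaces and punctuation count as tokens too, which is why token counts are usually higher than word counts.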
Why does this matter? LLMs have a limit on the number of tokens they can process at once. This limit is called the context window.
Everything the model reads and writes has to fit inside this window, prompt and response together. If you send an LLM a prompt that's 1,000 tokens long and its context window is 4,000 tokens, it has room for about 3,000 tokens in its response. But if your prompt is 10,000 tokens and the context window is only 4,000 tokens, the LLM can't process all of your input at once.
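The budgeting is simple subtraction. This sketch shows the rough arithmetic; real models also reserve some tokens for system messages and formatting, so the practical budget is a bit smaller.

```python
def available_response_tokens(prompt_tokens, context_window):
    """Rough token budget left for the model's response (a sketch;
    real models reserve some tokens for system messages too)."""
    return max(context_window - prompt_tokens, 0)

print(available_response_tokens(1_000, 4_000))   # 3000: plenty of room
print(available_response_tokens(10_000, 4_000))  # 0: the prompt doesn't fit
```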
Tokens vs. credits in Tines
It's important to understand that tokens and credits are different concepts:
Tokens are a technical measure used by AI models. They represent how the AI breaks down and processes text. Tokens are an industry standard concept that applies to all LLMs, not just Tines.
Credits are Tines' way of managing AI resource consumption across your organization. Credits are what you use when you run AI features in Tines. Think of credits as the currency for AI usage in your tenant.
In Tines, understanding tokens helps you in a few ways:
First, it helps you write more effective prompts. If you're working with large amounts of data, you might need to break it into smaller chunks to stay within context window limits.
Second, it helps you understand credit usage. As a builder, you'll primarily see credits in your workflow events, but knowing that credits are based on token usage helps you understand why certain operations consume more resources than others.
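Breaking large input into smaller chunks, as mentioned above, can be as simple as splitting on a size limit. This naive sketch approximates tokens with words (a real tokenizer would give exact counts, and token counts run higher than word counts):

```python
def chunk_words(text, max_tokens=100):
    """Split text into pieces of at most `max_tokens` words each.
    (Words stand in for tokens here; this is an approximation.)"""
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

chunks = chunk_words("word " * 250, max_tokens=100)
print(len(chunks))  # 3 chunks: 100 + 100 + 50 words
```

Each chunk can then be processed in a separate AI step and the results combined afterward.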
AI capabilities
Understanding what AI does well and where it struggles will help you use it effectively in your intelligent workflows.
What AI does well
AI excels at tasks that involve pattern recognition and data processing. Here are some things AI handles particularly well:
Summarize information: AI can take a lengthy document, email thread, or dataset and distill it down to the key points. This is incredibly useful when you're dealing with large volumes of information and need to quickly understand what matters.
Extract data: Need to pull email addresses from unstructured text? Or extract dates, names, or account numbers from a document? AI can identify and extract specific types of information even when the format varies.
Identify patterns: AI can spot trends, outliers, and unusual patterns in data. This makes it valuable for tasks such as analyzing security alerts, identifying suspicious activity, and detecting changes in system behavior.
Generate readable content: AI can write emails, create reports, format data into tables, and generate other types of content. This saves time and ensures consistency in your communications.
Make decisions based on clear criteria: When you provide clear rules and context, AI can make decisions about how to categorize, prioritize, or route information. For example, it can determine the severity of a security alert based on specific indicators.
Where AI has limitations
It's important to understand AI’s limitations so you can design workflows that account for them:
Can produce incorrect results: Sometimes AI generates information that sounds plausible but is actually wrong. This is often called a "hallucination." The AI isn't lying or trying to deceive you. It's simply making predictions based on patterns, and sometimes those predictions are off base. This is why human oversight is so important.
Requires clear instructions: AI performs best when you give it specific, detailed instructions. Vague or ambiguous prompts often lead to unpredictable results. The clearer you are about what you want, the better the output will be.
Works best with well-structured input: While AI can handle unstructured data, it performs better when the input is organized and formatted consistently. If you're feeding messy or inconsistent data into an AI feature, you might need to clean it up first.
Human oversight recommended for critical decisions: AI should augment human decision-making, not replace it. For important or sensitive tasks, always have a human review the AI's output before taking action. This is especially true for decisions that affect security, compliance, or customer relationships.
Tines AI features are designed with these limitations in mind. You control when AI runs, what data it processes, and how its outputs are used. This human-in-the-loop approach ensures that you get the benefits of AI while maintaining the oversight and control you need.