Let's wrap up with some practical guidance for building effective AI agents. You've already learned the fundamentals of prompt engineering in the Introduction to AI in Tines module. Now we'll focus on considerations that are specific to the AI Agent action.
Enforce consistency with output schemas
An output schema is a JSON “blueprint” that defines the expected structure of the AI's response. When you configure an output schema, the AI will format its response to match that structure, and Tines will validate the output against the schema.
Establishing an output schema for your AI Agent action provides several benefits:
Consistency: Every response follows the same structure, making it easier to build reliable workflows.
Validation: Tines automatically validates the output, so you know it matches your expectations.
Easier parsing: You can reference specific fields in downstream actions without worrying about format variations.
To get the best results, reference the output schema in your system instructions or prompt (for task mode).
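For example, an agent that triages alerts might use a schema like the following. This is a sketch assuming standard JSON Schema syntax; the field names (`verdict`, `confidence`, `summary`) are illustrative, not required by Tines:

```json
{
  "type": "object",
  "properties": {
    "verdict": {
      "type": "string",
      "enum": ["malicious", "suspicious", "benign"]
    },
    "confidence": {
      "type": "number",
      "description": "Confidence in the verdict, from 0 to 100"
    },
    "summary": {
      "type": "string",
      "description": "A one-paragraph explanation of the verdict"
    }
  },
  "required": ["verdict", "confidence", "summary"]
}
```

Because every run is guaranteed to produce these fields, downstream actions can reference `verdict` or `confidence` directly without defensive checks.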
✋ Try this: Configure an output schema
Avoid the "do everything" agent
It may be tempting to create a single AI Agent action that handles multiple complex tasks, but this can lead to problems such as:
The AI runs out of context window space as it accumulates too much information
Multiple objectives create conflicting priorities
Performance degrades as complexity increases
You consume more credits than necessary
Instead, create specialized agents for specific tasks.
Design for tool usage patterns
When you attach tools to an AI Agent action, think about how the AI will use them together. Some patterns work better than others:
Sequential tools: If your workflow requires steps to occur in a specific order, make this clear in your system instructions. For example: "First, search for the user's account information. Once you have confirmed the correct user, disable their account. Finally, send a notification to their manager."
Conditional tools: If tools should only be used in certain situations, specify the conditions. For example: "Only create a Jira ticket if the severity is high or critical. For low and medium severity alerts, log the information but don't create a ticket."
Parallel tools: If multiple tools can be used simultaneously (e.g., searching multiple threat intelligence sources), you can let the AI decide. For example: "Check all available threat intelligence sources to gather comprehensive information about this IP address."
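The conditional pattern above is ultimately a decision rule that the AI applies from your plain-language instructions. As a minimal sketch, the Jira example encodes logic equivalent to this (the function name and severity labels are illustrative):

```python
def should_create_ticket(severity: str) -> bool:
    """Decision rule from the example instruction:
    create a Jira ticket only for high or critical severity."""
    return severity.lower() in {"high", "critical"}

# The agent applies this rule from prose, not code -- but stating the
# condition this precisely in your instructions leaves no room for ambiguity.
print(should_create_ticket("critical"))  # True  -> create a ticket
print(should_create_ticket("medium"))    # False -> log only, no ticket
```

The clearer and more testable your conditions read, the more consistently the AI will follow them.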
Learn from your conversations
The AI Agent action emits detailed events, visible in the Events panel, that include the full conversation history when tools are used or in chat mode. This is valuable data for continuous improvement:
Review conversations where the AI struggled or made poor decisions.
Look for patterns in how the AI uses (or doesn't use) specific tools.
Identify common user questions or requests in chat mode that might need better handling.
Monitor credit usage patterns to identify opportunities for optimization.
Use these insights to refine your system instructions, adjust tool descriptions, or restructure your AI Agent action’s design.