Previously, when tools in AI agent actions returned large outputs, the data was automatically truncated to fit within the model's context window. This ensured reliable performance when models struggled with large contexts. With recent improvements to frontier AI models, large-context handling is significantly better, so you now have controls to decide for yourself:
In AI agent actions — a new "Tool output truncation" toggle in the action configuration lets you disable truncation entirely. It's enabled by default, preserving existing behavior.
In Workbench — when a tool output exceeds the token limit, the conversation pauses and asks whether you'd like to truncate or use the full output. This works in both the web UI and Slack.
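The default truncation behavior described above can be sketched roughly as follows. This is an illustrative approximation, not the actual implementation: the function name, the truncation marker, and the chars-per-token heuristic are all assumptions (a real system would use a proper tokenizer).

```python
def truncate_tool_output(text: str, max_tokens: int, chars_per_token: int = 4) -> str:
    """Trim a tool's output to roughly max_tokens.

    Hypothetical sketch: approximates token count as len(text) / chars_per_token,
    which is a crude stand-in for real tokenization.
    """
    limit = max_tokens * chars_per_token
    if len(text) <= limit:
        return text  # small enough; pass through unchanged
    marker = "\n[output truncated]"
    # Cut the text so that text + marker stays within the budget.
    return text[: limit - len(marker)] + marker


# Short outputs pass through untouched; long ones are cut and flagged.
print(truncate_tool_output("small result", max_tokens=100))
print(truncate_tool_output("x" * 10_000, max_tokens=50))
```

Disabling the new toggle corresponds to skipping this step entirely and handing the full output to the model.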