Scaling Tines

The deployment tier guides are a starting point, not a ceiling. As your workload grows, use OpenTelemetry tracing to make data-driven scaling decisions based on actual usage rather than estimates. Tines supports exporting OTEL traces from both tines-app and tines-sidekiq containers — see Exporting OpenTelemetry Traces for setup.

Enabling Observability 

Set the OpenTelemetry environment variables on both the tines-app and tines-sidekiq containers; Exporting OpenTelemetry Traces covers the full list.
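The exact variable set for Tines is documented in the export guide; as a minimal sketch, the standard OpenTelemetry SDK variables look like this (the endpoint value is illustrative):

```shell
# Standard OpenTelemetry SDK variables -- endpoint and service names
# below are illustrative; substitute your own collector address.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4318"
export OTEL_SERVICE_NAME="tines-app"   # use "tines-sidekiq" on the worker container
```

Setting a distinct OTEL_SERVICE_NAME per container lets you separate app-tier and worker-tier traces in your observability backend.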

Key Metrics for Scaling Decisions 

Before adjusting pod counts, tune SIDEKIQ_CONCURRENCY against each container's CPU and memory limits. Once concurrency is right-sized, use the following metrics to decide when to add or remove tines-sidekiq workers.
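A simple sizing sketch, assuming a threads-per-core ratio; the ratio and limit below are hypothetical, so measure against your own container limits before applying:

```shell
# Hypothetical sizing heuristic for Sidekiq thread count.
cpu_limit=2          # CPU cores allotted to the tines-sidekiq container (example)
threads_per_core=5   # example ratio; tune empirically against memory headroom
export SIDEKIQ_CONCURRENCY=$((cpu_limit * threads_per_core))
echo "$SIDEKIQ_CONCURRENCY"
```

If memory, not CPU, is the binding limit, cap the result so total thread memory stays inside the container's memory limit.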

When to Scale Up 

When to Scale Down 
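Once the relevant metrics are in hand, the scale-up/scale-down decision reduces to a threshold check. A sketch with hypothetical numbers (the metric name and thresholds are assumptions, not Tines-published values):

```shell
# Hypothetical thresholds and sample reading -- tune to your workload.
queue_latency_s=45      # observed Sidekiq queue latency, in seconds
scale_up_threshold=30   # sustained latency above this suggests more workers

if [ "$queue_latency_s" -gt "$scale_up_threshold" ]; then
  echo "scale up tines-sidekiq"
else
  echo "hold or scale down"
fi
```

In practice you would act only on sustained readings (e.g. a rolling average), not a single sample, to avoid flapping between worker counts.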

Tracking Story-Level Performance 

For deeper investigation into which stories or actions are driving load, filter traces by the story- and action-level trace attributes emitted when auto instrumentation is enabled.

If only one story has a large backlog, the issue is likely story-specific (e.g., a slow external API call) rather than an infrastructure scaling problem.