
Step 6 — Iterate and Version

Available on Pro, Team, and Enterprise plans.

Every simulation run is saved automatically to Execution History. The Version Workflow Viewer stores the exact architecture snapshot associated with each run.

Using Execution History effectively

Each history entry records run status, timestamp, duration, peak RPS, and estimated monthly cost. Use these entries to:

  • Compare cost across runs to measure the financial impact of each architectural change
  • Select any historical run to view the exact canvas state at that point in time
  • Roll back to a previous version if a change introduces a regression
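Comparing cost across runs is just a delta between two history entries. The sketch below is a minimal illustration of that comparison; the `RunRecord` fields mirror what a history entry records, but the class itself and its field names are hypothetical, not the product's API.

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    """Hypothetical stand-in for one Execution History entry."""
    run_id: int
    status: str
    duration_s: float
    peak_rps: int
    monthly_cost_usd: float

def cost_delta(baseline: RunRecord, candidate: RunRecord) -> float:
    """Estimated monthly cost change introduced between two runs.

    Negative means the architectural change saved money.
    """
    return candidate.monthly_cost_usd - baseline.monthly_cost_usd

# Example: run 13 moved a hot path off an over-provisioned service.
baseline = RunRecord(12, "passed", 48.0, 1000, 11800.0)
candidate = RunRecord(13, "passed", 47.2, 1000, 9400.0)
saving = cost_delta(baseline, candidate)
```

Keeping the comparison to a single number per pair of runs makes the financial impact of each change easy to log alongside the run IDs.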

Follow this loop until the architecture meets your targets:

Design → Configure → Simulate → Review Metrics → Apply Recommendations → Re-simulate → Compare History → Repeat
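The loop above can be sketched as a small driver that re-simulates until every target metric is within bounds. This is a conceptual sketch only: `simulate` and `apply_recommendation` are hypothetical callbacks standing in for the platform's simulate and apply-recommendation actions, and the toy stand-ins below just halve the cost each round.

```python
def iterate_until_stable(simulate, apply_recommendation, targets, max_rounds=5):
    """Run the Simulate → Review → Apply → Re-simulate loop.

    simulate() returns a metrics dict; targets maps a metric name to an
    upper bound; apply_recommendation(metrics) adjusts the design.
    Returns the history of metrics from each round.
    """
    history = []
    for _ in range(max_rounds):
        metrics = simulate()
        history.append(metrics)
        if all(metrics[name] <= bound for name, bound in targets.items()):
            break  # all targets met; architecture is stable
        apply_recommendation(metrics)
    return history

# Toy stand-ins: each applied "recommendation" halves the monthly cost.
state = {"monthly_cost_usd": 14000.0}

def simulate():
    return dict(state)

def apply_recommendation(metrics):
    state["monthly_cost_usd"] *= 0.5

runs = iterate_until_stable(simulate, apply_recommendation,
                            targets={"monthly_cost_usd": 10000.0})
```

Note that each round has exactly one objective (here, monthly cost under $10K), matching the advice to give every iteration a clearly defined target.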

A typical optimisation cycle for a new API architecture takes three to five iterations to reach a stable, well-optimised state. Give each iteration a clearly defined objective — for example, "reduce Lambda latency under Spike load" or "reduce monthly cost below $10K at 1K RPS."

Version control best practices

  • Treat each simulation run as a named checkpoint. Before making significant canvas changes, note the run number in your documentation so you can reference the exact baseline.
  • Compare versions side by side before deployment. The Version Workflow Viewer shows service count and connection count per snapshot — a quick sanity check that the intended changes are present and nothing unexpected has been added.
  • One change at a time. When multiple recommendations are available, apply them one at a time and re-simulate between each. Batch-applying all recommendations obscures which changes had the most impact, making future debugging harder.
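The side-by-side sanity check can be made mechanical: diff the service and connection sets of two snapshots and confirm only the intended changes appear. The snapshot shape below (dicts of sets) is an assumed representation for illustration, not the Version Workflow Viewer's actual data model.

```python
def snapshot_diff(baseline, candidate):
    """Diff two architecture snapshots for a pre-deployment sanity check.

    Each snapshot is a dict with 'services' (a set of names) and
    'connections' (a set of (source, target) pairs). Anything in the
    *_added / *_removed results that you did not intend is a red flag.
    """
    return {
        "services_added": candidate["services"] - baseline["services"],
        "services_removed": baseline["services"] - candidate["services"],
        "connections_added": candidate["connections"] - baseline["connections"],
        "connections_removed": baseline["connections"] - candidate["connections"],
    }

# Hypothetical snapshots: run 13 inserted a queue between Lambda and the DB.
v12 = {"services": {"api-gw", "lambda-a", "dynamodb"},
       "connections": {("api-gw", "lambda-a"), ("lambda-a", "dynamodb")}}
v13 = {"services": {"api-gw", "lambda-a", "dynamodb", "sqs-queue"},
       "connections": {("api-gw", "lambda-a"), ("lambda-a", "sqs-queue"),
                       ("sqs-queue", "dynamodb")}}

diff = snapshot_diff(v12, v13)
```

Here the diff shows one added service and a rerouted connection, and nothing else, so the snapshot matches the intended single change.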