Platform
Local stacks
How the infrastructure layer supports local development and production deployments with container builds and Compose definitions.
Design intent
Use this lens when implementing Local stacks across a fleet: define clear boundaries, make changes snapshot-based, and keep operational signals observable.
What it is
The infrastructure layer provides Docker Compose and container build definitions for local environments and deployments.
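As a concrete illustration, a minimal Compose stack might pair a service built from the local Dockerfile with a containerized dependency; the service names, image tags, and credentials below are placeholders, not platform defaults.

services:
  api:
    build: .                                # build the service from the local Dockerfile
    image: example-api:${VERSION:-dev}      # tag the build so local artifacts are versioned too
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      db:
        condition: service_healthy          # start the service only once the dependency is ready
  db:
    image: postgres:16                      # containerized dependency instead of a host install
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 10

docker compose up -d starts the stack and docker compose down removes it, so the environment can be rebuilt from scratch rather than drifting over time.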
Design constraints
- Local stacks should mirror production workflows, not just APIs
- Containerized dependencies reduce “works on my machine” drift
- First integration test should validate deploy + telemetry end-to-end
Architecture at a glance
- Define a stable artifact boundary (what you deploy) and a stable signal boundary (what you observe)
- Treat changes as versioned, testable units that can be rolled back
- Use health + telemetry gates to scale safely
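One way to make the two boundaries explicit is a small, versioned manifest that names the artifact and the signals together; the fields and values below are a conceptual sketch, not a platform schema.

snapshot: stack-2024-01-canary            # hypothetical snapshot identifier
artifact:                                 # the deploy boundary: what ships
  images:
    - example-api:1.4.2
    - example-worker:1.4.2
signals:                                  # the signal boundary: what proves it works
  health: each service exposes a health endpoint
  telemetry: metrics and events are tagged with the snapshot identifier
rollback:
  target: previous snapshot that passed canary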
Typical workflow
- Define scope and success criteria (what should change, what must stay stable)
- Create or update a snapshot, then validate against a canary environment/site
- Deploy progressively with health/telemetry gates and explicit rollback criteria
- Confirm acceptance tests and operational dashboards before expanding
System boundary
Treat Local stacks as a repeatable interface between engineering intent (design) and runtime reality (deployments + signals). Keep site-specific details configurable so the same design scales across sites.
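Compose's multi-file merging is one way to keep site-specific details configurable while the base design stays shared; the site name and endpoint below are illustrative.

# compose.site-a.yaml - overrides only what differs for this site
services:
  api:
    environment:
      SITE_ID: site-a
      TELEMETRY_ENDPOINT: http://site-a-collector:4317

Running docker compose -f compose.yaml -f compose.site-a.yaml up -d merges the shared base with the per-site file, so the same design scales across sites without copy-pasted stacks.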
Example artifact
Implementation notes (conceptual)
topic: Local stacks
plan: define -> snapshot -> canary -> expand
signals: health + telemetry + events tied to version
rollback: select known-good snapshot
Why it matters
- Faster onboarding for new engineers
- Reproducible environments across teams
- Clear deployment primitives for customers and partners
Common failure modes
- Drift between desired and actual running configuration
- Changes without clear rollback criteria
- Insufficient monitoring for acceptance after rollout
Acceptance tests
- Verify the deployed snapshot/version matches intent (no drift)
- Run a canary validation: behavior, health, and telemetry align with expectations
- Verify rollback works and restores known-good behavior
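These checks are easier to automate when each snapshot records what "matches intent" and "known-good" mean; the structure below is purely conceptual and the identifiers are hypothetical.

acceptance:
  snapshot: stack-2024-01-canary            # must match what is actually running (no drift)
  drift_check: running image tags and digests equal the snapshot's artifact list
  canary_check:
    - health endpoints report healthy
    - telemetry arrives tagged with the snapshot identifier
    - behavior matches the defined success criteria
  rollback_check:
    target: previous known-good snapshot
    expectation: health and telemetry return to baseline after rollback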
In the platform
- Compose stacks for local and staging
- Container images for services and runtime components
- Build/release patterns that align with deployments
Implementation checklist
- Use Compose stacks to reproduce service dependencies locally
- Keep environment config minimal and documented (ports, secrets, endpoints)
- Mirror production-like workflows: build → version → deploy (even locally)
- Validate edge/runtime components against local backends for integration tests
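A sketch of how the last three checklist items can look in a Compose file, using variable substitution so ports, endpoints, and versions are documented in one place and secrets stay in an env file; all names and defaults here are assumptions.

services:
  api:
    build: .
    image: example-api:${VERSION:-dev}        # build → version, even locally
    ports:
      - "${API_PORT:-8080}:8080"              # documented, overridable port
    environment:
      TELEMETRY_ENDPOINT: ${TELEMETRY_ENDPOINT:-http://collector:4317}   # local backend for integration tests
    env_file:
      - .env.local                            # secrets and per-developer values, not committed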
Rollout guidance
- Start with a canary site that matches real conditions
- Use health + telemetry gates; stop expansion on regressions
- Keep rollback to a known-good snapshot fast and rehearsed
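The same guidance can be written down as a small, reviewable rollout definition so the gates and rollback criteria are explicit before expansion begins; this is a conceptual sketch, not an existing platform format.

rollout:
  snapshot: stack-2024-01-canary           # hypothetical version under rollout
  stages:
    - name: canary
      sites: [site-a]                      # one site that matches real conditions
      gates:
        health: all services healthy for an agreed soak period
        telemetry: error rate at or below the pre-rollout baseline
    - name: expand
      sites: [site-b, site-c]
      stop_on: any gate regression
  rollback:
    target: last known-good snapshot
    rehearsed: true                        # rollback has been exercised, not just documented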
Common questions
Quick answers that help during commissioning and operations.
What should local dev mirror from production?
The service boundaries, orchestration flows, and artifact versioning. Exact scale isn’t needed, but the workflow should be production-like.
How do we avoid “works locally” surprises?
Use containerized dependencies and a consistent seed/config workflow. Validate end-to-end flows (deploy + telemetry) in a local stack before staging.
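One way to keep the seed/config workflow consistent is a one-shot seed service inside the same Compose stack, so every environment starts from the same data; the database credentials and seed file path are placeholders.

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: app
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  seed:
    image: postgres:16                     # reuse the client tools shipped in the same image
    depends_on:
      db:
        condition: service_healthy         # seed only after the database accepts connections
    volumes:
      - ./seed.sql:/seed.sql:ro
    entrypoint: ["psql", "postgresql://postgres:app@db:5432/postgres", "-f", "/seed.sql"]
    restart: "no"                          # run once, then exit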
What is the best first integration test?
Deploy a small snapshot to a local/virtual edge target, then confirm health signals, telemetry ingestion, and UI correlation to the snapshot ID.
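As a conceptual outline (not a test-framework schema), that first test can be written down as the deploy step plus the three confirmations, keyed to the snapshot ID; every identifier below is hypothetical.

first_integration_test:
  deploy:
    snapshot: stack-2024-01-canary         # small snapshot under test
    target: local or virtual edge container in the Compose stack
  confirm:
    - health signals report healthy for every deployed service
    - telemetry tagged with the snapshot ID is ingested by the local backend
    - the UI or dashboard correlates the deployment to the same snapshot ID
  teardown: docker compose down            # leave nothing behind between runs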