How this site is built

dynamiccloud.info is a demonstration architecture — but it serves a real, live, public-facing application. Every visitor request is real, every Lambda invocation is real, every CloudFront log is being processed by a real EC2 worker as you read this. The system isn't a sandbox or a fixture; it's a working production-grade AWS deployment built on patterns that mirror what most teams run for their own customer-facing services.

We built it this way deliberately. dynamiccloud.info is the primary data source for LightPane's pane demonstrations — and a contrived, tidy environment would only ever produce contrived, tidy panes. Real architectures have CloudFront logs landing in unexpected formats, replication latency that varies by region, backups running at 03:30 UTC and Athena queries kicking off ad-hoc — and panes only earn their keep when they handle that reality cleanly.

The five things LightPane shows you

The architecture is organised, intentionally, around the five principles that almost every cloud provider's well-architected framework names — and that almost every operations team cares about. Each diagram below highlights which of these the underlying components contribute to.

Observability

What's running, what changed, what's about to break. CloudWatch alarms, log search, daily briefings.

Availability

Multi-region replication, backup posture, status checks, region maps. Knowing the system survives a region outage before one happens.

Security monitoring

Unrotated keys, public S3, IAM blast radius, GuardDuty findings, Config drift. Catching the thing that bites you next quarter, this quarter.

Performance

Right-sizing recommendations, Lambda cold-start radar, load-balancer health, slow-query surfacing. Performance you can act on, not just chart.

Cost insights

Spend by tag, idle resources, savings-plan coverage, free-tier headroom. The number on the bill explained, not just reported.

What follows is the architecture in three layers — the public-facing path a visitor's browser takes, the analytics event loop running behind the scenes, and the multi-region replication that gives the system real geographic resilience.

Diagram 1 of 3

Public request flow

Principles: Observability · Performance · Security monitoring

What happens when you visit dynamiccloud.info or click a lab button. Route 53 hands the request to CloudFront, which runs a small viewer-request function (canonical-host redirect plus directory-style URL rewrite) and decides whether the path is static content or a lab API call. Static paths stream from the S3 origin bucket; lab paths route through API Gateway HTTP API to one of six purpose-built Lambdas, with the guestbook lab writing to a DynamoDB table. WAF (us-east-1, CloudFront-scoped) rate-limits and pattern-matches at the edge. Embedded LightPane panes load asynchronously from lightpane.io and authenticate with a public read-only access key locked to this origin.
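The routing decision the viewer-request function makes can be sketched as a pure function. This is a minimal illustration, not the deployed code: the real function runs as JavaScript at the CloudFront edge, and the lab-API path prefix used here is an assumption.

```python
# Sketch of the viewer-request routing described above. CANONICAL_HOST and
# LAB_PREFIX are assumptions for illustration; the deployed function is a
# CloudFront (JavaScript) function, not Python.

CANONICAL_HOST = "dynamiccloud.info"   # apex host the redirect enforces
LAB_PREFIX = "/api/labs/"              # assumed prefix for lab API calls

def route(host: str, path: str) -> dict:
    """Classify a viewer request the way the edge function might."""
    # 1. Canonical-host redirect (e.g. www -> apex).
    if host != CANONICAL_HOST:
        return {"action": "redirect",
                "location": f"https://{CANONICAL_HOST}{path}"}
    # 2. Lab paths forward to API Gateway and the lab Lambdas.
    if path.startswith(LAB_PREFIX):
        return {"action": "forward", "origin": "api-gateway", "path": path}
    # 3. Directory-style URL rewrite so S3 serves index documents.
    if path.endswith("/"):
        path += "index.html"
    elif "." not in path.rsplit("/", 1)[-1]:
        path += "/index.html"
    return {"action": "forward", "origin": "s3", "path": path}
```

For example, `route("www.dynamiccloud.info", "/about/")` yields a redirect to the apex host, while `route("dynamiccloud.info", "/about/")` rewrites the path to `/about/index.html` and forwards to the S3 origin.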

Diagram: Public request flow architecture for dynamiccloud.info — Route 53 to CloudFront with viewer-request function and WAF, fanning out to S3 static-site origin and API Gateway HTTP API to six lab Lambdas backed by a DynamoDB guestbook, with embedded LightPane panes loading from lightpane.io

Every component above feeds one or more LightPane panes — CloudFront access logs flow into the analytics loop (diagram 2), Lambda metrics drive the lambda-radar pane, DynamoDB capacity drives the dynamodb pane, WAF rule hits drive the guardduty / security overview pane.

Diagram 2 of 3

Traffic analytics event loop

Principles: Observability · Performance · Cost insights

The workload behind the site-stats page. Every CloudFront access log delivery (every minute or so) fires an S3 ObjectCreated event, lands on an SQS queue, and is picked up by a small EC2 worker (a t4g.nano in a private subnet, reaching S3 over a VPC gateway endpoint). The worker normalises CloudFront's v2 JSON-Lines format, runs GoAccess over the cumulative log set, and publishes a fresh HTML report back to the origin bucket — invalidating the CloudFront edge cache so the new report appears within seconds. A DynamoDB idempotency table prevents double-counting; an SQS DLQ catches anything that fails five times; an EventBridge rollup at 03:00 UTC re-runs the last 7 days as a safety net. Side paths: a Glue+Athena workgroup over the same logs (for ad-hoc queries), an SNS topic publishing daily summaries, AWS Backup snapshotting the idempotency table, and a CloudWatch dashboard surfacing queue depth, parse time, and worker host metrics.
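The worker's two core moves, parsing a JSON-Lines delivery and refusing to process the same object twice, can be sketched as below. The field layout of the parsed records and the in-memory set standing in for the DynamoDB idempotency table are illustrative assumptions; the real worker claims an object with a conditional write, not a Python set.

```python
import json

# Sketch of the worker's pre-processing, assuming each CloudFront log
# delivery arrives as a JSON-Lines body (one JSON object per line).

def normalise(body: str) -> list[dict]:
    """Parse one JSON-Lines log delivery into a list of records."""
    records = []
    for line in body.splitlines():
        line = line.strip()
        if not line:
            continue  # tolerate trailing blank lines in the delivery
        records.append(json.loads(line))
    return records

_seen: set[str] = set()  # stands in for the DynamoDB idempotency table

def should_process(object_key: str) -> bool:
    """Conditional-write semantics: first claim wins, replays are skipped."""
    if object_key in _seen:
        return False          # duplicate S3 event or SQS redelivery
    _seen.add(object_key)     # real worker: PutItem with attribute_not_exists
    return True
```

SQS's at-least-once delivery makes the second function essential: without the idempotency check, a redelivered message would double-count an entire log file in the GoAccess report.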

Diagram: Traffic analytics event loop architecture — CloudFront access log deliveries trigger S3 ObjectCreated events to an SQS queue consumed by an EC2 t4g.nano worker in a private subnet, which uses a VPC gateway endpoint to read S3, runs GoAccess to produce HTML reports, writes back to the origin bucket and issues CloudFront invalidations, with a DynamoDB idempotency table, SQS DLQ for failures, EventBridge 03:00 UTC rollup, Glue+Athena ad-hoc query path, SNS daily summary topic, AWS Backup snapshots and a CloudWatch dashboard

This single loop touches roughly 18 distinct AWS services. Most LightPane panes need exactly this kind of "real workload, real metrics" data to be meaningful — fixture data produces fixture insights.

Diagram 3 of 3

Multi-region resilience

Principles: Availability · Security monitoring · Cost insights

Two complementary cross-region patterns running on real, measured replication. The static-site S3 bucket replicates one-way from London (eu-west-2) to Tokyo (ap-northeast-1) — a "global-content fan-out" pattern that mirrors how most CDNs and content platforms keep latency low for distant readers. The DynamoDB processed-logs table is a Global Table across London, Stockholm (eu-north-1), and Frankfurt (eu-central-1) — a "European HA" pattern that mirrors a typical disaster-recovery posture for regulated workloads with data-residency requirements. Writes from any region propagate to the others within a few seconds (last measured: ~5s for DynamoDB, ~30s for S3 cold-start, sub-second steady state).
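The one-way London-to-Tokyo replication rule takes only a few lines of configuration. The sketch below shapes it as the dict that boto3's `put_bucket_replication` accepts; the bucket name and IAM role ARN are placeholders, not the real resources.

```python
# Sketch of the S3 cross-region replication rule described above.
# ARNs are placeholders; apply with
# s3.put_bucket_replication(Bucket=..., ReplicationConfiguration=...).

REPLICATION_CONFIG = {
    # Role S3 assumes to replicate objects (placeholder ARN).
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "london-to-tokyo",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                # Placeholder name for the ap-northeast-1 replica bucket.
                "Bucket": "arn:aws:s3:::example-static-site-replica-tokyo",
            },
        }
    ],
}
```

The DynamoDB side needs no rule like this: a Global Table replicates in both directions automatically once replicas are added in Stockholm and Frankfurt.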

Diagram: Multi-region resilience architecture — one-way S3 cross-region replication of the static-site bucket from London (eu-west-2) to Tokyo (ap-northeast-1) for global-content fan-out, alongside a DynamoDB Global Table for the processed-logs table across London, Stockholm (eu-north-1) and Frankfurt (eu-central-1) for European high availability, with measured propagation latencies of ~5s for DynamoDB and ~30s S3 cold-start / sub-second steady-state

The dots on this map are the same dots the LightPane region-map pane draws — the architecture and the demonstration share their geographic story.

Next: see all of this through LightPane's lens

The diagrams above describe the system from an architect's point of view — components, flows, regions. The Demonstration Dashboard shows the same system from an operator's point of view — organised by the five principles above, each pane reading live from the components in these diagrams.

That dashboard is how a customer would actually use LightPane on their own production environment: not to admire the architecture, but to spot the KMS key that hasn't rotated in 400 days, the Lambda that's been throttled since Thursday, the $30/month NAT-gateway charge nobody noticed.

Open the Demonstration Dashboard →