v1.1.5

The Reality of Cold Starts: Analyzing Fastro on Cloud Run


In the world of serverless computing, the cold start is the silent killer of user experience. When your Google Cloud Run service scales from zero, every millisecond your container spends downloading dependencies, generating manifests, or compiling CSS is a millisecond your user spends staring at a blank screen.

With Fastro, we've engineered a deployment strategy to aggressively move all heavy lifting from the runtime environment back to the build phase. However, as any seasoned engineer knows, the real world of networks, DNS, and container orchestration introduces unavoidable overhead. Here is exactly how we optimize it, and the honest numbers behind it.

The Problem: The Hidden Cost of Runtime Setup

A typical naive Deno deployment might download remote modules or generate routing manifests as the container boots. In a serverless environment like Cloud Run, this creates a cascade of issues:

  1. Latency Spikes: The very first request can take 5-8 seconds to respond.
  2. Resource Waste: Precious CPU cycles are burned on initialization rather than serving real user traffic.
  3. Fragility: If jsr.io or deno.land is experiencing a hiccup, your application's cold start becomes even slower—or fails entirely.
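For contrast, the naive setup described above might look like this (an illustrative anti-pattern sketch, not an actual Fastro Dockerfile):

# Anti-pattern: no build-time caching. Every remote module is
# downloaded from jsr.io / deno.land while the container boots.
FROM denoland/deno:alpine-2.1.9
WORKDIR /app
COPY . .
EXPOSE 8080
# Startup blocks on network fetches before serving a single request.
CMD ["deno", "run", "-A", "main.ts"]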

The Solution: The "Triple-Cached" Architecture

Our Dockerfile relies on a highly optimized multi-stage build designed around one core philosophy: Zero Runtime Initialization.

1. Eager Pre-Building

Instead of generating the application manifest or processing PostCSS at startup, we execute these tasks purely during the build stage:

RUN deno task build

This single command generates manifest.ts, compiles Tailwind CSS into static files, and structures SEO metadata long before the image ever reaches the Google Cloud registry.
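The exact contents of that task live in the project's deno.json. A hypothetical configuration along these lines (the build.ts entry point and task wiring are illustrative assumptions, not Fastro's actual file) shows the shape:

{
  "tasks": {
    "build": "deno run -A build.ts"
  }
}

Here build.ts would be the script that emits manifest.ts, runs Tailwind/PostCSS, and writes the static assets consumed by the runtime image.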

2. Deep Dependency Caching

We don't just copy source code; we snapshot the entire Deno cache ecosystem. By manipulating the DENO_DIR environment variable, we ensure all remote modules are firmly baked into the final image:

# Inside the builder stage
ENV DENO_DIR=/cache/.deno
RUN ENV=production deno cache --config deno.json main.ts ...

# Inside the runner stage
COPY --from=builder /cache/.deno /cache/.deno

When the container boot sequence begins on Cloud Run, Deno immediately finds everything it needs on the local disk. Result? Zero external network requests during startup.
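If you want that guarantee enforced rather than implied, Deno's --cached-only flag makes the process fail fast instead of silently reaching for the network. An optional hardening step (an assumption on our part, not part of the Dockerfile in this post):

# Optional hardening: --cached-only refuses any network fetch at
# startup, so a cache miss fails loudly instead of adding latency.
CMD ["sh", "-c", "ENV=production deno run --cached-only --unstable-kv -A main.ts ${PORT:-8080}"]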

3. An Ultra-Slim Runtime Image

We use the Alpine variant of the official Deno image. By leaving the build tools, compiler caches, and OS bloat behind, we keep the final production image incredibly lightweight (~102 MB). Smaller images are pulled, unpacked, and launched significantly faster by Google's infrastructure.

The Optimized Dockerfile

Here is the complete Dockerfile that powers Fastro's 53 ms server boot. It uses a dual-stage approach to separate the build environment from the lean runtime image.

# syntax=docker/dockerfile:1.4

FROM denoland/deno:alpine-2.1.9 AS builder

WORKDIR /app

ENV DENO_DIR=/build/.deno
RUN mkdir -p /build/.deno

COPY deno.json .
RUN deno install

COPY . .

RUN deno task build

ENV DENO_DIR=/cache/.deno
RUN mkdir -p /cache/.deno

RUN ENV=production deno cache --config deno.json \
      main.ts manifest.ts \
      $(find modules -type f -name '*.ts' -print | tr '\n' ' ')


FROM denoland/deno:alpine-2.1.9 AS runner

EXPOSE 8080
WORKDIR /app

ENV DENO_DIR=/cache/.deno

COPY --from=builder --chown=deno:deno /cache/.deno /cache/.deno
COPY --from=builder --chown=deno:deno /app /app

USER deno

CMD ["sh", "-c", "ENV=production deno run --unstable-kv -A main.ts ${PORT:-8080}"]
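Getting from this Dockerfile to a live Cloud Run service is a standard build-push-deploy flow (the project ID, service name, and region below are placeholders, not Fastro's actual configuration):

docker build -t gcr.io/PROJECT_ID/fastro:latest .
docker push gcr.io/PROJECT_ID/fastro:latest
gcloud run deploy fastro \
  --image gcr.io/PROJECT_ID/fastro:latest \
  --region us-central1 \
  --allow-unauthenticated

If cold starts matter more than idle cost, adding --min-instances 1 keeps one instance warm and sidesteps the scale-from-zero penalty entirely.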

Real-World Performance: The Fastro Advantage

We continually test our architecture against the live production environment at fastro.dev. Here are our latest cold start metrics on Google Cloud Run:

Metric                        Measured Value  What It Means
Image Size                    ~102 MB         Deno Alpine runtime + app
Cold Start (Total via Proxy)  ~8.6 s          First ping to response (includes Cloudflare, DNS, SSL)
Cold Start (Direct URL)       ~7.5 s          Bypassing proxies; raw Google Cloud frontend latency
Server Boot Time              ~53 ms          Fastro internal initialization (registering 11 modules)
Warm Start (Direct)           ~0.5 s          Standard end-to-end response time when active

Tip

Understanding the true bottleneck: server-side logs from Google Cloud show that the infrastructure spends the majority of that time (~7.5 s) pulling and unpacking the container image and allocating hardware. Once the container starts, however, Fastro is ready to serve in just 53 milliseconds. By pre-caching our dependencies, we eliminate the variable network-fetch time that would otherwise double this latency.
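You can reproduce this breakdown yourself. curl's --write-out timers split a single request into the same phases discussed above (the helper function below is ours, not part of Fastro):

```shell
# probe: print where the latency of one request goes.
# dns / connect / tls happen before the container is even involved;
# ttfb is where cold-start time shows up; total is end to end.
probe() {
  curl -s -o /dev/null -w \
    'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    "$1"
}
```

Running probe https://fastro.dev once against a scaled-to-zero service and again immediately after reproduces the cold-versus-warm gap in the table.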

Practical Engineering Over Marketing

What does this level of optimization mean for everyday applications powered by Fastro?

Build Once, Deploy Confidently

By leveraging a deeply optimized multi-stage Dockerfile that aggressively pre-caches both code and external dependencies, we've established a solid baseline for serverless deployments.

Fastro is built on engineering honesty: we can't eliminate the physics of cloud infrastructure scaling, but we absolutely ensure your application framework isn't the bottleneck.


This analysis is based on the Fastro v1.1.x architectural series running in production. For an interactive look at our performance capabilities, see our Benchmarks or explore the code directly at fastro.dev.