# Cloudflare Workers
Mandu's Cloudflare Workers adapter is the shipped edge path (Phase 15.1). `mandu build --target=workers` emits a bundled Worker plus a ready-to-use `wrangler.toml`. Bindings (KV, R2, D1, queues, analytics engines) are accessed through `AsyncLocalStorage`-backed accessors.
```bash
# Install
bun add @mandujs/edge
bun add -D wrangler

# Build
mandu build --target=workers

# Dev (local, against the real Workers runtime)
wrangler dev

# Deploy
wrangler deploy
```
## Emitted artifacts

```text
.
├── .mandu/
│   └── workers/
│       ├── worker.js    # bundled entry
│       └── register.js  # populates registries at runtime
└── wrangler.toml        # only emitted if absent — your edits survive rebuilds
```
## Manual wiring

If you prefer to assemble the Worker entry yourself:
```ts
// worker.ts
import { createWorkersHandler } from "@mandujs/edge/workers";
import manifest from "./.mandu/routes.manifest.json";
import "./.mandu/workers/register.js"; // populates registries

const fetch = createWorkersHandler(manifest, {
  cssPath: "/.mandu/client/globals.css",
});

export default { fetch };
```
Then point `wrangler.toml` at it:

```toml
main = "worker.ts"
compatibility_date = "2025-01-01"
compatibility_flags = ["nodejs_als"]
```
## wrangler.toml — what Mandu emits

```toml
# wrangler.toml
name = "my-mandu-app"  # derived from package.json name
main = ".mandu/workers/worker.js"
compatibility_date = "2025-01-01"
compatibility_flags = ["nodejs_als"]

[env.production]
name = "my-mandu-app-prod"

# Static asset handling (optional, paired with Pages)
[assets]
directory = "public"
binding = "ASSETS"
```
## Compatibility flag — nodejs_als

`AsyncLocalStorage` requires Node.js compatibility in the Workers runtime. Ensure `wrangler.toml` enables it:

```toml
# wrangler.toml
compatibility_flags = ["nodejs_als"]

# Or the full compat bundle (heavier):
# compatibility_flags = ["nodejs_compat"]
```
The `wrangler.toml` emitted by `mandu build --target=workers` already includes this flag. If you hand-roll your config, add it yourself — without it, `getWorkersEnv()` / `getWorkersCtx()` fall back to a per-`Request` `WeakMap`, which is isolated per request but won't carry ctx across `waitUntil` callbacks that outlive the fetch.
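The fallback's limitation is mechanical: a `WeakMap` lookup needs the `Request` object in hand, which the fetch handler has but detached work may not. A minimal self-contained illustration (not Mandu's actual code):

```ts
// Illustrative sketch of a per-Request WeakMap fallback (not Mandu's source).
const perRequest = new WeakMap<Request, { user: string }>();

function fetchHandler(req: Request): string {
  perRequest.set(req, { user: "alice" });
  // Inside the fetch handler we still hold `req`, so the lookup succeeds.
  return perRequest.get(req)!.user;
}

const user = fetchHandler(new Request("https://example.com/"));
console.log(user); // → alice
```

A `waitUntil` callback scheduled elsewhere has no `req` to key on, which is why the fallback cannot cover work that outlives the fetch.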
### Without `nodejs_als` vs with

| Scenario | Without flag | With flag |
|---|---|---|
| Synchronous access in the fetch handler | works | works |
| Access inside an awaited promise | works | works |
| Access inside `ctx.waitUntil(asyncFn())` that outlives the response | `undefined` | works |
| Access from scheduled events / queue consumers | `undefined` | works |

Always enable the flag. The emitted `wrangler.toml` does it for you.
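Node's `AsyncLocalStorage` (the same API the `nodejs_als` flag exposes) is what makes the last two rows work: context set with `.run()` survives into async work that outlives the caller. A runnable sketch, where `waitUntil` is a hypothetical stand-in for `ExecutionContext.waitUntil`:

```ts
import { AsyncLocalStorage } from "node:async_hooks";

interface Ctx {
  requestId: string;
}

const als = new AsyncLocalStorage<Ctx>();

// Hypothetical stand-in for ExecutionContext.waitUntil: the promise keeps
// running after the "response" below has already been returned.
function waitUntil(p: Promise<unknown>): void {
  void p;
}

let seenInWaitUntil: string | undefined;

async function handleRequest(requestId: string): Promise<string> {
  return als.run({ requestId }, async () => {
    // Detached work that outlives the response: only AsyncLocalStorage
    // preserves the context here.
    waitUntil(
      (async () => {
        await new Promise((r) => setTimeout(r, 10));
        seenInWaitUntil = als.getStore()?.requestId;
      })(),
    );
    // Synchronous access inside the handler works either way.
    return als.getStore()!.requestId;
  });
}

const direct = await handleRequest("req-1");
await new Promise((r) => setTimeout(r, 50)); // let the detached work finish
console.log(direct, seenInWaitUntil); // → req-1 req-1
```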
## Accessing bindings

Declare bindings in `wrangler.toml`:

```toml
[[kv_namespaces]]
binding = "SESSIONS"
id = "xxxxxxxxxxxxxxxx"

[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "my-app-uploads"

[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "yyyyyyyyyyyyyyyy"

[[queues.producers]]
binding = "ANALYTICS_QUEUE"
queue = "analytics"
```
Access them anywhere in your Mandu server code:

```ts
import { getWorkersEnv, getWorkersCtx } from "@mandujs/edge/workers";

export async function POST(req: Request) {
  const env = getWorkersEnv();
  const ctx = getWorkersCtx();

  // KV
  const session = await env!.SESSIONS.get("sid_123");

  // R2
  const avatar = await env!.UPLOADS.get("u/alice.png");

  // D1
  const row = await env!.DB
    .prepare("SELECT * FROM users WHERE id = ?")
    .bind(1)
    .first();

  // Queue — fire-and-forget without blocking the response
  ctx?.waitUntil(env!.ANALYTICS_QUEUE.send({ at: Date.now() }));

  return Response.json({ ok: true });
}
```
Concurrent requests in the same isolate never see each other's bindings — the accessors track bindings via `AsyncLocalStorage` with a per-`Request` `WeakMap` fallback.
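The isolation guarantee can be sketched with plain `AsyncLocalStorage`: two interleaved handlers each see only their own store. This is illustrative, not the adapter's implementation:

```ts
import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage<{ user: string }>();

// Each "request" runs in its own ALS scope; awaits interleave freely.
async function handler(user: string): Promise<string> {
  return als.run({ user }, async () => {
    await new Promise((r) => setTimeout(r, Math.random() * 20));
    return als.getStore()!.user; // still this request's store
  });
}

const [a, b] = await Promise.all([handler("alice"), handler("bob")]);
console.log(a, b); // → alice bob
```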
## Typed env

Declare the binding types in a module-augmenting `.d.ts`:

```ts
// src/types/env.d.ts
import type {
  KVNamespace,
  R2Bucket,
  D1Database,
  Queue,
} from "@cloudflare/workers-types";

declare module "@mandujs/edge/workers" {
  interface WorkersEnv {
    SESSIONS: KVNamespace;
    UPLOADS: R2Bucket;
    DB: D1Database;
    ANALYTICS_QUEUE: Queue<{ at: number }>;
  }
}
```
`getWorkersEnv()` now returns `WorkersEnv` — no more `env!.X` shouting at the type checker.
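The augmentation relies on TypeScript declaration merging. A self-contained illustration of the mechanism, with local names rather than Mandu's:

```ts
// Two declarations of the same interface in one scope merge into one type.
interface Env {
  SESSION_SECRET: string;
}
interface Env {
  DB_URL: string; // merged into Env alongside SESSION_SECRET
}

// Both keys are now required by the single merged Env type.
const env: Env = { SESSION_SECRET: "s3cr3t", DB_URL: "postgres://localhost/app" };
console.log(Object.keys(env).length); // → 2
```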
## Error responses

Uncaught exceptions return a 500 JSON payload:

```json
{
  "error": "InternalServerError",
  "correlationId": "019291f0-4a1c-7f2e-8c9a-abc...",
  "message": "Internal Server Error",
  "runtime": "workers"
}
```
In production (`NODE_ENV === "production"` or `env.ENVIRONMENT === "production"`), `message` is the generic `"Internal Server Error"`. In dev it includes the raw error message. Stack traces and `cause` are never included in the HTTP body. The full error is logged via `console.error` for Cloudflare Logpush or `wrangler tail` consumption.
For unsupported Bun APIs:

```json
{
  "error": "BunApiUnsupportedOnEdge",
  "api": "Bun.s3",
  "correlationId": "...",
  "message": "Bun.s3 is not yet polyfilled for the Workers runtime.",
  "migration_guide": "/docs/edge#unsupported-apis",
  "runtime": "workers"
}
```
## Runtime limits
| Limit | Free | Paid |
|---|---|---|
| CPU time per invocation | 10 ms | 30 s |
| Memory | 128 MB | 128 MB |
| Request body size | 100 MB | 500 MB |
| Script size (gzip) | 1 MB | 10 MB |
| Subrequests per invocation | 50 | 1000 |
The free-tier 10ms CPU cap is tight for SSR with many DB roundtrips. A small Mandu page with one D1 query fits comfortably; more complex routes may need the paid plan.
## Dev workflow

```bash
# One terminal: watch + rebuild on source changes
bun run build --watch

# Another terminal: serve via the real Workers runtime
wrangler dev
```

`wrangler dev` honors `wrangler.toml` bindings — add a local KV namespace:

```bash
wrangler kv:namespace create SESSIONS
# → wrangler prints the dev ID; add it to wrangler.toml under [[kv_namespaces]]
```
## Deploy

```bash
wrangler deploy

# Or pin to an environment
wrangler deploy --env production
```

`wrangler deploy` reads the `main` entry from `wrangler.toml` and uploads the bundled script. Mandu's adapter keeps the bundle small by only including modules transitively reached from the Worker entry.
## Secrets

Set runtime secrets via wrangler, not argv:

```bash
wrangler secret put SESSION_SECRET
wrangler secret put JWT_SECRET

# List
wrangler secret list
```

Secrets appear on `env` at runtime — same accessor as bindings:

```ts
import { getWorkersEnv } from "@mandujs/edge/workers";

const env = getWorkersEnv();
const signed = await signToken(env!.JWT_SECRET, payload);
```
## Common errors

**CLI_E213: Edge-runtime compatibility warning — import of 'fs'** — a module deep in your graph imports Node's `fs`. Find it with `mandu build --target=workers --verbose` and either:

- Swap it for a Web-standard alternative (`Bun.file` won't work either on Workers yet — see the polyfill map).
- Guard the import with a runtime check.

**`getWorkersEnv()` returns `undefined`** — confirm `compatibility_flags = ["nodejs_als"]` is present in `wrangler.toml`. Without it, access across `ctx.waitUntil()` boundaries returns `undefined`.
**Request hangs past 30s in dev** — `wrangler dev` honors the paid CPU limit by default. Test against the free-tier cap locally:

```bash
wrangler dev --cpu-limit-ms 10
```
## 🤖 Agent Prompt

```text
Apply the guidance from the Mandu docs page at https://mandujs.com/docs/edge/cloudflare-workers to my project.

Summary of the page:
Cloudflare Workers adapter: `mandu build --target=workers` emits `.mandu/workers/worker.js` + `wrangler.toml`. Bindings accessed via `getWorkersEnv()` / `getWorkersCtx()` — AsyncLocalStorage (requires `nodejs_als` compat flag) with per-Request WeakMap fallback. Uncaught 500s return structured JSON with `correlationId`, `runtime: 'workers'`.

Required invariants — must hold after your changes:
- Workers runtime is V8 isolates — 1–5ms cold start, 10ms CPU cap on free tier
- `mandu build --target=workers` emits `.mandu/workers/worker.js` (bundled) + `wrangler.toml` (only if absent)
- Bindings require `compatibility_flags = ["nodejs_als"]` in wrangler.toml for cross-`waitUntil` access
- Without nodejs_als, `getWorkersEnv()` / `getWorkersCtx()` fall back to per-Request WeakMap (fetch-time only)
- Uncaught 500s return JSON with `correlationId`, `runtime: 'workers'`, generic message in production
- Stack traces / `cause` are NEVER in the HTTP body — logged via `console.error` for Logpush

Then:
1. Make the change in my codebase consistent with the page.
2. Run `bun run guard` and `bun run check` to verify nothing in src/ or app/ breaks Mandu's invariants.
3. Show me the diff and any guard violations.
```
## Related
- Edge index — polyfill mapping overview.
- Deploy — Cloudflare Pages — the Pages+Functions variant (different deploy model, same V8 isolate).
- Architect — Prerender — static prerendering reduces the edge CPU you need.
## For Agents

```json
{
  "schema": "mandu.edge.workers/v0.24",
  "package_entry": "@mandujs/edge/workers",
  "handler_factory": "createWorkersHandler(manifest, opts)",
  "build_command": "mandu build --target=workers",
  "wrangler_artifacts": [".mandu/workers/worker.js", "wrangler.toml"],
  "required_compat_flags": ["nodejs_als"],
  "accessors": {
    "getWorkersEnv": "WorkersEnv | undefined (depends on nodejs_als)",
    "getWorkersCtx": "ExecutionContext | undefined"
  },
  "error_response_shape": {
    "error": "InternalServerError | BunApiUnsupportedOnEdge",
    "correlationId": "uuid v7",
    "runtime": "workers"
  },
  "cpu_limits_ms": { "free": 10, "paid": 30000 },
  "rules": [
    "ALWAYS include `compatibility_flags = ['nodejs_als']`",
    "Secrets via `wrangler secret put`, NOT argv or inline",
    "Augment `WorkersEnv` in a .d.ts for typed bindings",
    "Stack traces NEVER in HTTP body — logged via console.error for Logpush"
  ]
}
```