Cloudflare Workers
https://github.com/cloudflare/workers-sdk
The Cloudflare Workers SDK provides tools and libraries for deploying serverless code globally,
...
Tokens: 76,986 · Snippets: 701 · Trust Score: 9.3 · Updated: 1 month ago
Context Summary (auto-generated)
# Cloudflare Workers SDK

The Cloudflare Workers SDK is a comprehensive monorepo providing tools for building, testing, and deploying serverless applications on Cloudflare's global network. The SDK includes Wrangler (the primary CLI tool for Workers development), Miniflare (a local simulator powered by workerd), create-cloudflare (C3, project scaffolding), and supporting packages for testing, asset handling, and Vite integration.

The Workers platform enables developers to deploy JavaScript, TypeScript, and Python code that runs on Cloudflare's edge network, with access to bindings such as KV storage, R2 object storage, D1 databases, Durable Objects, Queues, and Workflows. The SDK covers the complete development workflow from project creation through local development, testing, and production deployment.

## Wrangler CLI - Create a New Project

Wrangler is the command-line interface for building Cloudflare Workers. Use `npm create cloudflare@latest` to scaffold a new project with interactive prompts for project type and configuration.

```bash
# Create a new Workers project with interactive setup
npm create cloudflare@latest

# Create a project with a specific name
npm create cloudflare@latest my-worker

# Initialize a basic TypeScript Worker
npx wrangler init my-worker -y

# Project structure created:
# my-worker/
# ├── src/
# │   └── index.ts
# ├── wrangler.jsonc
# ├── package.json
# └── tsconfig.json
```

## Wrangler Configuration File

Wrangler uses `wrangler.jsonc`, `wrangler.json`, or `wrangler.toml` for project configuration. The configuration defines the Worker name, entry point, compatibility settings, and bindings to Cloudflare services.
```jsonc
// wrangler.jsonc - Basic Worker configuration
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2024-01-01",

  // KV Namespace binding
  "kv_namespaces": [
    { "binding": "MY_KV", "id": "your-kv-namespace-id" }
  ],

  // R2 Bucket binding
  "r2_buckets": [
    { "binding": "MY_BUCKET", "bucket_name": "my-bucket" }
  ],

  // D1 Database binding
  "d1_databases": [
    { "binding": "DB", "database_name": "my-database", "database_id": "your-database-id" }
  ],

  // Durable Objects binding
  "durable_objects": {
    "bindings": [
      { "name": "MY_DO", "class_name": "MyDurableObject" }
    ]
  },

  // Queue configuration
  "queues": {
    "producers": [
      { "binding": "MY_QUEUE", "queue": "my-queue" }
    ],
    "consumers": [
      { "queue": "my-queue", "max_batch_size": 10 }
    ]
  }
}
```

## Wrangler CLI - Development and Deployment

Wrangler provides commands for local development with live reloading and deployment to Cloudflare's network. The `dev` command starts a local server powered by workerd for accurate runtime behavior.
```bash
# Start local development server with live reload
wrangler dev

# Start dev server on a specific port
wrangler dev --port 3000

# Deploy Worker to Cloudflare
wrangler deploy

# Deploy with a specific name
wrangler deploy --name my-production-worker

# View deployment logs in real-time
wrangler tail

# List all deployed Workers
wrangler deployments list

# Manage secrets
wrangler secret put MY_SECRET
wrangler secret list
wrangler secret delete MY_SECRET

# Manage KV namespaces
wrangler kv namespace create "MY_KV"
wrangler kv key put --binding MY_KV "key" "value"
wrangler kv key get --binding MY_KV "key"

# Manage R2 buckets
wrangler r2 bucket create my-bucket
wrangler r2 object put my-bucket/path/to/file --file ./local-file.txt

# Manage D1 databases
wrangler d1 create my-database
wrangler d1 execute my-database --file ./schema.sql
wrangler d1 execute my-database --command "SELECT * FROM users"
```

## Worker Fetch Handler - Basic HTTP Handler

The fetch handler is the primary entry point for HTTP requests. It receives a Request object and returns a Response, with access to environment bindings and the execution context.

```typescript
// src/index.ts - Basic Worker with fetch handler
export interface Env {
  MY_KV: KVNamespace;
  MY_BUCKET: R2Bucket;
  DB: D1Database;
  API_KEY: string; // Secret binding
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);

    // Route handling
    if (url.pathname === "/api/data") {
      // Access KV storage
      const value = await env.MY_KV.get("my-key");
      return Response.json({ data: value });
    }

    if (url.pathname === "/api/upload" && request.method === "POST") {
      // Upload to R2
      const file = await request.arrayBuffer();
      await env.MY_BUCKET.put("uploads/file.bin", file);
      return new Response("Uploaded", { status: 201 });
    }

    if (url.pathname === "/api/users") {
      // Query D1 database (SQLite has no boolean type; bind 1/0)
      const { results } = await env.DB.prepare(
        "SELECT * FROM users WHERE active = ?"
      ).bind(1).all();
      return Response.json(results);
    }

    // Use waitUntil for background tasks
    ctx.waitUntil(logRequest(request));

    return new Response("Hello World!");
  }
};

async function logRequest(request: Request) {
  // Background logging - doesn't block the response
  console.log(`Request: ${request.method} ${request.url}`);
}
```

## KV Storage - Key-Value Store Operations

Workers KV provides low-latency key-value storage at the edge. It is optimized for read-heavy workloads, with eventual consistency for writes.

```typescript
// KV operations in a Worker
export interface Env {
  KV_NAMESPACE: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // Remove leading slash

    // List keys with pagination. Note: "LIST" is not an HTTP method, so this
    // is exposed as a GET route and checked before the generic GET branch.
    if (request.method === "GET" && url.pathname === "/list") {
      const list = await env.KV_NAMESPACE.list({
        prefix: "users:",
        limit: 100,
        cursor: url.searchParams.get("cursor") || undefined
      });
      return Response.json({
        keys: list.keys,
        cursor: "cursor" in list ? list.cursor : undefined,
        complete: list.list_complete
      });
    }

    if (request.method === "GET") {
      // Get value with different return types
      const textValue = await env.KV_NAMESPACE.get(key); // string | null
      const jsonValue = await env.KV_NAMESPACE.get(key, "json"); // object | null
      const streamValue = await env.KV_NAMESPACE.get(key, "stream"); // ReadableStream | null
      const bufferValue = await env.KV_NAMESPACE.get(key, "arrayBuffer"); // ArrayBuffer | null

      // Get with metadata
      const { value, metadata } = await env.KV_NAMESPACE.getWithMetadata(key);

      if (value === null) {
        return new Response("Not Found", { status: 404 });
      }
      return new Response(value);
    }

    if (request.method === "PUT") {
      const body = await request.text();

      // Put with options. Use either an absolute expiration or a TTL, not both.
      await env.KV_NAMESPACE.put(key, body, {
        // expiration: Math.floor(Date.now() / 1000) + 3600, // Absolute Unix timestamp
        expirationTtl: 86400, // TTL in seconds
        metadata: { contentType: "text/plain", uploadedAt: Date.now() }
      });
      return new Response("Stored", { status: 201 });
    }

    if (request.method === "DELETE") {
      await env.KV_NAMESPACE.delete(key);
      return new Response(null, { status: 204 }); // 204 responses must have no body
    }

    return new Response("Method Not Allowed", { status: 405 });
  }
};
```

## R2 Object Storage - S3-Compatible Storage

R2 provides zero-egress-fee object storage with an S3-compatible API. It supports multipart uploads, conditional operations, and HTTP metadata.

```typescript
// R2 operations in a Worker
export interface Env {
  R2_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    // List objects with pagination (checked before the generic GET branch,
    // which would otherwise shadow this route)
    if (request.method === "GET" && url.pathname === "/list") {
      const listed = await env.R2_BUCKET.list({
        prefix: url.searchParams.get("prefix") || undefined,
        delimiter: "/", // For directory-like listing
        limit: 100,
        cursor: url.searchParams.get("cursor") || undefined
      });
      return Response.json({
        objects: listed.objects.map(obj => ({
          key: obj.key,
          size: obj.size,
          uploaded: obj.uploaded
        })),
        truncated: listed.truncated,
        cursor: "cursor" in listed ? listed.cursor : undefined,
        delimitedPrefixes: listed.delimitedPrefixes
      });
    }

    if (request.method === "GET") {
      // Get object with conditional headers
      const object = await env.R2_BUCKET.get(key, {
        onlyIf: {
          etagMatches: request.headers.get("If-Match") || undefined,
          etagDoesNotMatch: request.headers.get("If-None-Match") || undefined,
        },
        range: request.headers.get("Range")
          ? { offset: 0, length: 1000 } // Parse the Range header as needed
          : undefined
      });

      if (object === null) {
        return new Response("Not Found", { status: 404 });
      }
      // With onlyIf, a failed precondition returns metadata without a body
      if (!("body" in object)) {
        return new Response(null, { status: 304 });
      }

      // Build response with HTTP metadata
      const headers = new Headers();
      object.writeHttpMetadata(headers);
      headers.set("etag", object.httpEtag);
      headers.set("Content-Length", object.size.toString());

      return new Response(object.body, { headers });
    }

    if (request.method === "PUT") {
      // Upload with metadata
      const object = await env.R2_BUCKET.put(key, request.body, {
        httpMetadata: {
          contentType: request.headers.get("Content-Type") || "application/octet-stream",
          cacheControl: "public, max-age=86400",
        },
        customMetadata: {
          uploadedBy: "worker",
          timestamp: Date.now().toString()
        },
        // Conditional put
        onlyIf: {
          etagDoesNotMatch: "*" // Only if the object doesn't already exist
        }
      });

      // A conditional put returns null if the precondition fails
      if (object === null) {
        return new Response("Precondition Failed", { status: 412 });
      }
      return Response.json({
        key: object.key,
        size: object.size,
        etag: object.httpEtag
      }, { status: 201 });
    }

    if (request.method === "DELETE") {
      await env.R2_BUCKET.delete(key);
      // Or delete multiple objects:
      // await env.R2_BUCKET.delete(["key1", "key2", "key3"]);
      return new Response(null, { status: 204 });
    }

    return new Response("Method Not Allowed", { status: 405 });
  }
};
```

## D1 Database - SQLite at the Edge

D1 is Cloudflare's serverless SQL database built on SQLite. It provides familiar SQL with automatic replication and zero infrastructure management.

```typescript
// D1 database operations
export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Query a single row
    if (url.pathname === "/user") {
      const id = url.searchParams.get("id");
      const user = await env.DB.prepare(
        "SELECT * FROM users WHERE id = ?"
      ).bind(id).first();

      if (!user) {
        return Response.json({ error: "Not found" }, { status: 404 });
      }
      return Response.json(user);
    }

    // Query multiple rows (the method check keeps this branch from
    // shadowing the POST /users route below)
    if (url.pathname === "/users" && request.method === "GET") {
      const { results, meta } = await env.DB.prepare(
        "SELECT id, name, email FROM users WHERE active = ? ORDER BY name LIMIT ?"
      ).bind(1, 50).all(); // Bind 1/0 for booleans
      return Response.json({
        users: results,
        duration: meta.duration,
        rowsRead: meta.rows_read
      });
    }

    // Insert data
    if (url.pathname === "/users" && request.method === "POST") {
      const { name, email } = await request.json() as { name: string; email: string };
      const result = await env.DB.prepare(
        "INSERT INTO users (name, email, active, created_at) VALUES (?, ?, ?, datetime('now'))"
      ).bind(name, email, 1).run();
      return Response.json({
        success: result.success,
        insertedId: result.meta.last_row_id
      }, { status: 201 });
    }

    // Batch operations (transaction-like)
    if (url.pathname === "/batch" && request.method === "POST") {
      const { users } = await request.json() as { users: { name: string; email: string }[] };
      const statements = users.map(user =>
        env.DB.prepare(
          "INSERT INTO users (name, email, active) VALUES (?, ?, ?)"
        ).bind(user.name, user.email, 1)
      );

      // Execute all statements in a single batch
      const results = await env.DB.batch(statements);
      return Response.json({
        inserted: results.length,
        results: results.map(r => r.meta.last_row_id)
      });
    }

    // Raw SQL execution (use with caution)
    if (url.pathname === "/exec" && request.method === "POST") {
      const sql = await request.text();
      const result = await env.DB.exec(sql);
      return Response.json(result);
    }

    return new Response("Not Found", { status: 404 });
  }
};

// D1 schema example (schema.sql):
// CREATE TABLE IF NOT EXISTS users (
//   id INTEGER PRIMARY KEY AUTOINCREMENT,
//   name TEXT NOT NULL,
//   email TEXT UNIQUE NOT NULL,
//   active INTEGER DEFAULT 1, -- SQLite stores booleans as 0/1
//   created_at TEXT DEFAULT (datetime('now'))
// );
// CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
```

## Durable Objects - Stateful Serverless

Durable Objects provide strongly consistent, stateful compute with transactional storage. Each object has a unique ID and maintains its state across requests.
```typescript
// Durable Object implementation
import { DurableObject } from "cloudflare:workers";

export interface Env {
  COUNTER: DurableObjectNamespace;
}

// Durable Object class
export class Counter extends DurableObject {
  private count: number = 0;

  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
  }

  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/increment") {
      // Transactional storage
      this.count = (await this.ctx.storage.get<number>("count")) || 0;
      this.count++;
      await this.ctx.storage.put("count", this.count);
      return Response.json({ count: this.count });
    }

    if (url.pathname === "/decrement") {
      this.count = (await this.ctx.storage.get<number>("count")) || 0;
      this.count--;
      await this.ctx.storage.put("count", this.count);
      return Response.json({ count: this.count });
    }

    if (url.pathname === "/get") {
      this.count = (await this.ctx.storage.get<number>("count")) || 0;
      return Response.json({ count: this.count });
    }

    return new Response("Not Found", { status: 404 });
  }

  // Alarm handler for scheduled work
  async alarm(): Promise<void> {
    // Perform scheduled work
    console.log("Alarm triggered!");
    await this.ctx.storage.delete("alarm-scheduled");
  }
}

// Worker using the Durable Object
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const counterId = url.searchParams.get("id") || "default";

    // Get a Durable Object stub by name-derived ID
    const id = env.COUNTER.idFromName(counterId);
    const counter = env.COUNTER.get(id);

    // Forward the request to the Durable Object
    return counter.fetch(request);
  }
};
```

## Queues - Asynchronous Message Processing

Cloudflare Queues enable reliable asynchronous message processing with guaranteed delivery, batching, and retry support.
```typescript
// Queue producer and consumer
interface QueueMessage {
  key: string;
  value: string;
  timestamp: number;
}

export interface Env {
  QUEUE_PRODUCER: Queue<QueueMessage>;
  RESULTS: KVNamespace;
}

export default {
  // HTTP handler - produce messages
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method === "POST") {
      const data = await request.json() as { key: string; value: string };

      // Send a single message
      await env.QUEUE_PRODUCER.send({
        key: data.key,
        value: data.value,
        timestamp: Date.now()
      });

      // Or send a batch of messages:
      // await env.QUEUE_PRODUCER.sendBatch([
      //   { body: { key: "key1", value: "value1", timestamp: Date.now() } },
      //   { body: { key: "key2", value: "value2", timestamp: Date.now() } }
      // ]);

      return new Response("Message queued", { status: 202 });
    }
    return new Response("Method Not Allowed", { status: 405 });
  },

  // Queue consumer handler
  async queue(batch: MessageBatch<QueueMessage>, env: Env, ctx: ExecutionContext): Promise<void> {
    for (const message of batch.messages) {
      try {
        // Process the message
        const result = message.body.value.toUpperCase();
        await env.RESULTS.put(message.body.key, result);

        // Acknowledge successful processing
        message.ack();
      } catch (error) {
        // Retry the message (it will be redelivered)
        message.retry({
          delaySeconds: 10 // Optional delay before retry
        });
      }
    }

    // Or acknowledge/retry everything at once:
    // batch.ackAll();
    // batch.retryAll();
  }
} satisfies ExportedHandler<Env, QueueMessage>;
```

## Workflows - Durable Execution

Workflows enable long-running, durable computations that survive failures and can wait for external events. They are well suited to multi-step processes and orchestration.
```typescript
// Workflow definition
import {
  WorkerEntrypoint,
  WorkflowEntrypoint,
  WorkflowEvent,
  WorkflowStep
} from "cloudflare:workers";
import { NonRetryableError } from "cloudflare:workflows";

interface OrderParams {
  orderId: string;
  customerId: string;
  items: { productId: string; quantity: number }[];
}

export class OrderWorkflow extends WorkflowEntrypoint<Env, OrderParams> {
  async run(event: WorkflowEvent<OrderParams>, step: WorkflowStep) {
    const { payload } = event;

    // Step 1: Validate the order
    const validation = await step.do("validate-order", async () => {
      // Validation logic
      if (payload.items.length === 0) {
        throw new NonRetryableError("Order has no items");
      }
      return { valid: true, total: payload.items.length };
    });

    // Step 2: Process payment (with retry configuration)
    const payment = await step.do(
      "process-payment",
      {
        retries: {
          limit: 3,
          delay: 1000,
          backoff: "exponential"
        }
      },
      async () => {
        // Payment processing
        return { transactionId: crypto.randomUUID(), status: "completed" };
      }
    );

    // Step 3: Wait for a fixed duration
    await step.sleep("wait-for-processing", "30 seconds");

    // Step 4: Wait for an external event (e.g., shipping confirmation)
    const shipmentEvent = await step.waitForEvent("await-shipment", {
      type: "shipment-confirmed",
      timeout: "24 hours"
    });

    // Step 5: Send a notification
    await step.do("send-notification", async () => {
      // Send email/notification
      return { notified: true };
    });

    return {
      orderId: payload.orderId,
      status: "completed",
      transactionId: payment.transactionId
    };
  }
}

// Worker to manage workflow instances
interface Env {
  ORDER_WORKFLOW: Workflow<OrderParams>;
}

export default class extends WorkerEntrypoint<Env> {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/orders" && request.method === "POST") {
      const params = await request.json() as OrderParams;

      // Create a new workflow instance
      const instance = await this.env.ORDER_WORKFLOW.create({
        id: `order-${params.orderId}`,
        params
      });
      return Response.json({ workflowId: instance.id });
    }

    if (url.pathname === "/orders/status") {
      const id = url.searchParams.get("id")!;
      const instance = await this.env.ORDER_WORKFLOW.get(id);
      const status = await instance.status();
      return Response.json(status);
    }

    if (url.pathname === "/orders/event" && request.method === "POST") {
      const id = url.searchParams.get("id")!;
      const event = await request.json() as { type: string; payload: unknown };
      const instance = await this.env.ORDER_WORKFLOW.get(id);
      await instance.sendEvent(event);
      return Response.json({ sent: true });
    }

    return new Response("Not Found", { status: 404 });
  }
}
```

## Scheduled Events - Cron Triggers

Scheduled handlers allow Workers to run on a cron schedule without incoming HTTP requests. They are well suited to periodic tasks such as cleanup, reporting, or data synchronization.

```typescript
// Scheduled event handler
export interface Env {
  DB: D1Database;
  KV: KVNamespace;
}

export default {
  // HTTP handler
  async fetch(request: Request, env: Env): Promise<Response> {
    return new Response("Worker is running");
  },

  // Scheduled handler - triggered by cron
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext): Promise<void> {
    const { scheduledTime, cron } = controller;
    console.log(`Cron triggered: ${cron} at ${new Date(scheduledTime).toISOString()}`);

    // Perform the task matching the triggering cron expression
    switch (cron) {
      case "0 0 * * *": // Daily at midnight
        await cleanupOldRecords(env);
        break;
      case "*/5 * * * *": // Every 5 minutes
        await updateMetrics(env);
        break;
      case "0 * * * *": // Every hour
        await generateReport(env);
        break;
    }
  }
} satisfies ExportedHandler<Env>;

async function cleanupOldRecords(env: Env) {
  await env.DB.prepare(
    "DELETE FROM logs WHERE created_at < datetime('now', '-30 days')"
  ).run();
}

async function updateMetrics(env: Env) {
  const count = await env.DB.prepare("SELECT COUNT(*) as count FROM users").first();
  await env.KV.put("metrics:user_count", JSON.stringify(count));
}

async function generateReport(env: Env) {
  // Generate and store the report
}
```

## Miniflare - Local Development Simulator

Miniflare is a local simulator for Cloudflare Workers powered by workerd. It provides accurate local development with full access to Workers APIs and bindings.

```typescript
// Using Miniflare programmatically for testing
import { Miniflare, Log, LogLevel } from "miniflare";

// Create a Miniflare instance
const mf = new Miniflare({
  script: `
    export default {
      async fetch(request, env) {
        const value = await env.KV.get("key");
        return new Response(value || "not found");
      }
    }
  `,
  modules: true,

  kvNamespaces: ["KV"],
  kvPersist: "./data/kv", // Persist KV data to disk

  // D1 database
  d1Databases: ["DB"],
  d1Persist: "./data/d1",

  // R2 bucket
  r2Buckets: ["BUCKET"],
  r2Persist: "./data/r2",

  // Durable Objects
  durableObjects: { COUNTER: "Counter" },
  durableObjectsPersist: "./data/do",

  // Logging
  log: new Log(LogLevel.DEBUG),

  // Server configuration
  port: 8787,
  host: "127.0.0.1"
});

// Wait for the server to start
const url = await mf.ready;
console.log(`Listening on ${url}`);

// Send requests
const response = await mf.dispatchFetch("http://localhost/api/data");
console.log(await response.text());

// Access bindings directly
const kv = await mf.getKVNamespace("KV");
await kv.put("key", "value");
const value = await kv.get("key");

const db = await mf.getD1Database("DB");
await db.exec("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)");

const bucket = await mf.getR2Bucket("BUCKET");
await bucket.put("file.txt", "Hello, R2!");

// Cleanup
await mf.dispose();
```

## Vitest Pool Workers - Testing in the Workers Runtime

The @cloudflare/vitest-pool-workers package enables running Vitest tests inside the Workers runtime for accurate testing of Workers code with full access to runtime APIs.
```typescript
// vitest.config.ts
import { defineWorkersProject } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersProject({
  test: {
    poolOptions: {
      workers: {
        singleWorker: true,
        miniflare: {
          compatibilityFlags: ["service_binding_extra_handlers"],
          kvNamespaces: ["KV"],
          r2Buckets: ["BUCKET"]
        },
        wrangler: { configPath: "./wrangler.jsonc" }
      }
    }
  }
});
```

```typescript
// test/worker.test.ts
import { SELF, env } from "cloudflare:test";
import { describe, it, expect, beforeAll } from "vitest";
import worker from "../src/index";

describe("Worker Tests", () => {
  // Integration test using SELF
  it("handles fetch requests", async () => {
    const response = await SELF.fetch("http://example.com/api/data");
    expect(response.status).toBe(200);
    const data = await response.json();
    expect(data).toHaveProperty("success", true);
  });

  // Unit test calling the handler directly
  it("unit test fetch handler", async () => {
    const request = new Request("http://example.com/");
    const response = await worker.fetch(request, env, {
      waitUntil: () => {},
      passThroughOnException: () => {}
    } as ExecutionContext);
    expect(await response.text()).toBe("Hello World!");
  });

  // Test with bindings
  it("uses KV storage", async () => {
    // Access KV directly in tests
    await env.KV.put("test-key", "test-value");
    const response = await SELF.fetch("http://example.com/get?key=test-key");
    expect(await response.text()).toBe("test-value");
  });

  // Test the scheduled handler
  // (requires the service_binding_extra_handlers compatibility flag)
  it("handles scheduled events", async () => {
    const result = await SELF.scheduled({
      scheduledTime: Date.now(),
      cron: "* * * * *"
    });
    expect(result.outcome).toBe("ok");
  });

  // Test the queue handler
  it("processes queue messages", async () => {
    const result = await SELF.queue("test-queue", [
      { id: "1", body: { data: "message1" } },
      { id: "2", body: { data: "message2" } }
    ]);
    expect(result.outcome).toBe("ok");
    expect(result.ackAll).toBe(true);
  });
});
```

## Vite Plugin - Full-Stack Development

The @cloudflare/vite-plugin provides seamless Vite integration for developing Workers with hot module replacement and full access to Workers APIs.

```typescript
// vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      // Use the wrangler.jsonc configuration
      configPath: "./wrangler.jsonc",
      // Or configure inline:
      // worker: {
      //   name: "my-worker",
      //   main: "./src/worker.ts"
      // }
    })
  ],
  build: {
    // Output directory for static assets
    outDir: "dist"
  }
});
```

```typescript
// src/worker.ts - Worker with assets
export interface Env {
  ASSETS: Fetcher;
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // API routes
    if (url.pathname.startsWith("/api/")) {
      if (url.pathname === "/api/users") {
        const { results } = await env.DB.prepare("SELECT * FROM users").all();
        return Response.json(results);
      }
      return Response.json({ error: "Not found" }, { status: 404 });
    }

    // Serve static assets for all other routes
    return env.ASSETS.fetch(request);
  }
};
```

## KV Asset Handler - Static Site Serving

The @cloudflare/kv-asset-handler package provides utilities for serving static assets from KV storage with caching, SPA support, and custom routing.
```typescript
// Worker serving static assets from KV
import {
  getAssetFromKV,
  mapRequestToAsset,
  serveSinglePageApp,
  NotFoundError,
  MethodNotAllowedError
} from "@cloudflare/kv-asset-handler";
import manifestJSON from "__STATIC_CONTENT_MANIFEST";

const assetManifest = JSON.parse(manifestJSON);

export interface Env {
  __STATIC_CONTENT: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    try {
      // Basic asset serving
      return await getAssetFromKV(
        {
          request,
          waitUntil(promise) {
            return ctx.waitUntil(promise);
          }
        },
        {
          ASSET_NAMESPACE: env.__STATIC_CONTENT,
          ASSET_MANIFEST: assetManifest,

          // Cache configuration
          cacheControl: {
            browserTTL: 60 * 60 * 24,  // 1 day browser cache
            edgeTTL: 60 * 60 * 24 * 7, // 1 week edge cache
            bypassCache: false
          },

          // For SPAs - serve index.html for all routes
          mapRequestToAsset: serveSinglePageApp,

          // Or custom routing:
          // mapRequestToAsset: (request) => {
          //   const url = new URL(request.url);
          //   if (url.pathname.startsWith("/docs")) {
          //     url.pathname = url.pathname.replace("/docs", "");
          //   }
          //   return mapRequestToAsset(new Request(url.toString(), request));
          // }
        }
      );
    } catch (e) {
      if (e instanceof NotFoundError) {
        return new Response("Page not found", { status: 404 });
      } else if (e instanceof MethodNotAllowedError) {
        return new Response("Method not allowed", { status: 405 });
      }
      return new Response("Internal error", { status: 500 });
    }
  }
};
```

## Service Bindings - Worker-to-Worker Communication

Service bindings enable secure, zero-latency communication between Workers without going through the public internet.
```typescript
// Worker A - API service
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/process") {
      const data = await request.json();
      return Response.json({ processed: true, data });
    }
    return new Response("Not Found", { status: 404 });
  }
};
```

```typescript
// Worker B - Consumer using a service binding
export interface Env {
  API_SERVICE: Fetcher; // Service binding to Worker A
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Call the bound service
    const response = await env.API_SERVICE.fetch(
      new Request("http://internal/api/process", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ key: "value" })
      })
    );
    const result = await response.json();
    return Response.json({ fromService: result });
  }
};
```

```jsonc
// wrangler.jsonc for Worker B
{
  "name": "worker-b",
  "main": "src/index.ts",
  "compatibility_date": "2024-01-01",
  "services": [
    { "binding": "API_SERVICE", "service": "worker-a" }
  ]
}
```

## Workers with Assets - Static Asset Serving

Workers with Assets allows serving static files alongside your Worker code. The asset worker handles static files while your Worker handles dynamic routes.

```typescript
// Worker with integrated static assets
export interface Env {
  ASSETS: Fetcher; // Binding to the asset worker
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // API routes handled by the Worker
    if (url.pathname.startsWith("/api/")) {
      return handleApiRequest(request);
    }

    // Static assets handled by the asset binding
    // (HTML, CSS, JS, images, etc. fall through to the asset worker)
    return env.ASSETS.fetch(request);
  }
};

async function handleApiRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/api/hello") {
    return Response.json({ message: "Hello from API!" });
  }
  return Response.json({ error: "Not found" }, { status: 404 });
}
```

```jsonc
// wrangler.jsonc with assets configuration
{
  "name": "my-worker-with-assets",
  "main": "src/index.ts",
  "compatibility_date": "2024-01-01",
  "assets": {
    "directory": "./public",
    "binding": "ASSETS",
    "html_handling": "auto-trailing-slash",
    "not_found_handling": "single-page-application"
  }
}
```

The Cloudflare Workers SDK provides a complete development ecosystem for building serverless applications at the edge. The primary use cases include API backends with KV/R2/D1 storage, full-stack applications with static asset serving, real-time applications using Durable Objects, background job processing with Queues and Workflows, and scheduled tasks with cron triggers. Integration patterns typically involve combining multiple bindings (KV for caching, D1 for relational data, R2 for files) within a single Worker, using service bindings for microservice architectures, and leveraging Durable Objects for stateful coordination. The SDK's testing tools (the Vitest pool and Miniflare) enable comprehensive testing from unit to integration level, while the Vite plugin integrates modern frontend tooling into full-stack development workflows.
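The KV-plus-D1 combination mentioned above usually takes the form of a cache-aside read: check KV first, fall back to D1 on a miss, then populate the cache with a TTL. A minimal sketch of that pattern; the `KVLike` and `D1Like` interfaces and the in-memory mocks are hypothetical stand-ins for the real `KVNamespace` and `D1Database` bindings, so the snippet runs outside the Workers runtime:

```typescript
// Cache-aside read combining a KV-style cache with a D1-style database.
// KVLike and D1Like are hypothetical minimal interfaces, not real SDK types.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}
interface D1Like {
  queryUserById(id: string): Promise<Record<string, unknown> | null>;
}

async function getUserCached(kv: KVLike, db: D1Like, id: string) {
  const cacheKey = `user:${id}`;

  // 1. Try the cache first (fast, eventually consistent)
  const hit = await kv.get(cacheKey);
  if (hit !== null) return { user: JSON.parse(hit), cached: true };

  // 2. Fall back to the database (source of truth)
  const user = await db.queryUserById(id);

  // 3. Populate the cache for subsequent reads, bounded by a TTL
  if (user !== null) {
    await kv.put(cacheKey, JSON.stringify(user), { expirationTtl: 300 });
  }
  return { user, cached: false };
}

// In-memory mocks so the flow can be demonstrated outside Workers
const store = new Map<string, string>();
const kv: KVLike = {
  async get(k) { return store.get(k) ?? null; },
  async put(k, v) { store.set(k, v); },
};
const db: D1Like = {
  async queryUserById(id) { return { id, name: "Ada" }; },
};

const first = await getUserCached(kv, db, "1");  // miss: reads the DB, fills the cache
const second = await getUserCached(kv, db, "1"); // hit: served from the cache
```

In a real Worker the same function would take `env.MY_KV` and `env.DB` directly; the TTL matters because KV writes are eventually consistent, so cached rows can be briefly stale after a D1 update.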