Cloudflare Workers Research
Research Focus: Architecture and patterns for migrating MonoTask's task automation and agent orchestration system to Cloudflare Workers platform.
Table of Contents
- Workers Architecture Overview
- Durable Objects for Queue Management
- WebSocket Handling for Real-time Updates
- KV Storage for Session Management
- Worker-to-Worker Communication
- Environment Variables and Secrets
- Scheduling and Cron Triggers
- Cloudflare Queues
- Workflows for Durable Execution
- Error Handling and Logging
- Performance Limits and Quotas
- Cold Start Mitigation Strategies
- Code Examples and Patterns
Workers Architecture Overview
V8 Isolate Model
Cloudflare Workers run on V8 isolates rather than containers, providing:
- Faster startup: Cold starts in milliseconds vs seconds for containers
- Lower overhead: Multiple Workers share a single V8 process
- Better density: Higher concurrent execution per machine
- Memory isolation: Secure execution boundaries without container overhead
Runtime Environment
Workers support multiple languages and frameworks:
- Languages: JavaScript, TypeScript, Python, Rust, WebAssembly
- Frameworks: React, Vue, Next.js, Astro, Hono, React Router
- APIs: Standard Web APIs (fetch, Request, Response, WebSocket)
- Limited Node.js support: No filesystem access, and Node-specific modules require the `nodejs_compat` flag; browser-only APIs such as XMLHttpRequest are also unavailable (use `fetch` instead)
Request/Response Lifecycle
```typescript
export interface Env {
  // Bindings: KV, D1, R2, Durable Objects, etc.
  MY_KV: KVNamespace;
  MY_DURABLE_OBJECT: DurableObjectNamespace;
}

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext
  ): Promise<Response> {
    // Process request
    const url = new URL(request.url);
    // Access bindings via env
    await env.MY_KV.put("key", "value");
    // Extend execution beyond the response with ctx.waitUntil()
    ctx.waitUntil(logAnalytics(request));
    return new Response("Hello World");
  }
};
```
Key Parameters:
- `request`: Standard Request object (method, headers, body, URL)
- `env`: Bindings to resources (KV, databases, Durable Objects, secrets)
- `ctx`: Execution context with `waitUntil()` and `passThroughOnException()`
Handler Types
Workers support multiple event handler types:
- Fetch Handler: HTTP request/response (most common)
- Scheduled Handler: Cron-triggered execution
- Queue Handler: Process messages from Cloudflare Queues
- Tail Handler: Receive logs and diagnostics
- Email Handler: Process incoming emails
- Alarm Handler: Durable Object scheduled callbacks
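These handler types can coexist in one module: a Worker exports a single object with whichever handlers it needs, and the platform invokes the matching one per event source. A minimal sketch — the local type stubs stand in for the ambient types that `@cloudflare/workers-types` normally provides, and the handler bodies are placeholders:

```typescript
// Local type stubs so this sketch stands alone outside the Workers runtime;
// in a real project these ambient types come from @cloudflare/workers-types.
type ExecutionContext = { waitUntil(promise: Promise<unknown>): void };
type ScheduledController = { cron: string; scheduledTime: number };
type QueueMessage = { body: unknown; ack(): void };
type MessageBatch = { messages: QueueMessage[] };

export interface Env {}

const worker = {
  // HTTP traffic
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    return new Response("ok");
  },

  // Cron triggers
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext): Promise<void> {
    console.log("cron fired:", controller.cron);
  },

  // Queue consumer
  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext): Promise<void> {
    for (const message of batch.messages) message.ack();
  },
};

export default worker;
```

Event sources without a corresponding handler are simply never delivered to the Worker.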
Durable Objects for Queue Management
Architecture and Consistency Guarantees
Durable Objects provide:
- Global uniqueness: Each object has a unique ID accessible worldwide
- Co-located storage: Persistent SQLite storage lives with the compute
- Strong consistency: Transactional, serializable storage with no race conditions
- Single-threaded execution: One request at a time per object instance
- Geographic distribution: Objects provision near request origins
Key characteristics:
- Each Durable Object has exactly one active instance globally at any time
- All requests to a specific object ID route to the same instance
- Perfect for coordination, queues, WebSocket session management
Defining a Durable Object
```typescript
import { DurableObject } from "cloudflare:workers";

export class TaskQueue extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
  }

  async fetch(request: Request): Promise<Response> {
    // Handle incoming requests
    const url = new URL(request.url);
    if (url.pathname === "/enqueue") {
      return await this.enqueueTask(request);
    }
    return new Response("Not Found", { status: 404 });
  }

  async enqueueTask(request: Request): Promise<Response> {
    const task = await request.json();
    // Store in persistent storage
    const queue = await this.ctx.storage.get<Task[]>("queue") || [];
    queue.push({ ...task, id: crypto.randomUUID() });
    await this.ctx.storage.put("queue", queue);
    // Set an alarm to process the queue
    const alarmTime = Date.now() + 1000; // 1 second
    await this.ctx.storage.setAlarm(alarmTime);
    return Response.json({ success: true });
  }

  async alarm() {
    // Process queued tasks
    const queue = await this.ctx.storage.get<Task[]>("queue") || [];
    if (queue.length > 0) {
      const task = queue.shift()!;
      await this.processTask(task);
      await this.ctx.storage.put("queue", queue);
      // Schedule the next alarm if more tasks remain
      if (queue.length > 0) {
        await this.ctx.storage.setAlarm(Date.now() + 1000);
      }
    }
  }

  async processTask(task: Task): Promise<void> {
    // Application-specific task logic
    console.log("Processing task:", task.id);
  }
}
```
Storage API
The Durable Object Storage API provides strongly consistent, transactional storage:
```typescript
interface DurableObjectStorage {
  // Basic operations
  get<T>(key: string): Promise<T | undefined>;
  get<T>(keys: string[]): Promise<Map<string, T>>;
  put<T>(key: string, value: T): Promise<void>;
  put(entries: Record<string, any>): Promise<void>;
  delete(key: string): Promise<boolean>;
  delete(keys: string[]): Promise<number>;
  list<T>(options?: {
    start?: string;
    end?: string;
    prefix?: string;
    reverse?: boolean;
    limit?: number;
  }): Promise<Map<string, T>>;
  deleteAll(): Promise<void>;

  // Transactions
  transaction<T>(closure: (txn: DurableObjectTransaction) => Promise<T>): Promise<T>;

  // SQL (SQLite-backed storage)
  sql: DurableObjectSql;

  // Alarms
  getAlarm(): Promise<number | null>;
  setAlarm(scheduledTime: number): Promise<void>;
  deleteAlarm(): Promise<void>;
}
```
Key features:
- Automatic caching: Storage API includes built-in in-memory caching
- Atomic operations: Transactions ensure consistency
- SQLite backend: Full SQL support for complex queries
- 10 GB per object: Ample storage for queue state
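The `transaction()` method in the listing above is easiest to see with an example. A sketch that atomically moves a task ID between two stored lists — typed against a minimal subset of the storage interface so the logic stands alone (the `StorageLike` type and the `pending`/`done` key names are illustrative, not part of the Workers API):

```typescript
// Minimal subset of the Durable Object storage interface (illustrative).
interface StorageLike {
  get<T>(key: string): Promise<T | undefined>;
  put<T>(key: string, value: T): Promise<void>;
  transaction<T>(closure: (txn: StorageLike) => Promise<T>): Promise<T>;
}

// Atomically move taskId from the "pending" list to the "done" list.
// Everything inside the closure commits together, or not at all.
async function completeTask(storage: StorageLike, taskId: string): Promise<boolean> {
  return storage.transaction(async (txn) => {
    const pending = (await txn.get<string[]>("pending")) ?? [];
    const idx = pending.indexOf(taskId);
    if (idx === -1) return false; // nothing to do; transaction is a no-op
    pending.splice(idx, 1);
    const done = (await txn.get<string[]>("done")) ?? [];
    done.push(taskId);
    await txn.put("pending", pending);
    await txn.put("done", done);
    return true;
  });
}
```

Inside a Durable Object this would be called as `completeTask(this.ctx.storage, id)`; if any `put` in the closure fails, none of them commit.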
In-Memory State vs Persistent Storage
In-Memory State:
```typescript
export class Counter extends DurableObject {
  private count: number = 0; // In-memory, lost on eviction

  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    // Load from storage once during initialization
    ctx.blockConcurrencyWhile(async () => {
      const stored = await this.ctx.storage.get<number>("count");
      this.count = stored || 0;
    });
  }

  async increment(): Promise<number> {
    this.count++;
    // Persist to storage
    await this.ctx.storage.put("count", this.count);
    return this.count;
  }
}
```
When to use in-memory state:
- Frequently accessed values (avoid repeated storage reads)
- Temporary session data
- Performance-critical hot paths
When to use persistent storage:
- Critical data that must survive eviction
- Infrequently accessed data
- Large datasets
Best practices:
- Use `blockConcurrencyWhile()` in the constructor to load initial state
- Store state in instance variables (`this.value`), not globals
- The Storage API has built-in caching, so repeated `get()` calls are fast
- Combine both: cache frequently accessed values in memory
Alarms API for Scheduling
```typescript
export class ScheduledProcessor extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    // Check for an existing alarm
    const currentAlarm = await this.ctx.storage.getAlarm();
    if (currentAlarm === null) {
      // Schedule an alarm for 10 seconds from now
      await this.ctx.storage.setAlarm(Date.now() + 10000);
    }
    return Response.json({
      alarmScheduled: currentAlarm !== null,
      nextAlarm: currentAlarm
    });
  }

  async alarm(alarmInfo?: { retryCount: number; isRetry: boolean }) {
    if (alarmInfo?.isRetry) {
      console.log(`Retry attempt ${alarmInfo.retryCount}`);
    }
    // Process scheduled work
    await this.processBatchedTasks();
    // Optionally schedule the next alarm
    await this.ctx.storage.setAlarm(Date.now() + 60000); // 1 minute
  }

  async processBatchedTasks() {
    const tasks = await this.ctx.storage.get<Task[]>("pending") || [];
    for (const task of tasks) {
      await this.executeTask(task);
    }
    await this.ctx.storage.delete("pending");
  }
}
```
Alarm characteristics:
- At-least-once execution: Guaranteed delivery with automatic retries
- One alarm per object: Only a single scheduled alarm at a time
- Exponential backoff: 2-second initial delay, up to 6 retries
- Non-blocking: Only one `alarm()` executes at a time
Use cases:
- Batch processing to reduce costs
- Delayed job execution
- Distributed queue implementation
- Work coordination without incoming requests
Concurrency Control
```typescript
export class CriticalSection extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    // Block all requests during initialization (max 30 seconds)
    ctx.blockConcurrencyWhile(async () => {
      await this.loadComplexState();
    });
  }

  async loadComplexState() {
    // Complex initialization that must complete before handling requests
    const data = await this.ctx.storage.get("state");
    // Process data...
  }
}
```
`blockConcurrencyWhile(callback)`:
- Executes async callback while blocking other events
- 30-second timeout enforced
- Commonly used during constructor initialization
- Ensures state consistency before request handling
Accessing Durable Objects from Workers
```typescript
export interface Env {
  TASK_QUEUE: DurableObjectNamespace<TaskQueue>;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Get a Durable Object by name (deterministic ID)
    const id = env.TASK_QUEUE.idFromName("main-queue");
    // Or generate a new unique ID
    // const id = env.TASK_QUEUE.newUniqueId();
    // Or reconstruct from a serialized ID
    // const id = env.TASK_QUEUE.idFromString(idString);

    // Get a stub (reference to the Durable Object)
    const stub = env.TASK_QUEUE.get(id);
    // Call methods via RPC
    const result = await stub.enqueueTask({ name: "process" });
    // Or forward the entire request
    return await stub.fetch(request);
  }
};
```
ID generation strategies:
- `idFromName(name)`: Deterministic ID derived from a string (same name = same object)
- `newUniqueId()`: Random unique ID for a new object
- `idFromString(hexId)`: Reconstruct an ID from its serialized hex form
Location hints:
```typescript
const id = env.TASK_QUEUE.idFromName("queue");
// The location hint is passed when getting the stub, not when deriving the ID
const stub = env.TASK_QUEUE.get(id, {
  locationHint: "enam" // Eastern North America
});
```
Available regions: `wnam`, `enam`, `sam`, `weur`, `eeur`, `apac`, `oc`, `afr`, `me`
Queue Implementation Pattern
```typescript
export class DistributedQueue extends DurableObject {
  private processing: boolean = false;

  async enqueue(task: Task): Promise<void> {
    const queue = await this.ctx.storage.get<Task[]>("queue") || [];
    queue.push(task);
    await this.ctx.storage.put("queue", queue);
    // Trigger processing if not already running
    if (!this.processing) {
      await this.ctx.storage.setAlarm(Date.now());
    }
  }

  async dequeue(): Promise<Task | null> {
    const queue = await this.ctx.storage.get<Task[]>("queue") || [];
    if (queue.length === 0) return null;
    const task = queue.shift()!;
    await this.ctx.storage.put("queue", queue);
    return task;
  }

  async alarm() {
    this.processing = true;
    try {
      let task: Task | null;
      while ((task = await this.dequeue()) !== null) {
        await this.processTask(task);
      }
    } finally {
      this.processing = false;
    }
  }

  async processTask(task: Task): Promise<void> {
    // Execute task logic
    console.log("Processing task:", task.id);
  }
}
```
WebSocket Handling for Real-time Updates
WebSocketPair API
```typescript
export default {
  async fetch(request: Request): Promise<Response> {
    const upgradeHeader = request.headers.get("Upgrade");
    if (upgradeHeader !== "websocket") {
      return new Response("Expected WebSocket", { status: 426 });
    }

    // Create a WebSocket pair (two connected endpoints)
    const pair = new WebSocketPair();
    const [client, server] = [pair[0], pair[1]];

    // Accept the WebSocket on Cloudflare's network
    server.accept();

    // Handle messages
    server.addEventListener("message", (event) => {
      console.log("Received:", event.data);
      server.send(`Echo: ${event.data}`);
    });
    server.addEventListener("close", (event) => {
      console.log("WebSocket closed:", event.code, event.reason);
    });
    server.addEventListener("error", (event) => {
      console.error("WebSocket error:", event);
    });

    // Return the client side to the browser
    return new Response(null, {
      status: 101,
      webSocket: client,
    });
  }
};
```
Key methods:
- `accept()`: Start handling the WebSocket on Cloudflare's network
- `send(message)`: Send string or binary data (max 1 MiB)
- `close(code, reason)`: Terminate the connection
- `addEventListener(type, callback)`: Listen for `message`, `close`, `error`
Integration with Durable Objects
```typescript
export class WebSocketRoom extends DurableObject {
  private sessions: Map<string, WebSocket> = new Map();

  async fetch(request: Request): Promise<Response> {
    const upgradeHeader = request.headers.get("Upgrade");
    if (upgradeHeader !== "websocket") {
      return new Response("Expected WebSocket", { status: 426 });
    }

    const pair = new WebSocketPair();
    const [client, server] = [pair[0], pair[1]];

    // Generate a unique session ID
    const sessionId = crypto.randomUUID();

    // Accept and store the connection
    server.accept();
    this.sessions.set(sessionId, server);

    // Handle messages
    server.addEventListener("message", (event) => {
      this.broadcast(event.data, sessionId);
    });
    server.addEventListener("close", () => {
      this.sessions.delete(sessionId);
      this.broadcast(`User left (${this.sessions.size} online)`, sessionId);
    });

    // Send a welcome message
    server.send(JSON.stringify({
      type: "welcome",
      sessionId,
      userCount: this.sessions.size
    }));

    // Notify others
    this.broadcast(`User joined (${this.sessions.size} online)`, sessionId);

    return new Response(null, {
      status: 101,
      webSocket: client,
    });
  }

  broadcast(message: string, excludeSession?: string) {
    const payload = JSON.stringify({
      type: "message",
      data: message,
      timestamp: Date.now()
    });
    for (const [sessionId, socket] of this.sessions) {
      if (sessionId !== excludeSession) {
        try {
          socket.send(payload);
        } catch (err) {
          // Remove failed connections
          this.sessions.delete(sessionId);
        }
      }
    }
  }
}

// Worker that routes to the Durable Object
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const roomId = env.WEBSOCKET_ROOM.idFromName("main-room");
    const room = env.WEBSOCKET_ROOM.get(roomId);
    return room.fetch(request);
  }
};
```
WebSocket Hibernation API
Problem: Active WebSocket connections keep Durable Objects in memory, incurring duration charges even when idle.
Solution: WebSocket Hibernation API allows connections to persist while the Durable Object is evicted from memory.
```typescript
export class HibernatableRoom extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    const pair = new WebSocketPair();
    const [client, server] = [pair[0], pair[1]];

    // Accept the WebSocket with the Hibernation API
    this.ctx.acceptWebSocket(server, ["chat-room"]);

    return new Response(null, {
      status: 101,
      webSocket: client,
    });
  }

  // Called when a WebSocket message arrives (wakes from hibernation)
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    const data = typeof message === "string" ? message : new TextDecoder().decode(message);
    // Broadcast to all WebSockets with the tag
    const sockets = this.ctx.getWebSockets("chat-room");
    for (const socket of sockets) {
      socket.send(data);
    }
  }

  // Called when a WebSocket closes
  async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean) {
    console.log("WebSocket closed:", code, reason);
  }

  // Called on WebSocket error
  async webSocketError(ws: WebSocket, error: any) {
    console.error("WebSocket error:", error);
  }
}
```
Hibernation API methods:
- `ctx.acceptWebSocket(ws, tags?)`: Register a WebSocket for hibernation
- `ctx.getWebSockets(tag?)`: Retrieve active WebSockets by tag
- `ctx.setWebSocketAutoResponse(pair)`: Auto-respond to pings (prevents wake-ups)
- `ctx.getTags(ws)`: Get the tags for a WebSocket
- `ctx.setHibernatableWebSocketEventTimeout(ms)`: Set max event runtime (0-604,800,000 ms)
Benefits:
- Reduced duration charges for idle connections
- Scales to millions of connections
- Automatic wake-up on message receipt
- Tag-based connection grouping
Broadcasting Patterns
Echo to sender:
```typescript
server.addEventListener("message", (event) => {
  server.send(event.data);
});
```
Broadcast to all except sender:
```typescript
broadcast(message: string, excludeSession: string) {
  for (const [sessionId, socket] of this.sessions) {
    if (sessionId !== excludeSession) {
      socket.send(message);
    }
  }
}
```
Broadcast to all including sender:
```typescript
broadcast(message: string) {
  for (const socket of this.sessions.values()) {
    socket.send(message);
  }
}
```
Tag-based broadcasting (Hibernation API):
```typescript
// Send to sockets with a specific tag
const adminSockets = this.ctx.getWebSockets("admin");
for (const socket of adminSockets) {
  socket.send("Admin message");
}
```
KV Storage for Session Management
API Overview
Workers KV is a global, eventually consistent, key-value store optimized for high-read workloads with low latency.
Consistency model:
- Eventually consistent: Writes propagate globally within seconds
- Read-heavy optimized: Excellent for caching, sessions, configuration
- Not suitable for: Strong consistency requirements, write-heavy workloads
KV API Methods
```typescript
interface KVNamespace {
  // Read operations
  get(key: string, options?: KVGetOptions): Promise<string | null>;
  get(key: string, type: "text"): Promise<string | null>;
  get(key: string, type: "json"): Promise<any>;
  get(key: string, type: "arrayBuffer"): Promise<ArrayBuffer | null>;
  get(key: string, type: "stream"): Promise<ReadableStream | null>;

  // Write operations
  put(
    key: string,
    value: string | ArrayBuffer | ReadableStream,
    options?: KVPutOptions
  ): Promise<void>;

  // Delete operations
  delete(key: string): Promise<void>;

  // List operations
  list(options?: KVListOptions): Promise<KVListResult>;

  // Metadata
  getWithMetadata<Metadata>(
    key: string,
    options?: KVGetOptions
  ): Promise<{ value: string | null; metadata: Metadata | null }>;
}

interface KVPutOptions {
  expiration?: number;    // Unix timestamp (seconds)
  expirationTtl?: number; // Seconds from now
  metadata?: any;         // Custom metadata (max 1024 bytes)
}

interface KVListOptions {
  prefix?: string;
  limit?: number;
  cursor?: string;
}
```
Session Management Pattern
```typescript
export interface Env {
  SESSIONS: KVNamespace;
}

interface SessionData {
  userId: string;
  email: string;
  createdAt: number;
  lastActivity: number;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Get the session ID from the cookie
    const sessionId = getSessionCookie(request);
    if (!sessionId) {
      return new Response("Unauthorized", { status: 401 });
    }

    // Retrieve the session from KV
    const session = await env.SESSIONS.get<SessionData>(
      `session:${sessionId}`,
      "json"
    );
    if (!session) {
      return new Response("Session expired", { status: 401 });
    }

    // Update last activity and extend the TTL
    session.lastActivity = Date.now();
    await env.SESSIONS.put(
      `session:${sessionId}`,
      JSON.stringify(session),
      { expirationTtl: 3600 } // 1 hour
    );

    return Response.json({ user: session.userId });
  }
};

function getSessionCookie(request: Request): string | null {
  const cookie = request.headers.get("Cookie");
  if (!cookie) return null;
  const match = cookie.match(/session=([^;]+)/);
  return match ? match[1] : null;
}
```
Caching Pattern
```typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const cacheKey = `cache:${url.pathname}`;

    // Try the cache first
    const cached = await env.CACHE_KV.get(cacheKey, "json");
    if (cached) {
      return Response.json(cached, {
        headers: { "X-Cache": "HIT" }
      });
    }

    // Cache miss - fetch from origin
    const data = await fetchFromAPI(url.pathname);

    // Store in KV with a 5 minute TTL
    await env.CACHE_KV.put(
      cacheKey,
      JSON.stringify(data),
      { expirationTtl: 300 }
    );

    return Response.json(data, {
      headers: { "X-Cache": "MISS" }
    });
  }
};
```
Metadata Pattern
```typescript
// Store a value with metadata
await env.FILES.put(
  "document.pdf",
  pdfContent,
  {
    metadata: {
      uploadedBy: "[email protected]",
      uploadDate: Date.now(),
      fileSize: pdfContent.byteLength,
      contentType: "application/pdf"
    },
    expirationTtl: 86400 // 24 hours
  }
);

// Retrieve with metadata
const { value, metadata } = await env.FILES.getWithMetadata<FileMetadata>(
  "document.pdf",
  "arrayBuffer"
);
if (value && metadata) {
  console.log(`File uploaded by ${metadata.uploadedBy}`);
}
```
List Keys Pattern
```typescript
// List all keys with a prefix
async function listUserSessions(env: Env, userId: string): Promise<string[]> {
  const sessions: string[] = [];
  let cursor: string | undefined;
  do {
    const result = await env.SESSIONS.list({
      prefix: `session:${userId}:`,
      limit: 1000,
      cursor
    });
    sessions.push(...result.keys.map(k => k.name));
    cursor = result.list_complete ? undefined : result.cursor;
  } while (cursor);
  return sessions;
}
```
TTL and Expiration
```typescript
// Expire at a specific timestamp
await env.KV.put("key", "value", {
  expiration: Math.floor(Date.now() / 1000) + 3600 // 1 hour from now
});

// Expire after a duration
await env.KV.put("key", "value", {
  expirationTtl: 3600 // 1 hour
});
```
Best Practices
- Use for read-heavy workloads: KV is optimized for high read volume
- Namespace with prefixes: Organize keys with prefixes (`session:`, `cache:`, `config:`)
- Set appropriate TTLs: Automatic expiration reduces manual cleanup
- Leverage metadata: Store small amounts of contextual data
- Handle eventual consistency: Don't rely on immediate read-after-write consistency
- Batch operations: Use `list()` with pagination for bulk operations
- Keep values small: Optimize for quick retrieval (under 25 KB is ideal)
Worker-to-Worker Communication
Service Bindings Overview
Service bindings enable Workers to call other Workers directly without HTTP round-trips or public URLs.
Benefits:
- Zero latency overhead: Workers run on the same thread by default
- No additional cost: Inter-Worker calls don't count as billable requests
- Type safety: Full TypeScript support with RPC
- Smart Placement: Optional geographic optimization
RPC vs HTTP Communication
RPC (Recommended):
- Type-safe method calls
- Direct function invocation
- Complex object passing
- IDE autocomplete support
HTTP:
- Traditional REST patterns
- Standard request/response
- Framework compatibility
- External service compatibility
RPC Implementation
Service Worker (provides functionality):
```typescript
import { WorkerEntrypoint } from "cloudflare:workers";

export interface Env {
  DATABASE: D1Database;
}

// Export RPC methods via WorkerEntrypoint
export class TaskService extends WorkerEntrypoint<Env> {
  async getTask(taskId: string): Promise<Task | null> {
    const result = await this.env.DATABASE
      .prepare("SELECT * FROM tasks WHERE id = ?")
      .bind(taskId)
      .first<Task>();
    return result;
  }

  async createTask(data: CreateTaskData): Promise<Task> {
    const id = crypto.randomUUID();
    await this.env.DATABASE
      .prepare("INSERT INTO tasks (id, title, status) VALUES (?, ?, ?)")
      .bind(id, data.title, "pending")
      .run();
    return { id, ...data, status: "pending" };
  }

  async updateTaskStatus(taskId: string, status: TaskStatus): Promise<void> {
    await this.env.DATABASE
      .prepare("UPDATE tasks SET status = ? WHERE id = ?")
      .bind(status, taskId)
      .run();
  }
}

export default TaskService;
```
Consumer Worker (calls service):
```typescript
export interface Env {
  TASK_SERVICE: Service<TaskService>; // Type-safe binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/tasks") {
      // Direct RPC method call with type safety
      const task = await env.TASK_SERVICE.getTask("task-123");
      return Response.json(task);
    }

    if (url.pathname === "/tasks/create") {
      const data = await request.json<CreateTaskData>();
      // Type-safe method call
      const newTask = await env.TASK_SERVICE.createTask(data);
      return Response.json(newTask, { status: 201 });
    }

    return new Response("Not Found", { status: 404 });
  }
};
```
Configuration in wrangler.toml:
```toml
# Consumer Worker
name = "api-worker"

[[services]]
binding = "TASK_SERVICE"
service = "task-service-worker"
entrypoint = "TaskService" # Specify the WorkerEntrypoint class

# Service Worker
# (in a separate wrangler.toml for task-service-worker)
name = "task-service-worker"
```
HTTP Service Binding
```typescript
export interface Env {
  AUTH_SERVICE: Fetcher; // HTTP-based service binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Forward the request to another Worker
    const authResponse = await env.AUTH_SERVICE.fetch(
      new Request("http://auth/verify", {
        method: "POST",
        headers: { "Authorization": request.headers.get("Authorization") || "" }
      })
    );

    if (!authResponse.ok) {
      return new Response("Unauthorized", { status: 401 });
    }

    const user = await authResponse.json();
    return Response.json({ user });
  }
};
```
Context Passing with ctx.props
```typescript
// Frontend Worker
export interface Env {
  DOC_WORKER: Service<DocumentWorker>;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Pass context to the service worker
    ctx.props = {
      requestId: crypto.randomUUID(),
      userId: "user-123"
    };
    return env.DOC_WORKER.fetch(request);
  }
};

// Document Worker
export class DocumentWorker extends WorkerEntrypoint<Env> {
  async fetch(request: Request): Promise<Response> {
    // Access props passed from the caller
    const { requestId, userId } = this.ctx.props as {
      requestId: string;
      userId: string;
    };
    console.log(`Processing request ${requestId} for user ${userId}`);
    return Response.json({ success: true });
  }
}
```
TypeScript Type Safety
```typescript
// types.ts (shared types)
export interface TaskService {
  getTask(id: string): Promise<Task | null>;
  createTask(data: CreateTaskData): Promise<Task>;
  updateTaskStatus(id: string, status: TaskStatus): Promise<void>;
  listTasks(filters: TaskFilters): Promise<Task[]>;
}

export interface Task {
  id: string;
  title: string;
  status: TaskStatus;
  createdAt: number;
}

// service-worker.ts
import { WorkerEntrypoint } from "cloudflare:workers";
import type { TaskService as ITaskService } from "./types";

export class TaskService extends WorkerEntrypoint<Env> implements ITaskService {
  async getTask(id: string): Promise<Task | null> {
    // Implementation with full type checking
    return null; // placeholder
  }
  // ... other methods
}

// consumer-worker.ts
import type { TaskService } from "./service-worker";

export interface Env {
  TASK_SERVICE: Service<TaskService>;
}

// Now env.TASK_SERVICE has full TypeScript autocomplete!
```
Performance Characteristics
- Same-thread execution: By default, service bindings run on the same isolate
- No serialization: Objects passed directly without JSON encoding
- No network latency: Direct function calls within the runtime
- Smart Placement: Optional geographic optimization for multi-region deployments
Environment Variables and Secrets
Configuration via env Parameter
All configuration is accessed through the env parameter in handler functions:
```typescript
export interface Env {
  // Environment variables (plain text)
  ENVIRONMENT: string;
  API_BASE_URL: string;

  // Secrets (encrypted)
  ANTHROPIC_API_KEY: string;
  DATABASE_URL: string;
  GITHUB_CLIENT_SECRET: string;

  // Bindings
  KV: KVNamespace;
  DB: D1Database;
  QUEUE: Queue;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Access environment variables
    const apiUrl = env.API_BASE_URL;
    // Access secrets (same API, but encrypted at rest)
    const apiKey = env.ANTHROPIC_API_KEY;

    // Use in API calls
    const response = await fetch(`${apiUrl}/endpoint`, {
      headers: {
        "Authorization": `Bearer ${apiKey}`
      }
    });
    return response;
  }
};
```
wrangler.toml Configuration
```toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-01-01"

# Environment variables (committed to version control)
[vars]
ENVIRONMENT = "production"
API_BASE_URL = "https://api.example.com"
LOG_LEVEL = "info"

# KV Namespace bindings
[[kv_namespaces]]
binding = "KV"
id = "your-kv-namespace-id"

# Durable Object bindings
[[durable_objects.bindings]]
name = "TASK_QUEUE"
class_name = "TaskQueue"

# Service bindings
[[services]]
binding = "AUTH_SERVICE"
service = "auth-worker"

# Queue bindings
[[queues.producers]]
binding = "TASK_QUEUE"
queue = "task-processing-queue"

[[queues.consumers]]
queue = "task-processing-queue"
max_batch_size = 10
max_batch_timeout = 30
```
Secrets Management
Setting secrets via Wrangler CLI:
```bash
# Set a secret (encrypted, not in wrangler.toml)
wrangler secret put ANTHROPIC_API_KEY
# You'll be prompted to enter the value
# The secret is encrypted and stored separately from your code

# List secrets (values not shown)
wrangler secret list

# Delete a secret
wrangler secret delete ANTHROPIC_API_KEY
```
Accessing secrets in code:
```typescript
export interface Env {
  ANTHROPIC_API_KEY: string;
  GITHUB_TOKEN: string;
  ENCRYPTION_KEY: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Secrets are accessed identically to environment variables
    const apiKey = env.ANTHROPIC_API_KEY;

    // Never log secrets!
    console.log("API key loaded"); // Good
    // console.log(apiKey); // BAD - don't log secrets

    return Response.json({ success: true });
  }
};
```
Per-environment configuration:
```toml
# Development environment
[env.development]
vars = { ENVIRONMENT = "development", API_BASE_URL = "http://localhost:3000" }

# Staging environment
[env.staging]
vars = { ENVIRONMENT = "staging", API_BASE_URL = "https://staging.example.com" }

# Production environment
[env.production]
vars = { ENVIRONMENT = "production", API_BASE_URL = "https://api.example.com" }
```
Deploy to a specific environment:
```bash
wrangler deploy --env production
```
Best Practices
- Never commit secrets: Use the `wrangler secret` CLI, not `wrangler.toml`
- Type your Env interface: Provides autocomplete and type checking
- Use per-environment configs: Separate dev/staging/production settings
- Validate required vars: Check for missing configuration on startup
- Don't log secrets: Avoid accidentally exposing sensitive data
- Rotate secrets regularly: Update API keys and tokens periodically
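The "validate required vars" point can be made concrete with a small helper. A sketch — the key names in the usage comment are examples, not part of any API:

```typescript
// Fail fast on missing configuration instead of erroring mid-request.
// The error names the missing keys but never includes their values.
function assertRequiredVars(env: Record<string, unknown>, keys: string[]): void {
  const missing = keys.filter((key) => env[key] === undefined || env[key] === "");
  if (missing.length > 0) {
    throw new Error(`Missing required configuration: ${missing.join(", ")}`);
  }
}

// Usage at the top of a fetch handler (example key names):
// assertRequiredVars(env as unknown as Record<string, unknown>,
//   ["ANTHROPIC_API_KEY", "API_BASE_URL"]);
```

Throwing early turns a vague downstream fetch failure into a clear 500 with an actionable message in the logs.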
Scheduling and Cron Triggers
Scheduled Handler Implementation
```typescript
export interface Env {
  TASK_QUEUE: Queue;
  KV: KVNamespace;
}

export default {
  // HTTP handler
  async fetch(request: Request, env: Env): Promise<Response> {
    return Response.json({ status: "ok" });
  },

  // Scheduled handler (cron trigger)
  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext
  ): Promise<void> {
    // Access the cron pattern
    console.log("Cron:", controller.cron);
    // Scheduled-at timestamp
    console.log("Scheduled at:", controller.scheduledTime);
    // Perform maintenance tasks
    ctx.waitUntil(performMaintenance(env));
  }
};

async function performMaintenance(env: Env): Promise<void> {
  // Clean up expired sessions
  await cleanupExpiredSessions(env.KV);
  // Queue background tasks
  await env.TASK_QUEUE.send({
    type: "daily-report",
    timestamp: Date.now()
  });
  console.log("Maintenance completed");
}
```
Cron Configuration
```toml
name = "scheduled-worker"
main = "src/index.ts"

# Multiple cron triggers supported
[triggers]
crons = [
  "0 */6 * * *",  # Every 6 hours
  "0 0 * * *",    # Daily at midnight UTC
  "0 0 * * 1",    # Weekly on Monday at midnight
  "*/15 * * * *"  # Every 15 minutes
]
```
Cron syntax:
```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```
Common patterns:
- `0 0 * * *` - Daily at midnight UTC
- `0 */4 * * *` - Every 4 hours
- `*/30 * * * *` - Every 30 minutes
- `0 9 * * 1-5` - Weekdays at 9 AM UTC
- `0 0 1 * *` - First day of each month
Integration with Queues
```typescript
export default {
  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext
  ): Promise<void> {
    // Enqueue tasks for asynchronous processing
    const tasks = await generateDailyTasks();
    for (const task of tasks) {
      await env.TASK_QUEUE.send(task);
    }
    console.log(`Enqueued ${tasks.length} tasks`);
  }
};
```
Integration with Workflows
```typescript
export default {
  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext
  ): Promise<void> {
    // Trigger a long-running workflow
    const workflow = await env.WORKFLOW.create({
      id: `daily-${Date.now()}`,
      params: { date: new Date().toISOString() }
    });
    console.log("Workflow started:", workflow.id);
  }
};
```
Local Testing
```bash
# Start the dev server with scheduled event testing
wrangler dev --test-scheduled

# Trigger the scheduled handler manually
curl "http://localhost:8787/__scheduled?cron=0+0+*+*+*"
```
Use Cases
- Data aggregation: Periodic rollups and analytics
- Cleanup tasks: Delete expired data, prune logs
- Report generation: Daily/weekly automated reports
- External API polling: Check for updates from third-party services
- Background maintenance: Database optimization, cache warming
- Reminder notifications: Scheduled alerts and notifications
- Batch processing: Trigger data processing pipelines
Cloudflare Queues
Cloudflare Queues provide guaranteed message delivery with automatic retries, ideal for asynchronous task processing.
Producer API
typescript
export interface Env {
TASK_QUEUE: Queue;
}
// Send individual message
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const task = await request.json();
// Send to queue
await env.TASK_QUEUE.send(task, {
contentType: "json", // "text" | "bytes" | "json" | "v8"
delaySeconds: 10 // Delay delivery (0-43200 seconds)
});
return Response.json({ queued: true });
}
};
// Send batch
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const tasks = await request.json<Task[]>();
// Send up to 100 messages, max 256 KB total
await env.TASK_QUEUE.sendBatch(
tasks.map(task => ({
body: task,
contentType: "json" as const,
delaySeconds: 0
}))
);
return Response.json({ queued: tasks.length });
}
};
Consumer API
typescript
export interface Env {
// No binding needed for consumer
}
export default {
async queue(
batch: MessageBatch<Task>,
env: Env,
ctx: ExecutionContext
): Promise<void> {
console.log(`Processing batch of ${batch.messages.length} messages`);
for (const message of batch.messages) {
try {
await processTask(message.body);
// Explicit acknowledgment (optional, auto-acks on success)
message.ack();
} catch (error) {
console.error(`Task ${message.id} failed:`, error);
// Retry with delay
message.retry({
delaySeconds: Math.pow(2, message.attempts) // Exponential backoff
});
}
}
}
};
async function processTask(task: Task): Promise<void> {
console.log("Processing:", task.id);
// Task logic here
}
Message Structure
typescript
interface Message<Body = unknown> {
readonly id: string; // Unique message ID
readonly timestamp: Date; // When message was sent
readonly body: Body; // Message payload
readonly attempts: number; // Delivery attempt count
ack(): void; // Acknowledge successful processing
retry(options?: { // Retry with delay
delaySeconds?: number;
}): void;
}
interface MessageBatch<Body = unknown> {
readonly queue: string; // Queue name
readonly messages: Message<Body>[];
ackAll(): void; // Acknowledge all messages
retryAll(options?: { // Retry all messages
delaySeconds?: number;
}): void;
}
Configuration
toml
name = "queue-producer"
# Producer binding
[[queues.producers]]
binding = "TASK_QUEUE"
queue = "task-processing-queue"
# Consumer configuration
[[queues.consumers]]
queue = "task-processing-queue"
max_batch_size = 10 # Messages per batch (1-100)
max_batch_timeout = 30 # Seconds to wait for batch (0-30)
max_retries = 3 # Retry attempts before DLQ
max_concurrency = 1 # Concurrent batches (1-10)
dead_letter_queue = "failed-tasks" # Optional DLQ
Retry Pattern with Exponential Backoff
typescript
export default {
async queue(batch: MessageBatch<Task>, env: Env): Promise<void> {
for (const message of batch.messages) {
try {
await processTaskWithTimeout(message.body, 30000);
message.ack();
} catch (error) {
const isRetryable = error instanceof NetworkError;
if (isRetryable && message.attempts < 5) {
// Exponential backoff: 2s, 4s, 8s, 16s, 32s
const delaySeconds = Math.min(Math.pow(2, message.attempts), 300);
console.log(`Retrying in ${delaySeconds}s (attempt ${message.attempts + 1})`);
message.retry({ delaySeconds });
} else {
// Give up: ack so the message is not redelivered.
// Note: an explicit ack bypasses the configured DLQ; to route the
// message to the dead letter queue instead, let max_retries exhaust.
console.error(`Task ${message.id} permanently failed after ${message.attempts} attempts`);
message.ack();
// Optionally log to monitoring system
await logFailedTask(env, message);
}
}
}
}
};
Dead Letter Queue
toml
# Main queue consumer
[[queues.consumers]]
queue = "tasks"
max_retries = 3
dead_letter_queue = "failed-tasks"
# DLQ consumer for failed messages
[[queues.consumers]]
queue = "failed-tasks"
max_batch_size = 1
typescript
// DLQ consumer for manual intervention
export default {
async queue(batch: MessageBatch<FailedTask>, env: Env): Promise<void> {
for (const message of batch.messages) {
// Log to monitoring system
await logToSentry({
message: "Task permanently failed",
task: message.body,
attempts: message.attempts
});
// Store for manual review
await env.FAILED_TASKS_KV.put(
message.id,
JSON.stringify({
task: message.body,
attempts: message.attempts,
timestamp: message.timestamp
})
);
message.ack();
}
}
};
Limits
- Message size: 128 KB per message
- Batch size: 100 messages or 256 KB total
- Delay: 0-43,200 seconds (12 hours)
- Retention: Unprocessed messages are retained for up to 4 days by default
- Throughput: High throughput with automatic scaling
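A producer that exceeds the batch limits above will have its sendBatch call rejected, so it can help to pre-chunk messages. A minimal sketch (the helper name and JSON-size approximation are my own, not a Queues API):

```typescript
// Split messages into sendBatch-sized chunks respecting the documented
// limits: at most 100 messages or 256 KB (serialized) per batch.
const MAX_BATCH_MESSAGES = 100;
const MAX_BATCH_BYTES = 256 * 1024;

function chunkForSendBatch<T>(messages: T[]): T[][] {
  const enc = new TextEncoder();
  const batches: T[][] = [];
  let current: T[] = [];
  let currentBytes = 0;
  for (const msg of messages) {
    // Approximate the serialized size of this message
    const size = enc.encode(JSON.stringify(msg)).length;
    const wouldOverflow =
      current.length >= MAX_BATCH_MESSAGES ||
      (currentBytes + size > MAX_BATCH_BYTES && current.length > 0);
    if (wouldOverflow) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(msg);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each chunk can then be passed to `env.TASK_QUEUE.sendBatch(...)` in turn.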
Queues vs Durable Objects
Use Queues when:
- Need guaranteed delivery and retries
- Processing can be delayed (asynchronous)
- Batch processing is efficient
- Order doesn't matter (messages processed concurrently)
Use Durable Objects when:
- Need strong consistency and coordination
- Real-time processing required
- Order matters (single-threaded execution)
- Complex state management needed
Workflows for Durable Execution
Workflows enable building multi-step, long-running processes with automatic state persistence and retries.
Key Differences from Workers
| Feature | Workers | Workflows |
|---|---|---|
| Duration | Request-scoped (CPU time limits) | Days or weeks |
| State | Ephemeral | Automatically persisted |
| Retries | Manual | Automatic |
| Use case | HTTP requests, short tasks | Multi-step processes |
Workflow Definition
typescript
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from "cloudflare:workers";
export interface Env {
TASK_WORKFLOW: Workflow;
}
interface TaskWorkflowParams {
taskId: string;
userId: string;
}
export class TaskWorkflow extends WorkflowEntrypoint<Env, TaskWorkflowParams> {
async run(event: WorkflowEvent<TaskWorkflowParams>, step: WorkflowStep) {
const { taskId, userId } = event.payload;
// Step 1: Elicit requirements
const requirements = await step.do("elicit-requirements", async () => {
return await this.elicitRequirements(taskId);
});
// Step 2: Generate plan
const plan = await step.do("generate-plan", async () => {
return await this.generatePlan(taskId, requirements);
});
// Step 3: Wait for approval (human-in-the-loop)
const approval = await step.waitForEvent<ApprovalEvent>("approval", {
timeout: "24 hours"
});
if (!approval.approved) {
return { status: "rejected", reason: approval.reason };
}
// Step 4: Execute implementation
const implementation = await step.do("implement", async () => {
return await this.implementTask(taskId, plan);
});
// Step 5: Run validation
const validation = await step.do("validate", async () => {
return await this.validateImplementation(taskId, implementation);
});
// Step 6: Sleep before final check
await step.sleep("wait-for-ci", "5 minutes");
// Step 7: Final verification
const ciResult = await step.do("check-ci", async () => {
return await this.checkCIStatus(taskId);
});
return {
status: ciResult.passed ? "completed" : "failed",
taskId,
results: {
requirements,
plan,
implementation,
validation,
ci: ciResult
}
};
}
async elicitRequirements(taskId: string): Promise<Requirements> {
// AI agent interaction
return { /* ... */ };
}
// ... other methods
}
Triggering Workflows
typescript
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const { taskId, userId } = await request.json();
// Start workflow
const instance = await env.TASK_WORKFLOW.create({
params: { taskId, userId }
});
return Response.json({
workflowId: instance.id,
status: "started"
});
}
};
Workflow Events (Human-in-the-Loop)
typescript
// Send event to waiting workflow
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const { workflowId, approved, reason } = await request.json();
const instance = await env.TASK_WORKFLOW.get(workflowId);
await instance.sendEvent("approval", {
approved,
reason
});
return Response.json({ success: true });
}
};
State Persistence
Workflows automatically persist state between steps:
- No manual checkpointing required
- Automatic retries on failures
- Resume from last completed step
- Survive infrastructure failures
Use Cases for AI Agents
Multi-phase agent execution:
- Elicitation → Planning → Implementation → Validation
- State persisted between phases
- Human review checkpoints with waitForEvent()
Long-running analysis:
- Data collection spanning hours
- Periodic progress updates
- Automatic retries on transient failures
Orchestration:
- Coordinate multiple specialized agents
- Chain agent outputs as inputs
- Parallel agent execution with step.parallel()
Scheduled agent tasks:
- Periodic data processing
- Scheduled report generation
- Time-delayed follow-ups
Error Handling and Logging
Console Logging
typescript
export default {
async fetch(request: Request, env: Env): Promise<Response> {
// Basic logging
console.log("Request received:", request.url);
console.info("Processing task");
console.warn("Rate limit approaching");
console.error("Failed to process:", new Error("example")); // in practice, pass the caught error
// Structured logging
console.log(JSON.stringify({
level: "info",
timestamp: Date.now(),
requestId: crypto.randomUUID(),
path: new URL(request.url).pathname,
method: request.method
}));
return Response.json({ success: true });
}
};
Error Handling Patterns
typescript
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
try {
const result = await processRequest(request, env);
return Response.json(result);
} catch (error) {
// Log error details
console.error("Request failed:", {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
url: request.url,
method: request.method
});
// Return appropriate error response
if (error instanceof ValidationError) {
return Response.json(
{ error: error.message },
{ status: 400 }
);
}
if (error instanceof AuthenticationError) {
return Response.json(
{ error: "Unauthorized" },
{ status: 401 }
);
}
// Generic error response
return Response.json(
{ error: "Internal server error" },
{ status: 500 }
);
}
}
};
// Custom error classes
class ValidationError extends Error {
constructor(message: string) {
super(message);
this.name = "ValidationError";
}
}
class AuthenticationError extends Error {
constructor(message: string) {
super(message);
this.name = "AuthenticationError";
}
}
ctx.passThroughOnException()
typescript
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
// Forward to origin if Worker throws
ctx.passThroughOnException();
// Your Worker logic
return processRequest(request, env);
}
};
ctx.waitUntil() for Background Tasks
typescript
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
// Respond immediately
const response = Response.json({ success: true });
// Continue processing after response sent
ctx.waitUntil((async () => {
try {
await logAnalytics(request, env);
await updateCache(request, env);
} catch (error) {
console.error("Background task failed:", error);
}
})());
return response;
}
};
Workers Logpush
Export logs to external systems:
- Cloudflare R2
- Amazon S3
- Google Cloud Storage
- Datadog
- Splunk
- New Relic
Configuration (via Dashboard or API):
- Select destination service
- Configure authentication
- Filter log fields
- Set sampling rate
Tail Workers (Beta)
Process logs from other Workers in real-time:
typescript
export default {
async tail(events: TraceItem[], env: Env): Promise<void> {
for (const event of events) {
// Filter errors
if (event.outcome === "exception" || event.outcome === "exceededCpu") {
// Send to monitoring system
await sendToSentry({
error: event.exceptions[0],
scriptName: event.scriptName,
timestamp: event.eventTimestamp
});
}
// Log all console messages
for (const log of event.logs) {
console.log(`[${event.scriptName}] ${log.message}`);
}
}
}
};
Monitoring Best Practices
- Structured logging: Use JSON for machine-readable logs
- Request IDs: Track requests across services
- Error context: Include stack traces and relevant data
- Sampling: Log subset of requests in high-traffic scenarios
- External monitoring: Integrate with Sentry, Datadog, etc.
- Metrics: Track success/failure rates, latency, throughput
- Alerting: Set up alerts for error rate spikes
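The structured-logging and request-ID practices above can be combined in a tiny helper. A sketch (the helper name and field layout are my own convention, not a Workers API):

```typescript
// Build a machine-readable log line carrying level, timestamp, and a
// request ID so entries can be correlated across services.
interface LogEntry {
  level: "info" | "warn" | "error";
  timestamp: number;
  requestId: string;
  message: string;
  [key: string]: unknown;
}

function makeLogEntry(
  level: LogEntry["level"],
  requestId: string,
  message: string,
  extra: Record<string, unknown> = {}
): string {
  const entry: LogEntry = {
    level,
    timestamp: Date.now(),
    requestId,
    message,
    ...extra,
  };
  return JSON.stringify(entry);
}
```

In a Worker, the request ID would typically come from `crypto.randomUUID()` at the start of `fetch`, and the result would be passed to `console.log`.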
Performance Limits and Quotas
Worker Execution Limits
| Limit | Free Tier | Paid Tier | Notes |
|---|---|---|---|
| CPU Time | 10 ms | 30 s (default), configurable to 5 minutes | Per request; excludes time spent awaiting I/O |
| Duration | No limit | No limit | Wall-clock time, but CPU time enforced |
| Memory | 128 MB | 128 MB | Per Worker instance |
| Script Size | 1 MB | 10 MB | After compression |
| Environment Variables | 64 KB | 5 KB per var, 64 KB total | - |
Request Limits
| Limit | Free Tier | Paid Tier |
|---|---|---|
| Daily Requests | 100,000 | Unlimited (pay-per-use) |
| Burst Rate | 1,000 req/min | Unlimited |
| Request Size | 100 MB | 500 MB |
| Response Size | 100 MB | 500 MB |
| Subrequests | 50 per request | 1,000 per request |
| Subrequest Duration | 30 seconds | 30 seconds |
Durable Objects Limits
| Limit | Value | Notes |
|---|---|---|
| Storage per Object | 10 GB | SQLite-backed storage |
| Storage per Account | 5 GB (Free), Unlimited (Paid) | - |
| CPU Time | 30s (default), 5 min (configurable) | Resets per HTTP/WebSocket message |
| Request Throughput | ~1,000 req/sec per object | Soft limit |
| WebSocket Message Size | 1 MiB | Received messages |
| Key/Value Size | 2 MB combined | Legacy KV storage |
| SQL Row Size | 2 MB | SQLite backend |
| SQL Columns | 100 per table | - |
| SQL Statement Length | 100 KB | - |
| Durable Object Classes | 100 (Free), 500 (Paid) | - |
KV Limits
| Limit | Value |
|---|---|
| Key Size | 512 bytes (UTF-8) |
| Value Size | 25 MB |
| Metadata Size | 1,024 bytes |
| Keys per Namespace | Unlimited |
| List Operations | 1,000 keys per call |
| Write Rate | 1 write/sec per key (eventually consistent) |
| Read Rate | Unlimited |
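A pre-flight check against the size limits above can catch oversized writes before they fail at the KV API. A minimal sketch (the validator is hypothetical; only the limit values come from the table):

```typescript
// KV limits from the table above: 512-byte UTF-8 keys, 25 MB values.
const KV_MAX_KEY_BYTES = 512;
const KV_MAX_VALUE_BYTES = 25 * 1024 * 1024;

// Returns a list of violations; empty means the entry is writable.
function validateKvEntry(key: string, value: string): string[] {
  const errors: string[] = [];
  const enc = new TextEncoder();
  if (enc.encode(key).length > KV_MAX_KEY_BYTES) {
    errors.push(`key exceeds ${KV_MAX_KEY_BYTES} bytes (UTF-8)`);
  }
  if (enc.encode(value).length > KV_MAX_VALUE_BYTES) {
    errors.push(`value exceeds ${KV_MAX_VALUE_BYTES} bytes`);
  }
  return errors;
}
```

Note that byte length, not string length, is what matters: multi-byte UTF-8 characters make a key larger than its character count suggests.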
Queue Limits
| Limit | Value |
|---|---|
| Message Size | 128 KB |
| Batch Size | 100 messages or 256 KB |
| Max Delay | 43,200 seconds (12 hours) |
| Max Batch Timeout | 30 seconds |
| Max Concurrency | 10 concurrent batches |
Miscellaneous Limits
| Limit | Free Tier | Paid Tier |
|---|---|---|
| Routes per Zone | 1,000 | 1,000 |
| Custom Domains per Zone | - | 100 |
| Simultaneous Connections | - | - |
| WebSocket Connections | No hard limit | No hard limit |
Quota Recommendations
For MonoTask migration:
- Paid tier required: AI agent execution likely exceeds 10ms CPU time
- Configure CPU time: Set to 5 minutes for long-running agent tasks
- Durable Objects: Use for task queue management and state
- Queues: Batch task processing to optimize throughput
- KV: Session management and caching
- Monitor usage: Track CPU time and request counts
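The 5-minute CPU recommendation above maps to a single wrangler.toml setting; a sketch (the surrounding values are placeholders, but `cpu_ms` is the documented key, expressed in milliseconds):

```toml
name = "monotask-agents"
main = "src/index.ts"

# Raise the per-request CPU limit for long-running agent tasks
# (requires the Workers paid plan)
[limits]
cpu_ms = 300000  # 5 minutes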
Cold Start Mitigation Strategies
Understanding Cold Starts
Workers have near-instant cold starts (milliseconds) due to V8 isolates, but optimization is still important.
Cold start triggers:
- First request after deployment
- Geographic distribution (new edge location)
- Eviction due to inactivity
Optimization Strategies
1. Minimize Module Imports
typescript
// BAD: Large bundle size
import _ from "lodash";
import moment from "moment";
// GOOD: Import only what you need
import { debounce } from "lodash-es";
import { format } from "date-fns";
2. Lazy Loading
typescript
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === "/complex") {
// Load module only when needed
const { processComplex } = await import("./complex-handler");
return processComplex(request, env);
}
// Fast path doesn't load heavy modules
return Response.json({ status: "ok" });
}
};
3. Reduce Script Size
bash
# Use esbuild or Rollup for tree-shaking
esbuild src/index.ts --bundle --minify --outfile=dist/worker.js
# Wrangler bundles automatically; add --minify to minify
wrangler deploy --minify
4. Avoid Heavy Initialization
typescript
// BAD: Heavy computation at module level
const largeConfig = computeExpensiveConfig();
export default {
async fetch(request: Request): Promise<Response> {
return Response.json(largeConfig);
}
};
// GOOD: Lazy initialization
let cachedConfig: Config | null = null;
export default {
async fetch(request: Request): Promise<Response> {
if (!cachedConfig) {
cachedConfig = await loadConfig();
}
return Response.json(cachedConfig);
}
};
5. Use Durable Objects for Warm State
typescript
// Durable Object stays warm across requests
export class WarmCache extends DurableObject {
private cache: Map<string, any> = new Map();
constructor(ctx: DurableObjectState, env: Env) {
super(ctx, env);
// Load data once, reuse across requests
ctx.blockConcurrencyWhile(async () => {
const stored = await this.ctx.storage.get("cache");
this.cache = stored || new Map();
});
}
async fetch(request: Request): Promise<Response> {
// In-memory cache persists across requests
return Response.json(Array.from(this.cache.entries()));
}
}
6. Smart Placement (Durable Objects)
typescript
// Create Durable Object near users
const id = env.CACHE.idFromName("user-123", {
locationHint: "enam" // Eastern North America
});
7. Pre-warm Critical Paths
bash
# Deploy with health check endpoint
curl https://your-worker.workers.dev/health
# Pre-warm multiple regions
for region in us-east us-west eu-west; do
curl https://your-worker.workers.dev/health
done
8. Use Compatibility Dates Wisely
toml
# Stay current to benefit from runtime optimizations
compatibility_date = "2024-01-01"
Cold Start Impact by Architecture
| Pattern | Cold Start Impact | Mitigation |
|---|---|---|
| Simple API | Minimal (<10ms) | No action needed |
| Heavy imports | Medium (10-50ms) | Tree-shake, lazy load |
| Durable Objects | Low (isolate reuse) | Keep objects active |
| Service Bindings | Minimal (same thread) | No action needed |
| External APIs | N/A (network bound) | Use KV caching |
Monitoring Cold Starts
typescript
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const startTime = Date.now();
// Your logic here
const result = await processRequest(request, env);
const duration = Date.now() - startTime;
// Log potential cold starts
if (duration > 100) {
console.warn("Slow request detected:", {
duration,
path: new URL(request.url).pathname,
possibleColdStart: true
});
}
return result;
}
};
Code Examples and Patterns
Complete API Endpoint Pattern
typescript
import { Hono } from "hono";
import { cors } from "hono/cors";
import { logger } from "hono/logger";
export interface Env {
DB: D1Database;
KV: KVNamespace;
TASK_QUEUE: DurableObjectNamespace<TaskQueue>;
ANTHROPIC_API_KEY: string;
}
const app = new Hono<{ Bindings: Env }>();
// Middleware
app.use("*", cors());
app.use("*", logger());
// Error handler
app.onError((err, c) => {
console.error("Request failed:", err);
return c.json({ error: "Internal server error" }, 500);
});
// Routes
app.get("/", (c) => c.json({ status: "ok" }));
app.get("/tasks", async (c) => {
const tasks = await c.env.DB
.prepare("SELECT * FROM tasks ORDER BY created_at DESC LIMIT 100")
.all();
return c.json(tasks.results);
});
app.post("/tasks", async (c) => {
const body = await c.req.json<{ title: string; description: string }>();
const id = crypto.randomUUID();
await c.env.DB
.prepare("INSERT INTO tasks (id, title, description, status) VALUES (?, ?, ?, ?)")
.bind(id, body.title, body.description, "pending")
.run();
// Enqueue for processing
const queueId = c.env.TASK_QUEUE.idFromName("main-queue");
const queue = c.env.TASK_QUEUE.get(queueId);
await queue.enqueue({ taskId: id });
return c.json({ id, status: "created" }, 201);
});
app.get("/tasks/:id", async (c) => {
const id = c.req.param("id");
// Check cache first
const cached = await c.env.KV.get(`task:${id}`, "json");
if (cached) {
return c.json(cached);
}
// Fetch from database
const task = await c.env.DB
.prepare("SELECT * FROM tasks WHERE id = ?")
.bind(id)
.first();
if (!task) {
return c.json({ error: "Not found" }, 404);
}
// Cache for 5 minutes
await c.env.KV.put(`task:${id}`, JSON.stringify(task), {
expirationTtl: 300
});
return c.json(task);
});
export default app;
Task Queue with Durable Objects (Complete Example)
typescript
import { DurableObject } from "cloudflare:workers";
export interface Env {
TASK_QUEUE: DurableObjectNamespace<TaskQueue>;
TASK_PROCESSOR: Queue;
}
interface Task {
id: string;
type: string;
payload: any;
priority: number;
createdAt: number;
}
export class TaskQueue extends DurableObject {
private processing: boolean = false;
async enqueue(task: Task): Promise<void> {
// Get current queue
const queue = await this.ctx.storage.get<Task[]>("queue") || [];
// Add task with priority
queue.push(task);
queue.sort((a, b) => b.priority - a.priority);
await this.ctx.storage.put("queue", queue);
// Trigger processing
if (!this.processing) {
await this.ctx.storage.setAlarm(Date.now());
}
}
async dequeue(): Promise<Task | null> {
const queue = await this.ctx.storage.get<Task[]>("queue") || [];
if (queue.length === 0) return null;
const task = queue.shift()!; // non-null: length checked above
await this.ctx.storage.put("queue", queue);
return task;
}
async alarm() {
this.processing = true;
try {
let task: Task | null;
let processed = 0;
// Process up to 10 tasks per alarm
while ((task = await this.dequeue()) !== null && processed < 10) {
await this.processTask(task);
processed++;
}
// Schedule next alarm if more tasks
const queue = await this.ctx.storage.get<Task[]>("queue") || [];
if (queue.length > 0) {
await this.ctx.storage.setAlarm(Date.now() + 1000);
}
} finally {
this.processing = false;
}
}
async processTask(task: Task): Promise<void> {
console.log(`Processing task ${task.id} of type ${task.type}`);
// Actual task execution logic
switch (task.type) {
case "elicitation":
await this.runElicitation(task.payload);
break;
case "implementation":
await this.runImplementation(task.payload);
break;
default:
console.warn(`Unknown task type: ${task.type}`);
}
}
async runElicitation(payload: any): Promise<void> {
// AI agent execution
}
async runImplementation(payload: any): Promise<void> {
// AI agent execution
}
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === "/enqueue" && request.method === "POST") {
const task = await request.json<Task>();
await this.enqueue(task);
return Response.json({ success: true });
}
if (url.pathname === "/status") {
const queue = await this.ctx.storage.get<Task[]>("queue") || [];
return Response.json({
queueLength: queue.length,
processing: this.processing
});
}
return new Response("Not Found", { status: 404 });
}
}
// Worker that uses the queue
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const queueId = env.TASK_QUEUE.idFromName("main-queue");
const queue = env.TASK_QUEUE.get(queueId);
return queue.fetch(request);
}
};
WebSocket Real-time Updates
typescript
export class RealtimeUpdates extends DurableObject {
private sessions: Map<string, WebSocket> = new Map();
async fetch(request: Request): Promise<Response> {
if (request.headers.get("Upgrade") !== "websocket") {
return new Response("Expected WebSocket", { status: 426 });
}
const pair = new WebSocketPair();
const [client, server] = [pair[0], pair[1]];
const sessionId = crypto.randomUUID();
server.accept();
this.sessions.set(sessionId, server);
server.addEventListener("message", async (event) => {
const message = JSON.parse(event.data as string);
if (message.type === "subscribe") {
// Subscribe to task updates
await this.subscribeToTask(sessionId, message.taskId);
}
});
server.addEventListener("close", () => {
this.sessions.delete(sessionId);
});
return new Response(null, {
status: 101,
webSocket: client,
});
}
async subscribeToTask(sessionId: string, taskId: string) {
// Store subscription
const subscriptions = await this.ctx.storage.get<Map<string, Set<string>>>("subscriptions") || new Map();
if (!subscriptions.has(taskId)) {
subscriptions.set(taskId, new Set());
}
subscriptions.get(taskId)!.add(sessionId);
await this.ctx.storage.put("subscriptions", subscriptions);
}
async broadcastTaskUpdate(taskId: string, update: any) {
const subscriptions = await this.ctx.storage.get<Map<string, Set<string>>>("subscriptions") || new Map();
const subscribers = subscriptions.get(taskId) || new Set();
const message = JSON.stringify({
type: "task-update",
taskId,
update
});
for (const sessionId of subscribers) {
const socket = this.sessions.get(sessionId);
if (socket) {
socket.send(message);
}
}
}
}
Service Binding for Microservices
typescript
// auth-service/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export class AuthService extends WorkerEntrypoint<Env> {
async verifyToken(token: string): Promise<User | null> {
// Verify JWT token
const payload = await this.decodeJWT(token);
if (!payload) return null;
return {
id: payload.sub,
email: payload.email,
role: payload.role
};
}
async createToken(userId: string): Promise<string> {
// Generate JWT
return this.signJWT({ sub: userId });
}
private async decodeJWT(token: string): Promise<any> {
// JWT verification logic
}
private async signJWT(payload: any): Promise<string> {
// JWT signing logic
}
}
export default AuthService;
// api-service/src/index.ts
export interface Env {
AUTH: Service<AuthService>;
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const token = request.headers.get("Authorization")?.replace("Bearer ", "");
if (!token) {
return new Response("Unauthorized", { status: 401 });
}
// Call auth service via RPC
const user = await env.AUTH.verifyToken(token);
if (!user) {
return new Response("Invalid token", { status: 401 });
}
return Response.json({ user });
}
};
Summary & Recommendations for MonoTask
Architecture Mapping
| MonoTask Component | Cloudflare Service | Notes |
|---|---|---|
| Dashboard API | Workers (Hono framework) | HTTP endpoints, REST API |
| Daemon/Queue | Durable Objects + Queues | Task queue management, state machine |
| AI Agents | Workers + Workflows | Agent execution with state persistence |
| Database | D1 (SQLite) or external DB | Relational data storage |
| Session Management | KV | User sessions, cache |
| WebSocket Updates | Durable Objects + WebSocket | Real-time task updates |
| Background Jobs | Scheduled Events + Queues | Cron tasks, batch processing |
| Task State | Durable Objects | State machine coordination |
Migration Strategy
Phase 1: API Layer
- Migrate dashboard API to Workers using Hono
- Use D1 for database or keep external PostgreSQL
- KV for session management and caching
- Service bindings for microservices
Phase 2: Queue System
- Implement task queue with Durable Objects
- Use Alarms API for scheduled processing
- Migrate from daemon to Cloudflare Queues
Phase 3: Agent Execution
- Workers for short-running agents (<30s CPU time)
- Workflows for multi-phase agents (elicitation → planning → implementation)
- Durable Objects for agent state coordination
Phase 4: Real-time Features
- WebSocket server with Durable Objects
- Hibernation API for scaling
- Broadcast task updates to connected clients
Phase 5: Background Jobs
- Scheduled events for cron tasks
- Queue consumers for async processing
- Workflows for long-running operations
Performance Considerations
- CPU Time: Configure 5-minute limit for AI agent execution
- Paid Tier: Required for production usage
- Geographic Distribution: Use location hints for Durable Objects
- Caching: Aggressive KV caching to reduce database load
- Batching: Use Queues for efficient batch processing
Cost Optimization
- Free Tier: 100,000 requests/day (development only)
- Paid Tier: $5/month + usage
- $0.50 per million requests
- $0.02 per million Durable Object requests
- KV: $0.50/GB storage, $0.50/million reads
- Queues: $0.40 per million operations
Estimated MonoTask costs (moderate usage):
- API requests: ~$2-5/month
- Durable Objects: ~$5-10/month
- KV: ~$1-2/month
- Total: ~$10-20/month for moderate usage
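The arithmetic behind that estimate can be sketched as a small function. The unit prices are the ones quoted in this document, not necessarily current Cloudflare pricing, and the function itself is illustrative:

```typescript
// Unit prices as listed above (assumptions from this document).
const BASE_MONTHLY = 5.0;        // paid plan base, $/month
const PER_M_REQUESTS = 0.5;      // $ per million Worker requests
const PER_M_DO_REQUESTS = 0.02;  // $ per million Durable Object requests
const PER_M_QUEUE_OPS = 0.4;     // $ per million Queue operations

function estimateMonthlyCost(opts: {
  requestsMillions: number;
  doRequestsMillions: number;
  queueOpsMillions: number;
}): number {
  return (
    BASE_MONTHLY +
    opts.requestsMillions * PER_M_REQUESTS +
    opts.doRequestsMillions * PER_M_DO_REQUESTS +
    opts.queueOpsMillions * PER_M_QUEUE_OPS
  );
}
```

For example, 10M API requests, 5M Durable Object requests, and 2M queue operations per month lands around $10.90, consistent with the $10-20 range above.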
Additional Resources
- Official Docs: https://developers.cloudflare.com/workers/
- Examples: https://developers.cloudflare.com/workers/examples/
- Discord: Cloudflare Developers Discord
- GitHub: https://github.com/cloudflare/workers-sdk
Research compiled: 2025-10-25
Target system: MonoTask AI Agent Automation Platform
Focus: Migration to Cloudflare Workers for serverless AI agent orchestration