# Cronicorn

Cronicorn is an intelligent HTTP scheduler that automatically adapts to your application's behavior. It replaces traditional cron jobs with AI-powered scheduling that adjusts execution frequency based on real response data, error patterns, and system load. Set baseline schedules, describe what matters in plain English, and let the AI optimize timing automatically while respecting your safety constraints.

The platform provides three interfaces for managing scheduled jobs: a Web UI at cronicorn.com, an MCP Server for AI assistants (Claude, Copilot, Cursor), and a REST API for programmatic access.

Cronicorn works perfectly well without AI as a reliable baseline scheduler, but when AI is enabled, it analyzes endpoint responses and adapts frequency: tightening monitoring during incidents and relaxing when stable. All AI suggestions have TTLs (time-to-live) and automatically revert to baseline, ensuring the system is self-healing.

## Authentication

Cronicorn supports two authentication methods: API keys for programmatic access and OAuth Device Flow for CLI tools and AI agents.

```bash
# API Key Authentication
# Generate keys in the web UI at /settings/api-keys
curl -H "x-api-key: cron_abc123..." \
  https://api.cronicorn.com/api/jobs

# OAuth Device Flow for CLI/AI tools
# Step 1: Request device code
curl -X POST https://api.cronicorn.com/api/auth/device/code
# Response:
# {
#   "device_code": "DEVICE_CODE",
#   "user_code": "ABCD-1234",
#   "verification_uri": "https://cronicorn.com/device",
#   "expires_in": 1800
# }

# Step 2: User authorizes in browser at verification_uri

# Step 3: Poll for token
curl -X POST https://api.cronicorn.com/api/auth/device/token \
  -H "Content-Type: application/json" \
  -d '{"device_code": "DEVICE_CODE"}'
# Response:
# {
#   "access_token": "eyJ...",
#   "token_type": "Bearer",
#   "expires_in": 2592000
# }

# Step 4: Use bearer token in requests
curl -H "Authorization: Bearer eyJ..." \
  https://api.cronicorn.com/api/jobs
```

## Create Job

Jobs are containers that group related endpoints together. Endpoints in the same job can coordinate through sibling visibility: the AI sees all their responses for multi-endpoint workflows.

```bash
curl -X POST https://api.cronicorn.com/api/jobs \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "API Health Checks",
    "description": "Monitor our production APIs with adaptive frequency"
  }'

# Response:
# {
#   "id": "job_abc123",
#   "name": "API Health Checks",
#   "description": "Monitor our production APIs with adaptive frequency",
#   "status": "active",
#   "createdAt": "2026-02-03T12:00:00Z"
# }
```

## List Jobs

Retrieve all jobs for your account with optional status filtering.

```bash
# List all active jobs
curl -H "x-api-key: YOUR_API_KEY" \
  "https://api.cronicorn.com/api/jobs?status=active"

# Get a specific job
curl -H "x-api-key: YOUR_API_KEY" \
  https://api.cronicorn.com/api/jobs/job_abc123

# Update a job
curl -X PATCH https://api.cronicorn.com/api/jobs/job_abc123 \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Production API Health Checks",
    "description": "Updated description"
  }'
```

## Job Lifecycle Management

Control job execution state with pause, resume, and archive operations.

```bash
# Pause a job (stops all endpoint executions)
curl -X POST https://api.cronicorn.com/api/jobs/job_abc123/pause \
  -H "x-api-key: YOUR_API_KEY"

# Resume a paused job
curl -X POST https://api.cronicorn.com/api/jobs/job_abc123/resume \
  -H "x-api-key: YOUR_API_KEY"

# Archive a job (permanent, removes from active list)
curl -X POST https://api.cronicorn.com/api/jobs/job_abc123/archive \
  -H "x-api-key: YOUR_API_KEY"
```

## Add Endpoint

Endpoints are HTTP requests that run on a schedule. Configure URL, method, baseline schedule (cron or interval), constraints, and a natural language description for AI adaptation.
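The interval fields have to be mutually consistent: the floor/ceiling semantics imply `minIntervalMs ≤ baselineIntervalMs ≤ maxIntervalMs`, and an endpoint takes exactly one baseline schedule (interval or cron). A small client-side validator, sketched in Python as a hypothetical helper (not part of Cronicorn's API, which also validates server-side), can catch misconfigurations before sending the request:

```python
def validate_endpoint_config(cfg: dict) -> list:
    """Return a list of problems with an endpoint config (empty = OK).

    Hypothetical client-side helper; Cronicorn validates server-side too.
    """
    problems = []
    for field in ("name", "url", "method"):
        if not cfg.get(field):
            problems.append("missing required field: " + field)
    if cfg.get("method") not in ("GET", "POST", "PUT", "PATCH", "DELETE"):
        problems.append("method must be GET, POST, PUT, PATCH, or DELETE")
    # Exactly one baseline schedule: interval or cron
    if ("baselineIntervalMs" in cfg) == ("baselineCron" in cfg):
        problems.append("set exactly one of baselineIntervalMs or baselineCron")
    lo = cfg.get("minIntervalMs")
    base = cfg.get("baselineIntervalMs")
    hi = cfg.get("maxIntervalMs")
    # Safety floor <= baseline <= freshness ceiling
    if base is not None:
        if lo is not None and base < lo:
            problems.append("baselineIntervalMs is below minIntervalMs")
        if hi is not None and base > hi:
            problems.append("baselineIntervalMs is above maxIntervalMs")
    return problems
```

Run it on the payload dict before the `curl`/`fetch` call; an empty list means the config is internally consistent.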
```bash
curl -X POST https://api.cronicorn.com/api/jobs/job_abc123/endpoints \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Main API Health",
    "url": "https://api.example.com/health",
    "method": "GET",
    "baselineIntervalMs": 300000,
    "minIntervalMs": 30000,
    "maxIntervalMs": 900000,
    "timeoutMs": 30000,
    "description": "Monitors API health. Poll more frequently when status shows errors or error_rate_pct > 5%. Return to baseline when metrics normalize and error_rate_pct < 2%."
  }'

# Response:
# {
#   "id": "ep_xyz789",
#   "name": "Main API Health",
#   "jobId": "job_abc123",
#   "baselineIntervalMs": 300000,
#   "minIntervalMs": 30000,
#   "maxIntervalMs": 900000,
#   "createdAt": "2026-02-03T12:00:00Z"
# }
```

## Endpoint Configuration Fields

Complete reference for endpoint configuration options accepted by all interfaces (API, MCP Server, Web UI).

```json
{
  "name": "api-health-check",
  "url": "https://api.example.com/health",
  "method": "GET",
  "baselineIntervalMs": 300000,
  "minIntervalMs": 30000,
  "maxIntervalMs": 900000,
  "timeoutMs": 10000,
  "headersJson": { "Authorization": "Bearer token" },
  "bodyJson": null,
  "description": "Monitors API health. Poll more frequently when errors are detected."
}
```

Field reference:

- `name` (required): What this endpoint does
- `url` (required): HTTP endpoint to call
- `method` (required): GET, POST, PUT, PATCH, or DELETE
- `baselineIntervalMs`: Milliseconds between runs (OR `baselineCron`)
- `baselineCron`: Cron expression, e.g. `"*/5 * * * *"` (OR `baselineIntervalMs`)
- `minIntervalMs`: Minimum allowed interval (safety floor)
- `maxIntervalMs`: Maximum allowed interval (freshness guarantee)
- `timeoutMs`: Request timeout (default: 30000ms)
- `headersJson`: Custom HTTP headers object
- `bodyJson`: Request body for POST/PUT/PATCH
- `description`: Natural language instructions for AI adaptation

## List and Get Endpoints

Retrieve endpoints for a job or get details for a specific endpoint.
```bash
# List all endpoints in a job
curl -H "x-api-key: YOUR_API_KEY" \
  https://api.cronicorn.com/api/jobs/job_abc123/endpoints

# Get specific endpoint details
curl -H "x-api-key: YOUR_API_KEY" \
  https://api.cronicorn.com/api/jobs/job_abc123/endpoints/ep_xyz789

# Update an endpoint
curl -X PATCH https://api.cronicorn.com/api/jobs/job_abc123/endpoints/ep_xyz789 \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "baselineIntervalMs": 600000,
    "description": "Updated monitoring interval"
  }'
```

## Pause and Resume Endpoint

Temporarily pause an endpoint with an optional resume time.

```bash
# Pause until a specific time
curl -X POST https://api.cronicorn.com/api/endpoints/ep_xyz789/pause \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "pausedUntil": "2026-02-03T14:00:00Z", "reason": "Maintenance window" }'

# Resume immediately (set pausedUntil to null)
curl -X POST https://api.cronicorn.com/api/endpoints/ep_xyz789/pause \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "pausedUntil": null, "reason": "Maintenance complete" }'

# Archive an endpoint
curl -X POST https://api.cronicorn.com/api/jobs/job_abc123/endpoints/ep_xyz789/archive \
  -H "x-api-key: YOUR_API_KEY"
```

## Apply AI Interval Hint

Temporarily adjust execution frequency. Hints override the baseline schedule and expire automatically (TTL), reverting to baseline.

```bash
# Tighten monitoring to every 30 seconds for 1 hour
curl -X POST https://api.cronicorn.com/api/endpoints/ep_xyz789/hints/interval \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "intervalMs": 30000,
    "ttlMinutes": 60,
    "reason": "Increased monitoring during incident"
  }'

# Response:
# { "success": true, "intervalMs": 30000, "expiresAt": "2026-02-03T13:00:00Z" }
```

## Schedule One-Shot Execution

Trigger a specific one-time execution at a given time or immediately.
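A one-shot hint takes an ISO 8601 `nextRunAt` timestamp. As a convenience, a small helper (hypothetical, client-side; not part of the Cronicorn API) can build the request payload for a run N minutes from now:

```python
from datetime import datetime, timedelta, timezone

def oneshot_payload(minutes_from_now: int, reason: str, ttl_minutes: int = 30) -> dict:
    """Build a one-shot hint payload with an ISO 8601 UTC nextRunAt.

    Hypothetical client-side helper; assumes seconds precision with a
    trailing "Z", matching the timestamps used elsewhere in these docs.
    """
    run_at = datetime.now(timezone.utc) + timedelta(minutes=minutes_from_now)
    return {
        "nextRunAt": run_at.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "ttlMinutes": ttl_minutes,
        "reason": reason,
    }
```

POST the returned dict as the JSON body of the one-shot hint request.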
```bash
# Schedule a one-time run at a specific time
curl -X POST https://api.cronicorn.com/api/endpoints/ep_xyz789/hints/oneshot \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "nextRunAt": "2026-02-03T12:30:00Z",
    "ttlMinutes": 30,
    "reason": "Immediate investigation after deployment"
  }'
```

## Clear Hints and Reset Failures

Return to the baseline schedule and clear failure tracking.

```bash
# Clear all AI hints, revert to baseline schedule
curl -X DELETE https://api.cronicorn.com/api/endpoints/ep_xyz789/hints \
  -H "x-api-key: YOUR_API_KEY"

# Reset failure count (clears exponential backoff)
curl -X POST https://api.cronicorn.com/api/endpoints/ep_xyz789/reset-failures \
  -H "x-api-key: YOUR_API_KEY"
```

## List Endpoint Runs

Get execution history for an endpoint with optional filtering.

```bash
# Get recent runs with status filter
curl -H "x-api-key: YOUR_API_KEY" \
  "https://api.cronicorn.com/api/endpoints/ep_xyz789/runs?limit=20&status=failed"

# Response:
# {
#   "runs": [
#     {
#       "id": "run_abc123",
#       "endpointId": "ep_xyz789",
#       "status": "failed",
#       "statusCode": 503,
#       "durationMs": 1200,
#       "error": "Service Unavailable",
#       "startedAt": "2026-02-03T12:00:00Z"
#     }
#   ]
# }
```

## Get Run Details

Retrieve complete details for a specific execution run, including the response body.

```bash
curl -H "x-api-key: YOUR_API_KEY" \
  https://api.cronicorn.com/api/runs/run_abc123

# Response:
# {
#   "id": "run_abc123",
#   "endpointId": "ep_xyz789",
#   "status": "success",
#   "statusCode": 200,
#   "durationMs": 145,
#   "responseBody": { "healthy": true, "queue_depth": 45 },
#   "startedAt": "2026-02-03T12:00:00Z",
#   "completedAt": "2026-02-03T12:00:00.145Z",
#   "source": "baseline-interval"
# }
```

## Get Endpoint Health

Get aggregated health metrics for an endpoint over a time window.
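The health response reports raw counts plus a failure streak. If you consume these metrics in your own tooling, a small decision helper (illustrative only; the thresholds are assumptions, not Cronicorn defaults) shows one way to turn them into an alerting signal:

```python
def should_alert(health: dict, min_success_rate: float = 95.0, max_streak: int = 3) -> bool:
    """Decide whether an endpoint's aggregated health warrants attention.

    Illustrative sketch: thresholds are made-up, not Cronicorn defaults.
    """
    total = health.get("successCount", 0) + health.get("failureCount", 0)
    if total == 0:
        return False  # no runs in the window, nothing to judge
    success_rate = 100.0 * health.get("successCount", 0) / total
    # Alert on a low overall rate OR a run of consecutive failures
    return success_rate < min_success_rate or health.get("failureStreak", 0) >= max_streak
```

Feed it the JSON body returned by the health endpoint below.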
```bash
curl -H "x-api-key: YOUR_API_KEY" \
  "https://api.cronicorn.com/api/endpoints/ep_xyz789/health?sinceHours=24"

# Response:
# {
#   "endpointId": "ep_xyz789",
#   "successCount": 285,
#   "failureCount": 3,
#   "successRate": 98.96,
#   "avgDurationMs": 142,
#   "lastRunAt": "2026-02-03T12:00:00Z",
#   "lastStatus": "success",
#   "failureStreak": 0
# }
```

## Get Dashboard Stats

Get aggregated statistics across all jobs and endpoints.

```bash
curl -H "x-api-key: YOUR_API_KEY" \
  "https://api.cronicorn.com/api/dashboard?startDate=2026-02-01&endDate=2026-02-03"
```

## AI Analysis Sessions

View AI decision history to understand why scheduling changes were made.

```bash
# List analysis sessions for an endpoint
curl -H "x-api-key: YOUR_API_KEY" \
  "https://api.cronicorn.com/api/endpoints/ep_xyz789/analysis-sessions?limit=10"

# Response:
# {
#   "sessions": [
#     {
#       "id": "session_abc123",
#       "endpointId": "ep_xyz789",
#       "createdAt": "2026-02-03T12:00:00Z",
#       "reasoning": "Queue depth increased from 50 to 200 over last 5 runs. Tightening monitoring interval from 5 minutes to 30 seconds.",
#       "toolsCalled": ["get_response_history", "propose_interval"],
#       "tokenUsage": 1250,
#       "durationMs": 3200
#     }
#   ],
#   "total": 45,
#   "hasMore": true
# }

# Get details for a specific session
curl -H "x-api-key: YOUR_API_KEY" \
  https://api.cronicorn.com/api/analysis-sessions/session_abc123
```

## JavaScript/TypeScript: Complete Health Monitoring Setup

Create a job with an adaptive health monitoring endpoint using the Fetch API.
```javascript
const API_KEY = 'YOUR_API_KEY';
const BASE_URL = 'https://api.cronicorn.com';

async function createHealthMonitoringJob() {
  // Step 1: Create the job
  const jobResponse = await fetch(`${BASE_URL}/api/jobs`, {
    method: 'POST',
    headers: { 'x-api-key': API_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: 'Production API Monitoring',
      description: 'Monitors API health with adaptive frequency during incidents'
    })
  });
  if (!jobResponse.ok) {
    const error = await jobResponse.json();
    throw new Error(`Failed to create job: ${error.error?.message}`);
  }
  const job = await jobResponse.json();
  console.log('Created job:', job.id);

  // Step 2: Add endpoint with AI-adaptive description
  const endpointResponse = await fetch(`${BASE_URL}/api/jobs/${job.id}/endpoints`, {
    method: 'POST',
    headers: { 'x-api-key': API_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: 'api-health-check',
      url: 'https://api.example.com/health',
      method: 'GET',
      baselineIntervalMs: 300000, // 5 minutes
      minIntervalMs: 30000,       // 30 seconds floor
      maxIntervalMs: 900000,      // 15 minutes ceiling
      timeoutMs: 10000,
      description: 'Monitor API health. When error_rate_pct > 5%, tighten polling to 30 seconds. Return to baseline when error_rate_pct < 2% and status is healthy.'
    })
  });
  if (!endpointResponse.ok) {
    const error = await endpointResponse.json();
    throw new Error(`Failed to create endpoint: ${error.error?.message}`);
  }
  const endpoint = await endpointResponse.json();
  console.log('Created endpoint:', endpoint.id);

  return { job, endpoint };
}

// Helper functions for runtime management
async function tightenMonitoring(endpointId, intervalMs = 30000, ttlMinutes = 60) {
  const response = await fetch(`${BASE_URL}/api/endpoints/${endpointId}/hints/interval`, {
    method: 'POST',
    headers: { 'x-api-key': API_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      intervalMs,
      ttlMinutes,
      reason: `Manual override: ${intervalMs}ms for ${ttlMinutes} min`
    })
  });
  if (!response.ok) throw new Error(`Failed to apply hint: ${response.statusText}`);
}

async function returnToBaseline(endpointId) {
  await fetch(`${BASE_URL}/api/endpoints/${endpointId}/hints`, {
    method: 'DELETE',
    headers: { 'x-api-key': API_KEY }
  });
}

async function checkHealth(endpointId, sinceHours = 24) {
  const response = await fetch(`${BASE_URL}/api/endpoints/${endpointId}/health?sinceHours=${sinceHours}`, {
    headers: { 'x-api-key': API_KEY }
  });
  return response.json();
}

createHealthMonitoringJob().catch(console.error);
```

## Python: Create Job and Dynamic Scheduling

Python implementation for creating jobs and adjusting scheduling based on external metrics.
```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.cronicorn.com"
HEADERS = {"x-api-key": API_KEY, "Content-Type": "application/json"}

# Create job and endpoint
job_response = requests.post(
    f"{BASE_URL}/api/jobs",
    headers=HEADERS,
    json={
        "name": "Production API Monitoring",
        "description": "Monitors API health with adaptive frequency"
    }
)
job_response.raise_for_status()
job = job_response.json()
print(f"Created job: {job['id']}")

endpoint_response = requests.post(
    f"{BASE_URL}/api/jobs/{job['id']}/endpoints",
    headers=HEADERS,
    json={
        "name": "api-health-check",
        "url": "https://api.example.com/health",
        "method": "GET",
        "baselineIntervalMs": 300000,
        "minIntervalMs": 30000,
        "maxIntervalMs": 900000,
        "timeoutMs": 10000,
        "description": "Poll every 30s when status is degraded or error_rate_pct > 5%. Return to baseline when healthy."
    }
)
endpoint_response.raise_for_status()
endpoint = endpoint_response.json()
print(f"Created endpoint: {endpoint['id']}")

# Dynamic scheduling based on external metrics
def adjust_scheduling_from_metrics(endpoint_id):
    """Check external metrics and apply Cronicorn scheduling hints."""
    metrics = requests.get("https://your-monitoring.com/api/metrics").json()
    error_rate = metrics.get("error_rate_pct", 0)
    cpu_load = metrics.get("cpu_load_pct", 0)

    if error_rate > 10 or cpu_load > 90:
        # Critical: tighten to 30 seconds
        requests.post(
            f"{BASE_URL}/api/endpoints/{endpoint_id}/hints/interval",
            headers=HEADERS,
            json={"intervalMs": 30000, "ttlMinutes": 30,
                  "reason": f"Critical: error={error_rate}%, cpu={cpu_load}%"}
        )
        print("Tightened to 30s")
    elif error_rate > 5:
        # Warning: tighten to 1 minute
        requests.post(
            f"{BASE_URL}/api/endpoints/{endpoint_id}/hints/interval",
            headers=HEADERS,
            json={"intervalMs": 60000, "ttlMinutes": 15,
                  "reason": f"Warning: error_rate={error_rate}%"}
        )
    else:
        # Normal: clear hints
        requests.delete(f"{BASE_URL}/api/endpoints/{endpoint_id}/hints",
                        headers={"x-api-key": API_KEY})
        print("Normal: returned to baseline")
```

## MCP Server Setup

Install and configure the MCP Server to manage Cronicorn through AI assistants like Claude, Copilot, or Cursor.

```bash
# Install globally
npm install -g @cronicorn/mcp-server

# Claude Code CLI
claude mcp add cronicorn -- npx -y @cronicorn/mcp-server

# Claude Desktop / Copilot / Cursor (add to config file)
# {
#   "mcpServers": {
#     "cronicorn": {
#       "command": "npx",
#       "args": ["-y", "@cronicorn/mcp-server"]
#     }
#   }
# }

# First use triggers OAuth in your browser. Approve once, stay connected for 30 days.

# Example conversations:
# "Check my API health every 5 minutes"
# "Show me why that endpoint is failing"
# "Migrate my 10 Vercel cron jobs to Cronicorn"
```

## MCP Tool Call Examples

Concrete examples of MCP tool invocations showing inputs and expected responses.

```json
// createJob
// Input:
{ "name": "Production API Monitoring", "description": "Monitors API health with adaptive frequency" }
// Response:
{ "id": "job_abc123", "name": "Production API Monitoring", "status": "active" }

// addEndpoint
// Input:
{
  "jobId": "job_abc123",
  "name": "api-health-check",
  "url": "https://api.example.com/health",
  "method": "GET",
  "baselineIntervalMs": 300000,
  "minIntervalMs": 30000,
  "description": "Poll every 30s when status is degraded. Return to baseline when healthy."
}
// Response:
{ "id": "ep_xyz789", "name": "api-health-check", "jobId": "job_abc123" }

// applyIntervalHint
// Input:
{ "id": "ep_xyz789", "intervalMs": 30000, "ttlMinutes": 60, "reason": "Incident detected" }
// Response:
{ "success": true, "intervalMs": 30000, "expiresAt": "2026-02-03T13:00:00Z" }

// getEndpointHealth
// Input:
{ "id": "ep_xyz789", "sinceHours": 24 }
// Response:
{ "successCount": 285, "failureCount": 3, "successRate": 98.96, "failureStreak": 0 }

// listEndpointRuns
// Input:
{ "id": "ep_xyz789", "limit": 5, "status": "failed" }
// Response:
{ "runs": [ { "id": "run_001", "status": "failed", "statusCode": 503, "error": "Service Unavailable" } ] }

// resetFailures
// Input:
{ "id": "ep_xyz789" }
// Response:
{ "success": true, "failureCount": 0 }
```

## Self-Hosting with Docker Compose

Run Cronicorn on your own infrastructure using Docker Compose.

```bash
# Download docker-compose.yml from GitHub
# Create .env file with required secrets

# Minimal .env for local use:
BETTER_AUTH_SECRET=your-random-32-character-secret-here

# Production .env:
BETTER_AUTH_SECRET=your-random-32-character-secret-here
WEB_URL=https://yourdomain.com
API_URL=http://cronicorn-api:3333
BETTER_AUTH_URL=https://yourdomain.com

# Optional: GitHub OAuth
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret

# Optional: AI Scheduling
OPENAI_API_KEY=sk-your-key-here
AI_MODEL=gpt-4o-mini

# Start services
docker compose up -d

# Access:
# - Web app: http://localhost:5173
# - API: http://localhost:3333
# - Default login: admin@example.com / devpassword

# Useful commands
docker compose logs -f                        # View logs
docker compose logs -f api                    # Specific service logs
docker compose restart                        # Restart services
docker compose pull && docker compose up -d   # Update to latest
```

## Response Body Design for AI Adaptation

Structure your endpoint response bodies to include fields the AI can interpret. The AI reads response bodies automatically, so no parsing code is needed on your side.
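On your application's side this usually means a health handler that assembles explicit, machine-readable signal fields. A minimal sketch (field names mirror the samples in this section; the clear structure, not the exact names, is what matters):

```python
def build_health_payload(error_count: int, request_count: int, latency_ms: float) -> dict:
    """Assemble an AI-readable health response body.

    Illustrative sketch: status thresholds (2% / 10%) are assumptions,
    chosen here only to demonstrate the pattern.
    """
    error_rate = 100.0 * error_count / request_count if request_count else 0.0
    status = "healthy" if error_rate < 2 else "degraded" if error_rate < 10 else "error"
    return {
        "status": status,
        "error_rate_pct": round(error_rate, 1),
        "latency_ms": latency_ms,
        "healthy": status == "healthy",
    }
```

Return this dict as the JSON body of your health endpoint; the explicit `status` and `error_rate_pct` fields give the AI something unambiguous to react to.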
```json
// Health monitoring response
{
  "status": "degraded",
  "error_rate_pct": 8.5,
  "latency_ms": 1200,
  "healthy": false,
  "timestamp": "2026-02-03T12:05:00Z"
}

// Data sync response with volume metrics
{
  "records_pending": 15000,
  "sync_rate_per_minute": 500,
  "estimated_completion_minutes": 30,
  "status": "syncing"
}

// System load with inverse scaling signals
{
  "cpu_pct": 85,
  "memory_pct": 72,
  "load_avg_1m": 4.2,
  "recommendation": "reduce_polling"
}

// Stability-focused with smoothed values
{
  "instant_value": 523,
  "avg_5min": 487,
  "avg_1hr": 502,
  "trend": "stable",
  "within_normal_range": true
}

// Coordination signals for multi-endpoint workflows
{
  "status": "error",
  "error_count": 15,
  "needs_recovery": true,
  "last_error": "Connection refused"
}
```

## Exponential Backoff and HTTP Status Handling

Cronicorn automatically applies exponential backoff on failures and resets on success.

Built-in behavior (automatic):

- 2xx responses → Success, failure count resets to 0
- 4xx/5xx responses → Failure, backoff applies (baseline × 2^failures, max 32×)
- Timeouts → Treated as failure

Backoff multipliers:

| Failures | Multiplier | 5-min baseline becomes |
| -------- | ---------- | ---------------------- |
| 0        | 1×         | 5 minutes              |
| 1        | 2×         | 10 minutes             |
| 2        | 4×         | 20 minutes             |
| 3        | 8×         | 40 minutes             |
| 4        | 16×        | 80 minutes             |
| 5+       | 32× cap    | 160 minutes            |

```bash
# AI can override backoff during incidents:
curl -X POST https://api.cronicorn.com/api/endpoints/ep_xyz789/hints/interval \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "intervalMs": 30000,
    "ttlMinutes": 15,
    "reason": "Override backoff - actively monitoring failing endpoint"
  }'

# Manual reset to clear backoff
curl -X POST https://api.cronicorn.com/api/endpoints/ep_xyz789/reset-failures \
  -H "x-api-key: YOUR_API_KEY"
```

## Multi-Endpoint Recovery Workflow

Configure coordinated recovery automation where a health check endpoint triggers a recovery action endpoint.
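The recovery policy in this workflow is expressed in plain English inside the endpoint descriptions ("only run when the sibling reports needs_recovery, wait 5 minutes between attempts, max 3 attempts"). To make that policy concrete, here is the same decision written as explicit logic. This is illustrative only: in Cronicorn the AI derives this behavior from the description, so you do not write this code yourself.

```python
from typing import Optional

def should_trigger_recovery(sibling_health: dict, attempts: int,
                            minutes_since_last_attempt: Optional[float]) -> bool:
    """Decide whether the recovery endpoint should run now.

    Explicit rendering of the natural-language policy; illustration only.
    """
    if not sibling_health.get("needs_recovery", False):
        return False  # sibling is healthy: never run recovery
    if attempts >= 3:
        return False  # max 3 attempts, then pause
    if minutes_since_last_attempt is not None and minutes_since_last_attempt < 5:
        return False  # wait 5 minutes between attempts
    return True
```

Writing the policy out like this is also a useful way to check that your description text is unambiguous before handing it to the AI.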
```bash
# Create job for both endpoints (same job = sibling visibility)
curl -X POST https://api.cronicorn.com/api/jobs \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "name": "Service Recovery Automation" }'
# Returns: { "id": "job_abc123" }

# Add health-check endpoint
curl -X POST https://api.cronicorn.com/api/jobs/job_abc123/endpoints \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "health-check",
    "url": "https://api.example.com/health",
    "method": "GET",
    "baselineIntervalMs": 300000,
    "minIntervalMs": 30000,
    "description": "Monitors service health. When needs_recovery is true, the trigger-recovery sibling should run. Tighten to 30s during errors."
  }'

# Add recovery endpoint (sibling of health-check)
curl -X POST https://api.cronicorn.com/api/jobs/job_abc123/endpoints \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "trigger-recovery",
    "url": "https://api.example.com/admin/restart",
    "method": "POST",
    "baselineIntervalMs": 86400000,
    "minIntervalMs": 300000,
    "description": "Recovery action. Only run when health-check sibling shows needs_recovery=true. Wait 5 minutes between attempts. Max 3 attempts before pausing 1 hour."
  }'

# AI behavior:
# 1. health-check returns {"needs_recovery": true}
# 2. AI calls get_sibling_latest_responses() to see trigger-recovery
# 3. AI calls propose_next_time() on trigger-recovery for an immediate run
# 4. AI calls propose_interval(30000) on health-check for close monitoring
# 5. After recovery: AI calls clear_hints() on both → back to baseline
```

## Summary

Cronicorn is ideal for scenarios where static cron jobs fall short: monitoring systems that need adaptive frequency during incidents, data pipelines with volume-based polling, coordinated multi-endpoint workflows, and any situation where you adjust schedules manually based on metrics.
The platform excels at flash sale monitoring (automatic tightening during traffic surges), ETL pipeline coordination (dependent endpoints wait for upstream success), infrastructure auto-remediation (investigating and fixing issues before paging engineers), and competitive price monitoring (respecting rate limits while intensifying during competitor sales).

Integration patterns include using the REST API for programmatic control from CI/CD and scripts, the MCP Server for conversational management through AI assistants, and the Web UI for visual monitoring and configuration. All three interfaces share the same underlying data model and support identical configurations.

For production deployments, Cronicorn can be self-hosted via Docker Compose with your own reverse proxy for HTTPS, or used as a hosted service at cronicorn.com with GitHub OAuth authentication. The system is designed to be self-healing: AI hints expire automatically, endpoints continue on baseline schedules if AI becomes unavailable, and a single successful response resets all backoff logic.