Repository: https://github.com/daytonaio/daytona
# Daytona

Daytona is an open-source, secure, and elastic infrastructure platform for AI-generated code execution and agent workflows. It provides sandboxes: full composable computers with complete isolation, a dedicated kernel, filesystem, network stack, and allocated vCPU, RAM, and disk resources. Sandboxes spin up in under 90ms and can run any code in Python, TypeScript, and JavaScript, making them ideal for AI agent architectures that need consistent, predictable environments.

The platform offers comprehensive SDKs for Python, TypeScript, Go, Ruby, and Java, along with a REST API and CLI for programmatic control. Key capabilities include sandbox lifecycle management, filesystem operations, process and code execution, Git operations, Language Server Protocol (LSP) support, snapshots for persistent state, volumes for shared storage, and computer use automation for desktop interactions. Daytona can run as a managed service, self-hosted deployment, or hybrid setup with customer-managed compute.

## Daytona Client Initialization

The main entry point for interacting with the Daytona platform. Initializes the client with API credentials and optional configuration. Supports both API key and JWT token authentication, with configuration via explicit parameters or environment variables.

```python
from daytona import Daytona, DaytonaConfig

# Using environment variables (DAYTONA_API_KEY, DAYTONA_API_URL, DAYTONA_TARGET)
daytona = Daytona()

# Using explicit configuration
config = DaytonaConfig(
    api_key="your-api-key",
    api_url="https://app.daytona.io/api",
    target="us"
)
daytona = Daytona(config)

# TypeScript equivalent
# import { Daytona } from "@daytona/sdk";
# const daytona = new Daytona({ apiKey: "YOUR_API_KEY" });
```

## Create Sandbox

Creates a new sandbox environment with the specified configuration. Supports creating from snapshots or custom Docker images with resource allocation (CPU, memory, disk, GPU). Returns a fully initialized Sandbox instance ready for code execution.

```python
from daytona import Daytona, CreateSandboxFromSnapshotParams, CreateSandboxFromImageParams
from daytona.common.sandbox import Resources

daytona = Daytona()

# Create a default Python sandbox
sandbox = daytona.create()

# Create from snapshot with custom configuration
params = CreateSandboxFromSnapshotParams(
    language="python",
    snapshot="my-snapshot-id",
    env_vars={"DEBUG": "true", "API_KEY": "secret"},
    auto_stop_interval=60,       # Auto-stop after 60 minutes of inactivity
    auto_archive_interval=1440,  # Auto-archive after 24 hours stopped
    labels={"project": "ml-pipeline", "env": "development"}
)
sandbox = daytona.create(params, timeout=120)

# Create from Docker image with resources
from daytona import Image

image = Image.base("python:3.12-slim").pip_install(["numpy", "pandas", "scikit-learn"])
params = CreateSandboxFromImageParams(
    image=image,
    resources=Resources(cpu=4, memory=8, disk=50),
    language="python"
)
sandbox = daytona.create(
    params,
    timeout=300,
    on_snapshot_create_logs=lambda log: print(f"Build: {log}")
)
print(f"Sandbox ID: {sandbox.id}, State: {sandbox.state}")
```

## Execute Shell Commands

Executes shell commands within a sandbox environment. Returns the exit code, standard output, and execution artifacts. Supports working directory specification, environment variables, and timeout configuration.
```python
# Simple command execution
response = sandbox.process.exec("echo 'Hello from Daytona!'")
print(f"Output: {response.result}")
print(f"Exit code: {response.exit_code}")

# Command with working directory and environment
response = sandbox.process.exec(
    command="pip install requests && python -c 'import requests; print(requests.__version__)'",
    cwd="/home/daytona/project",
    env={"PIP_QUIET": "1"},
    timeout=120
)

# Complex multi-step workflow
commands = [
    "git clone https://github.com/user/repo.git /workspace",
    "cd /workspace && pip install -r requirements.txt",
    "cd /workspace && python main.py --config prod.yaml"
]
for cmd in commands:
    result = sandbox.process.exec(cmd, timeout=300)
    if result.exit_code != 0:
        print(f"Command failed: {result.result}")
        break
    print(f"✓ {cmd[:50]}...")
```

## Execute Code Directly

Runs code snippets directly in the sandbox using the appropriate language runtime (Python, TypeScript, JavaScript). Automatically detects and extracts matplotlib charts from Python execution for data visualization workflows.

```python
# Execute Python code
response = sandbox.process.code_run('''
import json
import math

data = {"pi": math.pi, "e": math.e, "sqrt2": math.sqrt(2)}
print(json.dumps(data, indent=2))
''')
print(response.result)

# Execute with command-line arguments and environment
from daytona import CodeRunParams

params = CodeRunParams(
    argv=["--verbose", "--output", "/tmp/result.json"],
    env={"LOG_LEVEL": "DEBUG"}
)
response = sandbox.process.code_run('''
import sys
import os
print(f"Arguments: {sys.argv[1:]}")
print(f"Log level: {os.environ.get('LOG_LEVEL')}")
''', params=params, timeout=60)

# Matplotlib chart extraction (Python)
response = sandbox.process.code_run('''
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
y = np.sin(x) * np.exp(-x / 10)

plt.figure(figsize=(10, 6))
plt.plot(x, y, 'b-', linewidth=2, label='Damped sine')
plt.title('Signal Analysis')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid(True)
plt.show()
''')

# Access extracted chart metadata
for chart in response.artifacts.charts:
    print(f"Chart type: {chart.type}, Title: {chart.title}")
```

## File System Operations

Comprehensive file system interface for managing files and directories within sandboxes. Supports upload, download, create, delete, move, search, and permission management, with streaming for large files.
```python
import json  # used below to parse downloaded JSON content

# Upload files
sandbox.fs.upload_file(b"Hello, World!", "workspace/hello.txt")
sandbox.fs.upload_file("local_data.csv", "workspace/data.csv")  # From local file

# Upload multiple files
from daytona import FileUpload

sandbox.fs.upload_files([
    FileUpload(source=b'{"config": "value"}', destination="config.json"),
    FileUpload(source="local_script.py", destination="scripts/main.py")
])

# Download files
content = sandbox.fs.download_file("workspace/results.json")
data = json.loads(content.decode('utf-8'))

# Download to local file (streaming for large files)
sandbox.fs.download_file("workspace/large_model.pkl", "local_model.pkl")

# Directory operations
sandbox.fs.create_folder("workspace/output", mode="755")
files = sandbox.fs.list_files("workspace")
for f in files:
    print(f"{'[DIR]' if f.is_dir else '[FILE]'} {f.name}: {f.size} bytes")

# Search and find
matches = sandbox.fs.find_files("workspace", "TODO:")  # grep-like search
for match in matches:
    print(f"{match.file}:{match.line}: {match.content.strip()}")

result = sandbox.fs.search_files("workspace", "*.py")  # glob pattern search
print(f"Found {len(result.files)} Python files")

# Replace in files
results = sandbox.fs.replace_in_files(
    files=["src/config.py", "src/main.py"],
    pattern="old_api_url",
    new_value="new_api_url"
)

# File info and permissions
info = sandbox.fs.get_file_info("workspace/script.sh")
print(f"Size: {info.size}, Mode: {info.mode}, Modified: {info.mod_time}")
sandbox.fs.set_file_permissions("workspace/script.sh", mode="755")
sandbox.fs.delete_file("workspace/temp", recursive=True)
```

## Git Operations

Full Git integration for repository management within sandboxes. Supports clone, commit, push, pull, branch management, and status inspection, with optional authentication for private repositories.

```python
# Clone a repository
sandbox.git.clone(
    url="https://github.com/daytonaio/daytona.git",
    path="workspace/daytona",
    branch="main"
)

# Clone private repository with authentication
sandbox.git.clone(
    url="https://github.com/company/private-repo.git",
    path="workspace/private",
    username="git-user",
    password="github_personal_access_token"
)

# Check repository status
status = sandbox.git.status("workspace/daytona")
print(f"Branch: {status.current_branch}")
print(f"Ahead: {status.ahead}, Behind: {status.behind}")
for file_status in status.file_status:
    print(f"  {file_status.staging} {file_status.path}")

# Branch operations
branches = sandbox.git.branches("workspace/daytona")
print(f"Branches: {branches.branches}")
sandbox.git.create_branch("workspace/daytona", "feature/new-feature")
sandbox.git.checkout_branch("workspace/daytona", "feature/new-feature")

# Stage, commit, and push
sandbox.git.add("workspace/daytona", ["README.md", "src/"])
response = sandbox.git.commit(
    path="workspace/daytona",
    message="Add new feature implementation",
    author="AI Agent",
    email="agent@daytona.io"
)
print(f"Commit SHA: {response.sha}")
sandbox.git.push(
    path="workspace/daytona",
    username="git-user",
    password="github_token"
)

# Pull latest changes
sandbox.git.pull("workspace/daytona", username="git-user", password="token")
```

## Sessions and Long-Running Processes

Background session management for maintaining state between commands. Sessions enable persistent environment setup, long-running processes, and interactive workflows with real-time log streaming.
```python
# Create and use a session
session_id = "data-pipeline"
sandbox.process.create_session(session_id)

# Execute commands that maintain state
from daytona import SessionExecuteRequest

sandbox.process.execute_session_command(
    session_id,
    SessionExecuteRequest(command="cd /workspace && export DATA_DIR=/data")
)
sandbox.process.execute_session_command(
    session_id,
    SessionExecuteRequest(command="python setup_environment.py")
)

# Run async command (non-blocking)
result = sandbox.process.execute_session_command(
    session_id,
    SessionExecuteRequest(command="python long_running_job.py", run_async=True)
)
print(f"Started command: {result.cmd_id}")

# Check command status
cmd = sandbox.process.get_session_command(session_id, result.cmd_id)
print(f"Exit code: {cmd.exit_code}")

# Get command logs
logs = sandbox.process.get_session_command_logs(session_id, result.cmd_id)
print(f"Output: {logs.stdout}")
print(f"Errors: {logs.stderr}")

# Stream logs in real-time (async)
await sandbox.process.get_session_command_logs_async(
    session_id,
    result.cmd_id,
    on_stdout=lambda log: print(f"[OUT] {log}"),
    on_stderr=lambda log: print(f"[ERR] {log}")
)

# List and cleanup sessions
sessions = sandbox.process.list_sessions()
for session in sessions:
    print(f"Session: {session.session_id}, Commands: {len(session.commands)}")

sandbox.process.delete_session(session_id)
```

## PTY (Pseudo-Terminal) Sessions

Interactive terminal sessions for real-time command execution with full TTY support. Enables interactive applications, shell sessions, and bidirectional communication with running processes.

```python
from daytona import PtySize

# Create a PTY session
pty = sandbox.process.create_pty_session(
    id="interactive-shell",
    cwd="/workspace",
    envs={"TERM": "xterm-256color"},
    pty_size=PtySize(rows=24, cols=80)
)

# Send input to the terminal
pty.send("echo 'Hello from PTY!'\n")
pty.send("python3 -c 'name = input(\"Name: \"); print(f\"Hello, {name}!\")'\n")
pty.send("AI Agent\n")

# Receive output (non-blocking)
output = pty.read_all()
print(output)

# Resize terminal
pty.resize(PtySize(rows=40, cols=120))

# List PTY sessions
sessions = sandbox.process.list_pty_sessions()
for session in sessions:
    print(f"PTY: {session.id}, Active: {session.active}")

# Get session info
info = sandbox.process.get_pty_session_info("interactive-shell")
print(f"CWD: {info.cwd}, Size: {info.cols}x{info.rows}")

# Kill the PTY session
pty.kill()
# Or: sandbox.process.kill_pty_session("interactive-shell")
```

## Sandbox Lifecycle Management

Complete lifecycle control for sandbox instances, including start, stop, archive, resize, and deletion. Supports automatic lifecycle policies and state monitoring.
```python
# Get existing sandbox
sandbox = daytona.get("my-sandbox-id-or-name")
print(f"State: {sandbox.state}, CPU: {sandbox.cpu}, Memory: {sandbox.memory}GiB")

# List sandboxes with filters
result = daytona.list(
    labels={"project": "ml-pipeline", "env": "production"},
    page=1,
    limit=20
)
for sbx in result.items:
    print(f"{sbx.name}: {sbx.state}")

# Stop and start
sandbox.stop(timeout=60)
print(f"Stopped: {sandbox.state}")
sandbox.start(timeout=120)
print(f"Started: {sandbox.state}")

# Archive for long-term storage (cost-effective)
sandbox.stop()
sandbox.archive()
print(f"Archived: {sandbox.state}")

# Resize resources (hot resize for CPU/memory increase)
from daytona.common.sandbox import Resources

sandbox.resize(Resources(cpu=8, memory=16), timeout=120)

# Cold resize (requires stopped sandbox for disk changes)
sandbox.stop()
sandbox.resize(Resources(cpu=4, memory=8, disk=100))
sandbox.start()

# Set lifecycle policies
sandbox.set_autostop_interval(30)         # Auto-stop after 30 min idle
sandbox.set_auto_archive_interval(1440)   # Auto-archive after 24 hours stopped
sandbox.set_auto_delete_interval(10080)   # Auto-delete after 7 days stopped

# Delete sandbox
daytona.delete(sandbox, timeout=60)
```

## Snapshots

Snapshot management for creating reusable sandbox templates with pre-configured environments, packages, and state. Enables fast sandbox creation from known-good configurations.

```python
from daytona import Image, CreateSnapshotParams
from daytona.common.sandbox import Resources

# Create a snapshot from a custom image
image = (
    Image.debian_slim("3.12")
    .pip_install(["torch", "transformers", "datasets"])
    .apt_install(["git", "curl", "build-essential"])
    .env({"HF_HOME": "/data/huggingface"})
    .run("mkdir -p /data/huggingface")
)

snapshot = daytona.snapshot.create(
    CreateSnapshotParams(
        name="ml-training-env",
        image=image,
        resources=Resources(cpu=4, memory=16, disk=100),
        entrypoint=["python", "-m", "http.server", "8080"]
    ),
    on_logs=lambda log: print(f"[BUILD] {log}"),
    timeout=600
)
print(f"Created snapshot: {snapshot.name}, State: {snapshot.state}")

# List snapshots
result = daytona.snapshot.list(page=1, limit=10)
for snap in result.items:
    print(f"{snap.name}: {snap.state} ({snap.image_name})")

# Get specific snapshot
snapshot = daytona.snapshot.get("ml-training-env")
print(f"ID: {snapshot.id}, Image: {snapshot.image_name}")

# Create sandbox from snapshot
sandbox = daytona.create(CreateSandboxFromSnapshotParams(snapshot="ml-training-env"))

# Delete snapshot
daytona.snapshot.delete(snapshot)
```

## Volumes

Persistent volume management for shared storage across sandbox instances. Volumes persist independently of the sandbox lifecycle, enabling data sharing and persistence.
```python
# Create a volume
volume = daytona.volume.create("shared-data")
print(f"Volume: {volume.name}, ID: {volume.id}, State: {volume.state}")

# Get or create volume
volume = daytona.volume.get("model-cache", create=True)

# List all volumes
volumes = daytona.volume.list()
for vol in volumes:
    print(f"{vol.name}: {vol.state}")

# Mount volume to sandbox
from daytona import CreateSandboxFromSnapshotParams, VolumeMount

params = CreateSandboxFromSnapshotParams(
    language="python",
    volumes=[
        VolumeMount(volume_id=volume.id, mount_path="/data/shared"),
        VolumeMount(volume_id="another-vol-id", mount_path="/data/models", subpath="v2")
    ]
)
sandbox = daytona.create(params)

# Use the mounted volume
sandbox.process.exec("echo 'Persistent data' > /data/shared/test.txt")

# Data persists across sandbox restarts
sandbox.stop()
sandbox.start()
response = sandbox.process.exec("cat /data/shared/test.txt")
print(response.result)  # "Persistent data"

# Delete volume (ensure no sandboxes are using it)
daytona.volume.delete(volume)
```

## REST API

Direct REST API access for sandbox operations when SDKs are not available or for custom integrations. All endpoints require Bearer token authentication with your API key.

```bash
# Create a sandbox
curl -X POST 'https://app.daytona.io/api/sandbox' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "snapshot": "python-base",
    "name": "my-sandbox",
    "env": {"DEBUG": "true"},
    "labels": {"project": "demo"},
    "autoStopInterval": 60
  }'

# Get sandbox details
curl 'https://app.daytona.io/api/sandbox/my-sandbox' \
  -H 'Authorization: Bearer YOUR_API_KEY'

# List sandboxes with filters
curl 'https://app.daytona.io/api/sandbox/paginated?page=1&limit=10&labels={"project":"demo"}' \
  -H 'Authorization: Bearer YOUR_API_KEY'

# Start a sandbox
curl -X POST 'https://app.daytona.io/api/sandbox/my-sandbox/start' \
  -H 'Authorization: Bearer YOUR_API_KEY'

# Stop a sandbox
curl -X POST 'https://app.daytona.io/api/sandbox/my-sandbox/stop' \
  -H 'Authorization: Bearer YOUR_API_KEY'

# Delete a sandbox
curl -X DELETE 'https://app.daytona.io/api/sandbox/my-sandbox' \
  -H 'Authorization: Bearer YOUR_API_KEY'

# Get preview URL for a port
curl 'https://app.daytona.io/api/sandbox/my-sandbox/ports/8080/preview-url' \
  -H 'Authorization: Bearer YOUR_API_KEY'

# Set auto-stop interval
curl -X POST 'https://app.daytona.io/api/sandbox/my-sandbox/autostop/30' \
  -H 'Authorization: Bearer YOUR_API_KEY'

# Archive sandbox
curl -X POST 'https://app.daytona.io/api/sandbox/my-sandbox/archive' \
  -H 'Authorization: Bearer YOUR_API_KEY'
```

## Language Server Protocol (LSP)

LSP server integration for code intelligence features, including completions, diagnostics, hover information, and more. Supports Python and other language servers within sandboxes.
```python
from daytona import LspLanguageId

# Create LSP server for a project
lsp = sandbox.create_lsp_server(
    language_id=LspLanguageId.PYTHON,
    path_to_project="/workspace/my-project"
)

# Get completions at a position
completions = lsp.completions(
    path="/workspace/my-project/main.py",
    position={"line": 10, "character": 15}
)
for item in completions:
    print(f"{item.label}: {item.kind}")

# Get hover information
hover = lsp.hover(
    path="/workspace/my-project/main.py",
    position={"line": 5, "character": 10}
)
print(hover.contents)

# Get diagnostics
diagnostics = lsp.diagnostics("/workspace/my-project/main.py")
for diag in diagnostics:
    print(f"[{diag.severity}] Line {diag.range.start.line}: {diag.message}")
```

## Preview URLs and SSH Access

Access running services and terminals within sandboxes through preview URLs and SSH connections. Supports both public and authenticated access patterns.

```python
# Get preview URL for a web service running on port 3000
preview = sandbox.get_preview_link(3000)
print(f"URL: {preview.url}")
print(f"Token: {preview.token}")  # For private sandboxes

# Create signed preview URL (time-limited)
signed = sandbox.create_signed_preview_url(port=8080, expires_in_seconds=3600)
print(f"Signed URL: {signed.url}")

# Expire a signed URL early
sandbox.expire_signed_preview_url(port=8080, token=signed.token)

# Create SSH access
ssh = sandbox.create_ssh_access(expires_in_minutes=60)
print(f"SSH Command: ssh {ssh.username}@{ssh.host} -p {ssh.port}")
print(f"Token: {ssh.token}")

# Validate SSH token
validation = sandbox.validate_ssh_access(ssh.token)
print(f"Valid: {validation.valid}, Sandbox: {validation.sandbox_id}")

# Revoke SSH access
sandbox.revoke_ssh_access(ssh.token)
```

## Image Builder

Declarative image definition for creating custom sandbox environments. Build images with package installations, environment configuration, and file additions using a fluent API.

```python
from daytona import Image

# Build a custom Python ML image
image = (
    Image.debian_slim("3.12")
    .apt_install(["git", "curl", "build-essential", "libffi-dev"])
    .pip_install([
        "numpy>=1.24.0",
        "pandas>=2.0.0",
        "scikit-learn>=1.3.0",
        "torch>=2.0.0",
        "transformers>=4.30.0"
    ])
    .env({
        "PYTHONUNBUFFERED": "1",
        "HF_HOME": "/data/huggingface",
        "TORCH_HOME": "/data/torch"
    })
    .run("mkdir -p /data/huggingface /data/torch /workspace")
    .workdir("/workspace")
)

# Use with local context files
image_with_context = (
    Image.base("python:3.12")
    .add("./requirements.txt", "/app/requirements.txt")
    .add("./src", "/app/src")
    .run("pip install -r /app/requirements.txt")
    .workdir("/app")
    .entrypoint(["python", "src/main.py"])
)

# Create sandbox from image
sandbox = daytona.create(CreateSandboxFromImageParams(
    image=image,
    resources=Resources(cpu=4, memory=16, gpu=1)
))

# Or create a reusable snapshot
snapshot = daytona.snapshot.create(CreateSnapshotParams(
    name="ml-training-v2",
    image=image,
    resources=Resources(cpu=4, memory=16)
))
```

## TypeScript SDK

The TypeScript SDK provides equivalent functionality for Node.js and browser environments, with full async/await support.
```typescript
import { Daytona, SandboxState, Image } from "@daytona/sdk";

// Initialize client
const daytona = new Daytona({ apiKey: process.env.DAYTONA_API_KEY });

// Create sandbox
const sandbox = await daytona.create({
  language: "typescript",
  envVars: { NODE_ENV: "development" },
  autoStopInterval: 60,
});

// Execute code
const response = await sandbox.process.codeRun(`
  const sum = (a: number, b: number) => a + b;
  console.log(sum(10, 20));
`);
console.log(response.result); // 30

// File operations
await sandbox.fs.uploadFile(
  Buffer.from('{"key": "value"}'),
  "config.json"
);
const content = await sandbox.fs.downloadFile("config.json");
console.log(content.toString());

// Execute shell commands
const result = await sandbox.process.exec("npm install && npm test", {
  cwd: "/workspace/project",
  timeout: 300,
});

// Lifecycle management
await sandbox.stop();
await sandbox.start();
await daytona.delete(sandbox);

// Using async disposal
await using sandbox2 = await daytona.create();
// Sandbox automatically cleaned up when scope exits
```

## Summary

Daytona provides a complete infrastructure platform for secure code execution in isolated sandbox environments. The primary use cases include AI agent code execution, where generated code needs safe, isolated runtime environments; development environment orchestration for consistent, reproducible setups; CI/CD pipeline execution with full isolation; data science workflows with persistent state through snapshots and volumes; and multi-tenant application hosting with resource isolation.

Integration patterns typically involve initializing the Daytona client with API credentials, creating sandboxes from snapshots or custom images, executing code or commands through the process interface, managing files through the filesystem API, and cleaning up resources when complete. For long-running workflows, sessions provide state persistence between commands, while volumes enable data sharing across sandbox instances.

The platform supports webhook integrations for lifecycle events and OpenTelemetry for observability, and it can be deployed as a managed service, self-hosted, or in hybrid configurations with customer-managed compute runners.
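Tying the pieces above together, here is a minimal end-to-end sketch of that integration pattern using only the Python SDK calls shown earlier (`Daytona()`, `daytona.create()`, `sandbox.fs.upload_file()` / `download_file()`, `sandbox.process.code_run()`, and `daytona.delete()`). The file path, file contents, and executed snippet are illustrative placeholders, and error handling is reduced to a `try`/`finally` cleanup.

```python
from daytona import Daytona

# Assumes DAYTONA_API_KEY (and optionally DAYTONA_API_URL / DAYTONA_TARGET)
# are set in the environment, as in the client initialization example above.
daytona = Daytona()

# 1. Create an isolated sandbox with default settings
sandbox = daytona.create()

try:
    # 2. Stage an input file in the sandbox (placeholder path and content)
    sandbox.fs.upload_file(b"2,3,5,7,11\n", "workspace/numbers.csv")

    # 3. Execute generated code inside the sandbox and read its output
    response = sandbox.process.code_run('''
values = [2, 3, 5, 7, 11]
print(sum(values))
''')
    print(f"Result: {response.result}")

    # 4. Pull artifacts back out (here, simply the staged file)
    content = sandbox.fs.download_file("workspace/numbers.csv")
    print(content.decode())
finally:
    # 5. Release the sandbox once the workflow is complete
    daytona.delete(sandbox)
```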