# Arcade Documentation Project
## Introduction
Arcade Documentation is a Next.js-based documentation website for Arcade.dev - an AI tool-calling platform that enables AI agents to take real-world actions through authenticated integrations. The platform provides tools for interacting with services like Gmail, Slack, GitHub, and dozens of other APIs, allowing AI agents to send emails, create issues, post messages, and perform other actions on behalf of users. The documentation serves as a comprehensive guide for developers building AI agent tools and integrating Arcade into their applications.
The documentation site is built with Nextra (a Next.js documentation framework) and features advanced capabilities including automated glossary linking, multi-language support (English, Spanish, Portuguese), LLM-optimized content generation, and automated MCP (Model Context Protocol) server documentation. It uses MDX for content authoring with custom React components, includes framework integration examples for CrewAI, LangChain, and other popular AI frameworks, and provides both human-readable guides and programmatic access through a raw markdown API.
## APIs and Key Functions
### Glossary Auto-Linking System
The documentation features an intelligent glossary system that automatically detects and links technical terms throughout all documentation pages. The system parses glossary definitions from MDX files and uses a custom Remark plugin to inject interactive glossary components wherever terms appear.
```typescript
// lib/remark-glossary.ts - Remark plugin configuration
import { remarkGlossary } from './lib/remark-glossary';

// In next.config.ts
const withNextra = nextra({
  defaultShowCopyCode: true,
  codeHighlight: true,
  mdxOptions: {
    remarkPlugins: [
      [remarkGlossary, {
        glossaryPath: "./app/en/home/glossary/page.mdx",
        maxOccurrencesPerPage: 100,
        caseSensitive: false
      }],
    ],
  },
});

// lib/glossary-parser.ts - Parse glossary terms
import fs from 'fs';

export type GlossaryTerm = {
  term: string;
  aliases: string[];
  definition: string;
  section: string;
  link: string;
  isSubTerm: boolean;
  parentTerm?: string;
};

export function parseGlossary(glossaryPath: string): GlossaryTerm[] {
  const content = fs.readFileSync(glossaryPath, 'utf-8');
  const terms: GlossaryTerm[] = [];
  // Parses MDX headings and extracts term definitions
  // Handles aliases in format: "Term (Alias1, Alias2)"
  // Returns sorted list for longest-first matching
  return sortTermsByLength(terms);
}

// Automatically transforms markdown like:
//   "An AI agent uses tools to complete tasks"
// so that "AI agent" and "tools" are wrapped in interactive
// glossary components linking to their definitions
```
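Why longest-first matching matters: without it, a short term like "agent" would claim the text before the longer "AI agent" gets a chance. A minimal Python sketch of that matching strategy, for illustration only (the `[[...]]` markers and the term list are stand-ins for the real glossary components and parsed glossary):

```python
import re

def link_terms(text: str, terms: list[str]) -> str:
    """Mark glossary terms in text, preferring the longest match."""
    # Longest alternative first: the regex engine tries "AI agent"
    # before "agent" at each position, so the longer term wins.
    ordered = sorted(terms, key=len, reverse=True)
    pattern = re.compile(
        r"\b(?:" + "|".join(re.escape(t) for t in ordered) + r")\b",
        re.IGNORECASE,
    )
    # Wrap each match in a placeholder marker.
    return pattern.sub(lambda m: f"[[{m.group(0)}]]", text)

print(link_terms("An AI agent uses tools to complete tasks",
                 ["agent", "AI agent", "tools"]))
# -> An [[AI agent]] uses [[tools]] to complete tasks
```

The real plugin walks the MDX AST rather than raw strings, but the precedence rule is the same.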
### LLMs.txt Generation Plugin
Generates an llms.txt file following the llms.txt specification to help LLMs efficiently navigate and understand the documentation structure. The plugin uses OpenAI to generate summaries of each page and organizes them into a structured format.
```typescript
// scripts/generate-llmstxt.ts - Generate LLM-optimized navigation
import fs from 'fs/promises';
import OpenAI from 'openai';
import fg from 'fast-glob';

type PageMetadata = {
  path: string;
  url: string;
};

type Section = {
  name: string;
  pages: Array<{ title: string; url: string; description: string }>;
};

async function discoverPages(): Promise<PageMetadata[]> {
  const files = await fg(['app/en/**/page.mdx'], {
    ignore: ['**/node_modules/**'],
  });
  return files.map(path => ({
    path,
    url: path
      .replace('app', '')
      .replace('/page.mdx', '')
      .replace(/\/index$/, ''),
  }));
}

async function summarizePage(page: PageMetadata): Promise<{ title: string; description: string }> {
  const content = await fs.readFile(page.path, 'utf-8');
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: 'Generate a concise title and one-sentence description for this documentation page.',
      },
      { role: 'user', content },
    ],
    response_format: { type: 'json_object' },
  });
  return JSON.parse(response.choices[0].message.content ?? '{}');
}

function generateLlmsTxt(sections: Section[]): string {
  let output = '# Arcade Documentation\n\n';
  output += '> Documentation for Arcade.dev - AI tool-calling platform\n\n';
  for (const section of sections) {
    output += `## ${section.name}\n\n`;
    for (const page of section.pages) {
      output += `- [${page.title}](${page.url}): ${page.description}\n`;
    }
    output += '\n';
  }
  return output;
}

// Usage:
//   pnpm llmstxt
// Or automatically during build via next-plugin-llmstxt.ts
```
### Markdown API Route
Provides programmatic access to documentation pages in raw markdown format, enabling external tools and AI systems to retrieve documentation content without HTML rendering.
```typescript
// app/api/markdown/[[...slug]]/route.ts
import { NextRequest, NextResponse } from 'next/server';
import fs from 'fs/promises';
import path from 'path';

export async function GET(
  request: NextRequest,
  context: { params: Promise<{ slug?: string[] }> }
) {
  const { slug = [] } = await context.params;
  if (slug.length === 0) {
    return new NextResponse('Not Found', { status: 404 });
  }
  // Convert URL like /en/home/quickstart.md to a file path
  const filePath = path.join(
    process.cwd(),
    'app',
    ...slug.slice(0, -1),
    slug[slug.length - 1].replace(/\.md$/, ''),
    'page.mdx'
  );
  try {
    const content = await fs.readFile(filePath, 'utf-8');
    return new NextResponse(content, {
      status: 200,
      headers: {
        'Content-Type': 'text/plain; charset=utf-8',
        'Cache-Control': 'public, max-age=3600',
      },
    });
  } catch (error) {
    return new NextResponse('Not Found', { status: 404 });
  }
}

// Example usage:
//   curl https://docs.arcade.dev/en/home/quickstart.md
//     -> raw markdown content of the quickstart page
//   curl https://docs.arcade.dev/en/mcp-servers/productivity/gmail.md
//     -> raw markdown for the Gmail MCP server documentation
```
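The core of the route is the slug-to-file mapping: strip the `.md` suffix from the last path segment and resolve the `page.mdx` source that backs it. The same convention, mirrored as a standalone Python helper for illustration (the function name is mine, not part of the codebase):

```python
from pathlib import PurePosixPath

def markdown_slug_to_source(url_path: str) -> str:
    """Map a .md URL path to the MDX source file that backs it."""
    parts = PurePosixPath(url_path.lstrip("/")).parts  # e.g. ('en', 'home', 'quickstart.md')
    *dirs, last = parts
    page_dir = last.removesuffix(".md")                # strip only the trailing .md
    return str(PurePosixPath("app", *dirs, page_dir, "page.mdx"))

print(markdown_slug_to_source("/en/home/quickstart.md"))
# -> app/en/home/quickstart/page.mdx
```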
### Internationalization Proxy Middleware
Handles automatic locale detection, URL routing, and redirects for the multi-language documentation site. Detects user language from browser settings and cookies, then routes them to the appropriate localized content.
```typescript
// proxy.ts - Middleware for i18n routing
import { NextRequest, NextResponse } from 'next/server';

const i18n = {
  defaultLocale: 'en',
  locales: ['en', 'es', 'pt'],
};

function parseAcceptLanguageHeader(acceptLanguage: string): string[] {
  return acceptLanguage
    .split(',')
    .map(lang => {
      const [locale, q = 'q=1'] = lang.trim().split(';');
      const quality = parseFloat(q.split('=')[1]);
      return { locale: locale.toLowerCase(), quality };
    })
    .sort((a, b) => b.quality - a.quality)
    .map(({ locale }) => locale.split('-')[0]);
}

function getPreferredLocale(request: NextRequest): string {
  const cookieLocale = request.cookies.get('NEXT_LOCALE')?.value;
  if (cookieLocale && i18n.locales.includes(cookieLocale)) {
    return cookieLocale;
  }
  const acceptLanguage = request.headers.get('accept-language');
  if (acceptLanguage) {
    const browserLocales = parseAcceptLanguageHeader(acceptLanguage);
    for (const locale of browserLocales) {
      if (i18n.locales.includes(locale)) {
        return locale;
      }
    }
  }
  return i18n.defaultLocale;
}

function pathnameIsMissingLocale(pathname: string): boolean {
  return i18n.locales.every(
    locale => !pathname.startsWith(`/${locale}/`) && pathname !== `/${locale}`
  );
}

export function proxy(request: NextRequest) {
  const { pathname } = request.nextUrl;
  // Handle .md requests - rewrite to API route
  if (pathname.endsWith('.md')) {
    const mdPath = pathname.replace(/\.md$/, '');
    return NextResponse.rewrite(new URL(`/api/markdown${mdPath}`, request.url));
  }
  // Handle missing locale prefix
  if (pathnameIsMissingLocale(pathname)) {
    const locale = getPreferredLocale(request);
    return NextResponse.redirect(
      new URL(`/${locale}${pathname}`, request.url)
    );
  }
  // Handle legacy redirects
  if (pathname.includes('/toolkits')) {
    const newPath = pathname.replace('/toolkits', '/mcp-servers');
    return NextResponse.redirect(new URL(newPath, request.url), 301);
  }
  return NextResponse.next();
}

// Usage examples:
//   /home -> redirects to /en/home (if browser prefers English)
//   /home -> redirects to /es/home (if browser prefers Spanish)
//   /en/home/quickstart.md -> rewrites to /api/markdown/en/home/quickstart
```
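The negotiation order above (cookie, then `Accept-Language`, then default) hinges on honoring the header's quality values before falling back. A self-contained Python sketch of just the header negotiation, for illustration (the function name is mine):

```python
def negotiate_locale(accept_language: str, supported: list[str], default: str = "en") -> str:
    """Pick the best supported language from an Accept-Language header."""
    candidates = []
    for part in accept_language.split(","):
        locale, _, q = part.strip().partition(";")
        # "pt;q=0.9" -> quality 0.9; no q-value means quality 1.0
        quality = float(q.partition("=")[2]) if q else 1.0
        # "pt-BR" -> "pt": compare only the primary language subtag
        candidates.append((quality, locale.lower().split("-")[0]))
    # Highest quality first; first supported language wins.
    for _, lang in sorted(candidates, key=lambda c: -c[0]):
        if lang in supported:
            return lang
    return default

print(negotiate_locale("pt-BR,pt;q=0.9,en;q=0.8", ["en", "es", "pt"]))
# -> pt
```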
### MCP Server Documentation Generator
Python-based automated documentation generator that introspects MCP server Python packages and creates comprehensive MDX documentation with AI-generated code examples. Discovers tools using the @tool decorator, extracts schemas, and generates multi-language examples.
```python
# make_toolkit_docs/__main__.py - Interactive CLI for generating docs
from rich.console import Console
from InquirerPy import inquirer
from openai import OpenAI
import subprocess
import json

def generate_mcp_server_docs(
    console: Console,
    toolkit_name: str,
    toolkit_dir: str,
    docs_section: str,
    docs_dir: str,
    openai_model: str,
    openai_api_key: str | None = None,
    tool_call_examples: bool = True,
    debug: bool = False,
    max_concurrency: int = 5,
    tools: list | None = None,
) -> bool:
    """Generate comprehensive MCP server documentation with examples."""
    # Install the package
    console.print(f"[cyan]Installing {toolkit_name}...[/cyan]")
    subprocess.run(["uv", "pip", "install", "-e", toolkit_dir], check=True)

    # Introspect tools using the @tool decorator
    console.print(f"[cyan]Discovering tools in {toolkit_name}...[/cyan]")
    discovered_tools = introspect_toolkit(toolkit_name)
    if tools:
        discovered_tools = [t for t in discovered_tools if t['name'] in tools]

    # Generate documentation with OpenAI
    client = OpenAI(api_key=openai_api_key)
    for tool in discovered_tools:
        console.print(f"[cyan]Generating examples for {tool['name']}...[/cyan]")
        # Generate Python example
        python_example = client.chat.completions.create(
            model=openai_model,
            messages=[
                {"role": "system", "content": "Generate a Python code example for this tool."},
                {"role": "user", "content": json.dumps(tool['schema'])},
            ],
        ).choices[0].message.content
        # Generate JavaScript example
        js_example = client.chat.completions.create(
            model=openai_model,
            messages=[
                {"role": "system", "content": "Generate a JavaScript code example for this tool."},
                {"role": "user", "content": json.dumps(tool['schema'])},
            ],
        ).choices[0].message.content
        tool['examples'] = {
            'python': python_example,
            'javascript': js_example,
        }

    # Build MDX documentation
    mdx_content = build_mdx_documentation(toolkit_name, discovered_tools, docs_section)

    # Save to docs directory
    output_path = f"{docs_dir}/{docs_section}/{toolkit_name.lower()}/page.mdx"
    with open(output_path, 'w') as f:
        f.write(mdx_content)
    console.print(f"[green]✓ Documentation saved to {output_path}[/green]")
    return True

def run() -> None:
    """Interactive CLI for generating MCP server documentation."""
    console = Console()
    console.print("[bold cyan]MCP Server Documentation Generator[/bold cyan]")

    # Discover available toolkits
    toolkit_dirs = discover_toolkit_directories()

    # Interactive prompts
    toolkit_choice = inquirer.select(
        message="Select a toolkit to document:",
        choices=toolkit_dirs,
    ).execute()
    docs_section = inquirer.select(
        message="Select documentation section:",
        choices=["productivity", "development", "data", "communication"],
    ).execute()
    generate_examples = inquirer.confirm(
        message="Generate code examples with OpenAI?",
        default=True,
    ).execute()

    # Generate documentation
    success = generate_mcp_server_docs(
        console=console,
        toolkit_name=toolkit_choice['name'],
        toolkit_dir=toolkit_choice['path'],
        docs_section=docs_section,
        docs_dir="./app/en/mcp-servers",
        openai_model="gpt-4o",
        tool_call_examples=generate_examples,
    )
    if success:
        console.print("[bold green]✓ Documentation generated successfully![/bold green]")

# Usage:
#   make mcp-server-docs
#   Or: cd make_toolkit_docs && uv sync && uv run python __main__.py
# Example output structure:
#   app/en/mcp-servers/productivity/gmail/page.mdx
#     - Tool list with descriptions
#     - Installation instructions
#     - Configuration examples
#     - Python and JavaScript code examples for each tool
#     - Authentication setup guides
```
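The `introspect_toolkit` helper above relies on the `@tool` decorator registering each function's schema at import time. A minimal, self-contained sketch of how decorator-based discovery can work; the registry, schema shape, and example tool here are illustrative, not the actual arcade internals:

```python
import inspect

# Module-level registry the decorator appends to at import time.
_TOOL_REGISTRY: list[dict] = []

def tool(func):
    """Record the decorated function's name, docstring, and parameters."""
    sig = inspect.signature(func)
    _TOOL_REGISTRY.append({
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": list(sig.parameters),
    })
    return func  # leave the function itself unchanged

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    return "sent"

def introspect_toolkit() -> list[dict]:
    """Return every tool registered via the decorator."""
    return list(_TOOL_REGISTRY)

print([t["name"] for t in introspect_toolkit()])
# -> ['send_email']
```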
### Arcade Tool Execution API
Core Python SDK for executing Arcade tools, handling authentication, and getting formatted tool definitions for various AI frameworks. Supports direct tool calling and integration with OpenAI, Anthropic, and other LLM providers.
```python
# examples/code/home/use-tools/call-tools-directly/quickstart.py
import json

from arcadepy import Arcade

# Initialize client
client = Arcade(api_key="arc_abc123xyz")
user_id = "user_12345"

# Execute a simple tool (no auth required)
response = client.tools.execute(
    tool_name="Math.Sqrt",
    input={"a": "625"},
    user_id=user_id,
)
print(f"Result: {response.output.value}")
# Output: Result: 25

# Execute an authenticated tool (requires user authorization)
# Step 1: Start authorization flow
auth_response = client.tools.authorize(
    tool_name="GitHub.SetStarred",
    user_id=user_id,
)
if auth_response.status != "completed":
    print(f"Authorize at: {auth_response.url}")
    # Wait for user to complete OAuth flow
    client.auth.wait_for_completion(auth_response.id)

# Step 2: Execute the tool
response = client.tools.execute(
    tool_name="GitHub.SetStarred",
    input={
        "owner": "ArcadeAI",
        "name": "arcade-ai",
        "starred": True,
    },
    user_id=user_id,
)
print(f"Status: {response.status}")
# Output: Status: completed

# Get formatted tool definitions for OpenAI
all_tools = list(client.tools.formatted.list(format="openai"))
print(f"Available tools: {len(all_tools)}")

# Get specific tools for a task
gmail_tools = list(client.tools.formatted.list(
    format="openai",
    tool_name=["Gmail.ListEmails", "Gmail.SendEmail", "Gmail.SearchEmails"]
))

# Use with OpenAI
from openai import OpenAI

openai_client = OpenAI()
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Get my 5 most recent emails and summarize them"}
    ],
    tools=gmail_tools,
)

# Execute tool calls returned by OpenAI
for tool_call in response.choices[0].message.tool_calls:
    result = client.tools.execute(
        tool_name=tool_call.function.name,
        input=json.loads(tool_call.function.arguments),
        user_id=user_id,
    )
    print(result.output.value)
```
### Building Custom MCP Servers
Framework for building custom MCP servers with authentication, secrets management, and tool definitions. Supports OAuth integrations, API key handling, and context management for tools.
```python
# examples/code/home/build-tools/create-a-mcp-server/quickstart.py
import sys
from typing import Annotated

import httpx
from arcade_mcp_server import Context, MCPApp
from arcade_mcp_server.auth import Reddit, OAuth2Config

# Initialize MCP server
app = MCPApp(
    name="my_custom_server",
    version="1.0.0",
    log_level="DEBUG"
)

# Simple tool (no auth required)
@app.tool
def greet(name: Annotated[str, "The name of the person to greet"]) -> str:
    """Greet a person by name."""
    return f"Hello, {name}!"

# Tool with secrets (API keys, connection strings, etc.)
@app.tool(requires_secrets=["DATABASE_URL", "API_KEY"])
def query_database(
    context: Context,
    query: Annotated[str, "SQL query to execute"]
) -> list[dict]:
    """Execute a database query using stored credentials."""
    db_url = context.get_secret("DATABASE_URL")
    api_key = context.get_secret("API_KEY")
    # Use secrets to connect and execute query
    results = execute_query(db_url, query, api_key)
    return results

# Tool with OAuth authentication
@app.tool(requires_auth=Reddit(scopes=["read", "identity"]))
async def get_reddit_posts(
    context: Context,
    subreddit: Annotated[str, "The subreddit name"]
) -> dict:
    """Fetch posts from a specific subreddit."""
    oauth_token = context.get_auth_token_or_empty()
    headers = {
        "Authorization": f"Bearer {oauth_token}",
        "User-Agent": "my-app/1.0"
    }
    async with httpx.AsyncClient() as http:
        response = await http.get(
            f"https://oauth.reddit.com/r/{subreddit}/hot",
            headers=headers
        )
    return response.json()

# Custom OAuth provider
custom_auth = OAuth2Config(
    provider_name="CustomAPI",
    authorization_url="https://api.example.com/oauth/authorize",
    token_url="https://api.example.com/oauth/token",
    scopes=["read:data", "write:data"]
)

@app.tool(requires_auth=custom_auth)
async def custom_api_call(
    context: Context,
    endpoint: Annotated[str, "API endpoint to call"]
) -> dict:
    """Make an authenticated call to custom API."""
    token = context.get_auth_token_or_empty()
    headers = {"Authorization": f"Bearer {token}"}
    async with httpx.AsyncClient() as http:
        response = await http.get(f"https://api.example.com/{endpoint}", headers=headers)
    return response.json()

# Run the server
if __name__ == "__main__":
    transport = sys.argv[1] if len(sys.argv) > 1 else "stdio"
    app.run(
        transport=transport,
        host="127.0.0.1",
        port=8000
    )

# Usage:
#   stdio mode (for Claude Desktop, etc.): python my_server.py stdio
#   HTTP mode (for web integrations):      python my_server.py http
# Configuration in Claude Desktop:
#   {
#     "mcpServers": {
#       "my_custom_server": {
#         "command": "python",
#         "args": ["/path/to/my_server.py", "stdio"],
#         "env": {
#           "DATABASE_URL": "postgresql://...",
#           "API_KEY": "sk_..."
#         }
#       }
#     }
#   }
```
### CrewAI Framework Integration
Integration with CrewAI for building multi-agent systems that use Arcade tools. Provides seamless tool management and execution within CrewAI agents and crews.
```python
# examples/code/home/crewai/use_arcade_tools.py
from crewai import Agent, Crew, Task
from crewai.llm import LLM
from crewai_arcade import ArcadeToolManager

# Initialize Arcade tool manager
manager = ArcadeToolManager(
    default_user_id="user_12345",
    api_key="arc_abc123xyz"
)

# Get specific tools for your agent
gmail_tools = manager.get_tools(tools=[
    "Gmail.ListEmails",
    "Gmail.SendEmail",
    "Gmail.SearchEmails"
])
slack_tools = manager.get_tools(tools=[
    "Slack.SendMessage",
    "Slack.ListChannels"
])

# Create agents with Arcade tools
email_agent = Agent(
    role="Email Assistant",
    backstory="You are an expert at managing emails and communication",
    goal="Help users manage their email inbox efficiently",
    tools=gmail_tools,
    allow_delegation=False,
    verbose=True,
    llm=LLM(model="gpt-4o"),
)
notification_agent = Agent(
    role="Notification Manager",
    backstory="You send important updates to Slack channels",
    goal="Keep the team informed via Slack",
    tools=slack_tools,
    allow_delegation=False,
    verbose=True,
    llm=LLM(model="gpt-4o"),
)

# Create tasks
email_task = Task(
    description="Get the 5 most recent emails and create a summary with sender, subject, and key points",
    expected_output="A bulleted list with email summaries",
    agent=email_agent,
    tools=email_agent.tools,
)
notification_task = Task(
    description="Send a summary of the emails to the #general Slack channel",
    expected_output="Confirmation message that notification was sent",
    agent=notification_agent,
    tools=notification_agent.tools,
    context=[email_task],  # Depends on email_task output
)

# Create and run crew
crew = Crew(
    agents=[email_agent, notification_agent],
    tasks=[email_task, notification_task],
    verbose=True,
    memory=True,
)
result = crew.kickoff()
print(result)

# Example output:
#   Email Summary:
#   - support@example.com: "Bug Report" - User experiencing login issues
#   - boss@company.com: "Q4 Planning" - Meeting scheduled for next week
#   - newsletter@tech.com: "Weekly Digest" - Latest tech news and updates
#
#   Notification sent to #general:
#   "Email Summary - 5 new emails processed, 1 requires urgent attention"
```
### LangChain Framework Integration
Integration with LangChain for building AI applications with Arcade tools. Supports chains, agents, and retrieval-augmented generation patterns.
```python
# Example based on documentation patterns
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI
from langchain_arcade import ArcadeToolkit

# Initialize Arcade toolkit
toolkit = ArcadeToolkit(
    user_id="user_12345",
    api_key="arc_abc123xyz"
)

# Get tools for specific capabilities
tools = toolkit.get_tools(
    tool_names=[
        "Gmail.ListEmails",
        "Gmail.SendEmail",
        "Google.SearchWeb",
        "GitHub.CreateIssue"
    ]
)

# Create LangChain agent with Arcade tools (gpt-4o is a chat model,
# so use ChatOpenAI rather than the completion-style OpenAI LLM)
llm = ChatOpenAI(model="gpt-4o", temperature=0)
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)

# Run agent
response = agent.run(
    "Search for recent Python security vulnerabilities, "
    "check if any are mentioned in my emails, "
    "and create GitHub issues for any critical ones"
)
print(response)

# Example with LCEL (LangChain Expression Language)
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnablePassthrough

# Create a chain that uses Arcade tools
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that can send emails and search the web."),
    ("user", "{input}")
])
chain = (
    {"input": RunnablePassthrough()}
    | prompt
    | llm.bind_tools(tools)
    | toolkit.execute_tool_calls  # Execute any tool calls returned by the LLM
)
result = chain.invoke("Find the latest news on AI and email a summary to team@company.com")
```
### Building SQL Database Tools
Creating custom tools that interact with SQL databases using Arcade's context and secrets management.
```python
# examples/code/home/build-tools/create-a-tool/sql-tool.py
from typing import Annotated

from arcade_mcp_server import tool, ToolContext
from sqlalchemy import create_engine, text

def _get_engine(connection_string: str):
    """Create database engine from connection string."""
    return create_engine(connection_string)

def _get_tables(engine, schema_name: str) -> list[str]:
    """Query database for table names."""
    query = text("""
        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = :schema
        ORDER BY table_name
    """)
    with engine.connect() as conn:
        result = conn.execute(query, {"schema": schema_name})
        return [row[0] for row in result]

def _execute_query(engine, query: str, params: dict | None = None) -> list[dict]:
    """Execute SQL query and return results as list of dicts."""
    with engine.connect() as conn:
        result = conn.execute(text(query), params or {})
        columns = result.keys()
        return [dict(zip(columns, row)) for row in result]

@tool(requires_secrets=["DATABASE_CONNECTION_STRING"])
def discover_tables(
    context: ToolContext,
    schema_name: Annotated[str, "The database schema to discover tables in"] = "public"
) -> list[str]:
    """Discover all the tables in the database schema."""
    connection_string = context.get_secret("DATABASE_CONNECTION_STRING")
    engine = _get_engine(connection_string)
    return _get_tables(engine, schema_name)

@tool(requires_secrets=["DATABASE_CONNECTION_STRING"])
def describe_table(
    context: ToolContext,
    table_name: Annotated[str, "The name of the table to describe"],
    schema_name: Annotated[str, "The schema containing the table"] = "public"
) -> dict:
    """Get the schema information for a specific table."""
    connection_string = context.get_secret("DATABASE_CONNECTION_STRING")
    engine = _get_engine(connection_string)
    query = """
        SELECT column_name, data_type, is_nullable, column_default
        FROM information_schema.columns
        WHERE table_schema = :schema AND table_name = :table
        ORDER BY ordinal_position
    """
    # Bind the parameters so :schema and :table are actually filled in
    columns = _execute_query(engine, query, {"schema": schema_name, "table": table_name})
    return {
        "table": table_name,
        "schema": schema_name,
        "columns": columns
    }

@tool(requires_secrets=["DATABASE_CONNECTION_STRING"])
def execute_query(
    context: ToolContext,
    query: Annotated[str, "The SQL query to execute"]
) -> list[dict]:
    """Execute a SQL query and return the results."""
    connection_string = context.get_secret("DATABASE_CONNECTION_STRING")
    engine = _get_engine(connection_string)
    return _execute_query(engine, query)

@tool(requires_secrets=["DATABASE_CONNECTION_STRING"])
def analyze_table(
    context: ToolContext,
    table_name: Annotated[str, "The table to analyze"],
    schema_name: Annotated[str, "The schema containing the table"] = "public"
) -> dict:
    """Get statistical analysis of a table."""
    connection_string = context.get_secret("DATABASE_CONNECTION_STRING")
    engine = _get_engine(connection_string)
    # Get row count
    count_query = text(f'SELECT COUNT(*) FROM "{schema_name}"."{table_name}"')
    with engine.connect() as conn:
        row_count = conn.execute(count_query).scalar()
    # Get sample data
    sample_query = f'SELECT * FROM "{schema_name}"."{table_name}" LIMIT 5'
    sample_data = _execute_query(engine, sample_query)
    return {
        "table": table_name,
        "row_count": row_count,
        "sample_rows": sample_data
    }

# Usage in an MCP server:
#   from arcade_mcp_server import MCPApp
#   app = MCPApp(name="sql_tools", version="1.0.0")
#   app.tool(discover_tables)
#   app.tool(describe_table)
#   app.tool(execute_query)
#   app.tool(analyze_table)
#
#   if __name__ == "__main__":
#       app.run(transport="stdio")
```
## Summary and Integration Patterns
The Arcade documentation project demonstrates sophisticated patterns for building AI-friendly developer documentation. The automated glossary linking system ensures consistent terminology across hundreds of pages without manual tagging, while the LLMs.txt generation provides LLMs with an optimized navigation structure. The MCP server documentation generator showcases how AI can be used to enhance documentation quality by automatically generating realistic code examples in multiple programming languages, reducing the manual effort required to maintain comprehensive examples. These features work together to create documentation that serves both human developers reading in browsers and AI agents accessing content programmatically.
Integration patterns in the Arcade ecosystem emphasize flexibility and framework compatibility. The core SDK provides direct tool execution with OAuth handling, while framework integrations (CrewAI, LangChain) offer idiomatic ways to incorporate Arcade tools into existing AI application architectures. Custom MCP server development is streamlined through the arcade_mcp_server library, which handles authentication flows, secrets management, and context injection automatically. The documentation itself serves as both a guide and reference implementation, showing how to build production-ready AI agent tools with proper error handling, type safety, and security best practices. Whether building simple utility tools or complex multi-tool integrations, the patterns demonstrated enable developers to quickly create reliable AI agent capabilities.