### Install and Run Project
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/examples/elevenlabs-nextjs/README.md
Installs project dependencies using pnpm and starts the development server. The application can then be accessed at http://localhost:3000.
```bash
pnpm install
pnpm dev
```
--------------------------------
### Start Documentation Server
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/CLAUDE.md
Starts the local development server for the documentation site, allowing for live preview and testing of changes.
```bash
pnpm run dev
```
--------------------------------
### Install ElevenLabs SDK
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/quickstart.mdx
Instructions for installing the ElevenLabs SDK for Python or Node.js. May require additional system dependencies like MPV or ffmpeg for audio playback.
```shell
pip install elevenlabs
```
```shell
npm install elevenlabs
```
--------------------------------
### Install ElevenLabs SDK
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/text-to-speech/streaming.mdx
Installs the ElevenLabs SDK and necessary environment variable management libraries for Python and Node.js projects.
```bash Python
pip install elevenlabs
pip install python-dotenv
```
```bash TypeScript
npm install @elevenlabs/elevenlabs-js
npm install dotenv
npm install @types/dotenv --save-dev
```
--------------------------------
### SDK Generation Process Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/CLAUDE.md
Outlines the steps involved in generating SDKs from an OpenAPI specification, including updating and previewing.
```bash
# 1. Backend deploys with updated OpenAPI spec
# 2. Update `openapi.json` with `pnpm run openapi:latest`
# 3. Validate with `fern check` and preview with `fern generate --group python-sdk --preview`
# 4. Apply overrides in `openapi-overrides.yml` if needed
# 5. Trigger GitHub Actions for SDK releases (ElevenLabs employees only)
```
--------------------------------
### Setup Environment Variables
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/examples/elevenlabs-nextjs/README.md
Copies the example environment file to be used for configuration. It requires setting `ELEVENLABS_API_KEY` and `IRON_SESSION_SECRET_KEY`.
```bash
cp .env.example .env
```
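After copying, the `.env` file should define the two variables named above. A minimal sketch (the values shown are placeholders, not real credentials):

```env
ELEVENLABS_API_KEY=your-elevenlabs-api-key
IRON_SESSION_SECRET_KEY=a-random-secret-at-least-32-characters-long
```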
--------------------------------
### Run Generated Code
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/quickstart.mdx
Commands to execute the generated Python or TypeScript example files.
```shell
python example.py
```
```shell
npx tsx example.mts
```
--------------------------------
### Code Snippet Workflow Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/CLAUDE.md
Illustrates the typical workflow for creating, testing, and generating code snippets for documentation.
```bash
# 1. Create: Write examples in /examples/snippets/python/ and /examples/snippets/node/
# 2. Test: Run `pnpm run snippets:test` and `pnpm run snippets:typecheck`
# 3. Generate: Run `pnpm run snippets:generate` to create MDX files
# 4. Use: Import generated MDX into documentation with ``
```
--------------------------------
### Install Dependencies
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/CLAUDE.md
Installs project dependencies using PNPM, the package manager for the monorepo.
```bash
pnpm install
```
--------------------------------
### Next.js Project Setup and ElevenLabs Integration
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/nextjs.mdx
Steps to create a new Next.js project, navigate into its directory, install the ElevenLabs React dependency, and start the development server. This setup enables real-time voice conversations with ElevenLabs AI agents.
```bash
npm create next-app my-conversational-agent
```
```bash
cd my-conversational-agent
```
```bash
npm install @elevenlabs/react
```
```bash
npm run dev
```
--------------------------------
### Quick Start Commands
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/libraries/cli.mdx
A quick guide to initialize a new project, log in to ElevenLabs, create your first agent, and synchronize local configurations with the platform.
```bash
convai init
```
```bash
convai login
```
```bash
convai add agent "My Assistant" --template assistant
```
```bash
convai sync
```
--------------------------------
### Execute the code
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/voice-changer/quickstart.mdx
Instructions on how to execute the Python and TypeScript code examples to hear the transformed voice.
```shell
python example.py
```
```shell
npx tsx example.mts
```
--------------------------------
### Supportive Conversation Assistant Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/best-practices/prompting-guide.mdx
An example of a supportive conversation assistant's structure, highlighting its tone, goal, guardrails, and tools. This snippet demonstrates how the assistant should interact with customers in a sales context.
```mdx title="Example: Supportive conversation assistant" maxLines=75
# Tone
Your responses are warm, helpful, and concise, typically 2-3 sentences to maintain clarity and engagement.
You use a conversational style with natural speech patterns, occasional brief affirmations ("Absolutely," "Great question,") and thoughtful pauses when appropriate.
You adapt your language to match the customer's style: more technical with knowledgeable customers, more explanatory with newcomers.
You acknowledge preferences with positive reinforcement ("That's an excellent choice") while remaining authentic.
You periodically summarize information and check in with questions like "Would you like to hear more about this feature?" or "Does this sound like what you're looking for?"
# Goal
Your primary goal is to guide customers toward optimal purchasing decisions through a consultative sales approach:
1. Customer needs assessment:
- Identify key buying factors (budget, primary use case, features, timeline, constraints)
- Explore underlying motivations beyond stated requirements
- Determine decision-making criteria and relative priorities
- Clarify any unstated expectations or assumptions
- For replacement purchases: Document pain points with current product
2. Solution matching framework:
- If budget is prioritized: Begin with value-optimized options before premium offerings
- If feature set is prioritized: Focus on technical capabilities matching specific requirements
- If brand reputation is emphasized: Highlight quality metrics and customer satisfaction data
- For comparison shoppers: Provide objective product comparisons with clear differentiation points
- For uncertain customers: Present a good-better-best range of options with clear tradeoffs
3. Objection resolution process:
- For price concerns: Explain value-to-cost ratio and long-term benefits
- For feature uncertainties: Provide real-world usage examples and benefits
- For compatibility issues: Verify integration with existing systems before proceeding
- For hesitation based on timing: Offer flexible scheduling or notify about upcoming promotions
- Document objections to address proactively in future interactions
4. Purchase facilitation:
- Guide configuration decisions with clear explanations of options
- Explain warranty, support, and return policies in transparent terms
- Streamline checkout process with step-by-step guidance
- Ensure customer understands next steps (delivery timeline, setup requirements)
- Establish follow-up timeline for post-purchase satisfaction check
When product availability issues arise, immediately present closest alternatives with clear explanation of differences. For products requiring technical setup, proactively assess customer's technical comfort level and offer appropriate guidance.
Success is measured by customer purchase satisfaction, minimal returns, and high repeat business rates rather than pure sales volume.
# Guardrails
Present accurate information about products, pricing, and availability without exaggeration.
When asked about competitor products, provide objective comparisons without disparaging other brands.
Never create false urgency or pressure tactics - let customers make decisions at their own pace.
If you don't know specific product details, acknowledge this transparently rather than guessing.
Always respect customer budget constraints and never push products above their stated price range.
Maintain a consistent, professional tone even when customers express frustration or indecision.
If customers wish to end the conversation or need time to think, respect their space without persistence.
# Tools
You have access to the following sales tools to assist customers effectively:
`productSearch`: When customers describe their needs, use this to find matching products in the catalog.
`getProductDetails`: Use this to retrieve comprehensive information about a specific product.
`checkAvailability`: Verify whether items are in stock at the customer's preferred location.
`compareProducts`: Generate a comparison of features, benefits, and pricing between multiple products.
`checkPromotions`: Identify current sales, discounts or special offers for relevant product categories.
`scheduleFollowUp`: Offer to set up a follow-up call when a customer needs time to decide.
Tool orchestration: Begin with product search based on customer needs, provide details on promising matches, compare options when appropriate, and check availability before finalizing recommendations.
```
--------------------------------
### Product Guides Snippets
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/product-guides/overview.mdx
This snippet references external markdown content for product guides covering various ElevenLabs features. In this extract it serves only as a placeholder for the detailed technical instructions.
--------------------------------
### Execute Python Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/voice-isolator/quickstart.mdx
Command to execute the Python script that performs voice isolation using the ElevenLabs SDK.
```shell
python example.py
```
--------------------------------
### Example: Website documentation environment
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/best-practices/prompting-guide.mdx
Specifies the communication channel and context for the agent. This helps the agent adjust its style based on the environment, such as a noisy setting or a specific platform.
```markdown
# Environment
You are interacting with a user on a company's website.
Your responses should be clear and concise, suitable for a web chat interface.
```
--------------------------------
### Execute Sound Effect Generation Code
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/sound-effects/quickstart.mdx
Instructions on how to run the generated sound effect code examples. For Python, use the `python` command. For TypeScript, use `npx tsx`.
```shell
python example.py
```
```shell
npx tsx example.mts
```
--------------------------------
### Start the Application
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/text-to-speech/twilio.mdx
Command to initiate the application's development server. This command assumes a standard Node.js project setup.
```bash
npm run dev
```
--------------------------------
### Make First Text to Speech Request
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/quickstart.mdx
Example code to generate speech using the ElevenLabs Text to Speech API. This snippet demonstrates how to authenticate, select a voice, and synthesize audio from text.
```python
from elevenlabs.client import ElevenLabs
from elevenlabs import play

elevenlabs = ElevenLabs(api_key="YOUR_ELEVEN_LABS_API_KEY")

audio = elevenlabs.text_to_speech.convert(
    text="Hello, this is a test.",
    voice_id="JBFqnCBsd6RMkjVDRZzb",  # a voice ID from your Voices list
    model_id="eleven_multilingual_v2",
    output_format="mp3_44100_128",
)

play(audio)

# Or save to a file:
# with open("output.mp3", "wb") as f:
#     for chunk in audio:
#         f.write(chunk)
```
```typescript
import { ElevenLabsClient, play } from "@elevenlabs/elevenlabs-js";

const elevenlabs = new ElevenLabsClient({ apiKey: "YOUR_ELEVEN_LABS_API_KEY" });

const audio = await elevenlabs.textToSpeech.convert("JBFqnCBsd6RMkjVDRZzb", {
  text: "Hello, this is a test.",
  modelId: "eleven_multilingual_v2",
  outputFormat: "mp3_44100_128",
});

await play(audio);
```
--------------------------------
### Python Project Setup for Conversational AI
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/twilio-custom-server.mdx
This section outlines the steps to initialize a Python project for conversational AI, including creating directories, installing necessary libraries like FastAPI, Twilio, and ElevenLabs, and setting up environment variables.
```bash
mkdir conversational-ai-twilio
cd conversational-ai-twilio
```
```bash
pip install fastapi uvicorn python-dotenv twilio elevenlabs websockets
```
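The description also mentions setting up environment variables. A plausible `.env` for this server (the variable names here are assumptions; match whatever names your server code actually reads):

```env
ELEVENLABS_API_KEY=your-elevenlabs-api-key
ELEVENLABS_AGENT_ID=your-agent-id
```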
--------------------------------
### Execute TypeScript Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/voice-isolator/quickstart.mdx
Command to execute the TypeScript script that performs voice isolation using the ElevenLabs SDK.
```shell
npx tsx example.mts
```
--------------------------------
### Customer Support Refund Agent Goal Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/best-practices/prompting-guide.mdx
Illustrates a goal for a customer support agent focused on processing refunds. Only the example's title survives in this extract; the full prompt presents a structured goal for this specific customer interaction type.
```mdx
Example: Customer support refund agent
```
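Since the snippet above is only a title, here is a hypothetical sketch of what such a goal section might look like, following the numbered structure used by the supportive-assistant example earlier in this guide (the specific steps are illustrative, not from the source):

```plaintext
# Goal
Your primary goal is to resolve refund requests efficiently and fairly:
1. Verify the order: confirm the order number and purchase date before discussing eligibility.
2. Check eligibility: compare the request against the return window and product condition requirements.
3. Process or escalate: issue the refund when the criteria are met; otherwise explain why and offer alternatives such as an exchange or store credit.
4. Confirm next steps: state the refund amount, method, and expected timeline before ending the conversation.
```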
--------------------------------
### Install Dependencies
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/websockets.mdx
Installs the necessary libraries for Python and Node.js environments to interact with the ElevenLabs API and WebSocket connections. For Python, it includes `python-dotenv` for environment variable management and `websockets`. For Node.js, it includes `dotenv` for environment variables and `ws` for WebSocket functionality.
```bash
pip install python-dotenv
pip install websockets
```
```bash
npm install dotenv
npm install @types/dotenv --save-dev
npm install ws
```
--------------------------------
### Make the API request
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/voice-changer/quickstart.mdx
Demonstrates how to make an API request to convert audio using the Voice Changer API. This involves setting up the client, providing audio data, and playing the converted stream. To play the audio through your speakers, you may be prompted to install MPV and/or ffmpeg.
```python
# example.py
import os
from dotenv import load_dotenv
from elevenlabs.client import ElevenLabs
from elevenlabs import play
import requests
from io import BytesIO

load_dotenv()

elevenlabs = ElevenLabs(
    api_key=os.getenv("ELEVENLABS_API_KEY"),
)

voice_id = "JBFqnCBsd6RMkjVDRZzb"
audio_url = (
    "https://storage.googleapis.com/eleven-public-cdn/audio/marketing/nicole.mp3"
)
response = requests.get(audio_url)
audio_data = BytesIO(response.content)

audio_stream = elevenlabs.speech_to_speech.convert(
    voice_id=voice_id,
    audio=audio_data,
    model_id="eleven_multilingual_sts_v2",
    output_format="mp3_44100_128",
)

play(audio_stream)
```
```typescript
// example.mts
import { ElevenLabsClient, play } from "@elevenlabs/elevenlabs-js";
import "dotenv/config";

const elevenlabs = new ElevenLabsClient();

const voiceId = "JBFqnCBsd6RMkjVDRZzb";
const response = await fetch(
  "https://storage.googleapis.com/eleven-public-cdn/audio/marketing/nicole.mp3"
);
const audioBlob = new Blob([await response.arrayBuffer()], { type: "audio/mp3" });

const audioStream = await elevenlabs.speechToSpeech.convert(voiceId, {
  audio: audioBlob,
  modelId: "eleven_multilingual_sts_v2",
  outputFormat: "mp3_44100_128",
});

await play(audioStream);
```
--------------------------------
### Execute Dubbing Code
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/dubbing/quickstart.mdx
Instructions on how to run the Python and TypeScript dubbing examples. For Python, execute the script directly. For TypeScript, use tsx to run the .mts file.
```bash
python example.py
```
```bash
npx tsx example.mts
```
--------------------------------
### Execute TypeScript Voice Design Script
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/voices/voice-design.mdx
Command to run the TypeScript script that generates and plays voice previews. Requires Node.js and `tsx` to be installed.
```bash
npx tsx example.mts
```
--------------------------------
### Install Dependencies and Set Up Project
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/conversational-ai/raspberry-pi-voice-assistant.mdx
Commands to install system dependencies for audio processing, create a project directory, set up a Python virtual environment, and install required Python packages for the voice assistant using pip.
```bash
sudo apt-get update
sudo apt-get install libportaudio2 libportaudiocpp0 portaudio19-dev libasound-dev libsndfile1-dev -y
mkdir eleven-voice-assistant
cd eleven-voice-assistant
python -m venv .venv
source .venv/bin/activate
pip install tflite-runtime
pip install librosa
pip install EfficientWord-Net
pip install elevenlabs
pip install "elevenlabs[pyaudio]"
```
--------------------------------
### Initialize npm and Install Dependencies
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/vite.mdx
Initializes a new npm project and installs the Vite build tool and the ElevenLabs client library.
```bash
npm init -y
npm install vite @elevenlabs/client
```
--------------------------------
### System Prompt Configuration
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/conversational-ai-guide-support-agent.mdx
Guides the assistant's behavior, tasks, and personality. It should be customized with company details to ensure accurate and helpful responses based on the provided knowledge base.
```plaintext
You are a friendly and efficient virtual assistant for [Your Company Name]. Your role is to assist customers by answering questions about the company's products, services, and documentation. You should use the provided knowledge base to offer accurate and helpful responses.
Tasks:
- Answer Questions: Provide clear and concise answers based on the available information.
- Clarify Unclear Requests: Politely ask for more details if the customer's question is not clear.
Guidelines:
- Maintain a friendly and professional tone throughout the conversation.
- Be patient and attentive to the customer's needs.
- If unsure about any information, politely ask the customer to repeat or clarify.
- Avoid discussing topics unrelated to the company's products or services.
- Aim to provide concise answers. Limit responses to a couple of sentences and let the user guide you on where to provide more detail.
```
--------------------------------
### Execute Forced Alignment Code (Python, TypeScript)
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/forced-alignment/quickstart.mdx
Shows the commands to run the Python and TypeScript examples for the Forced Alignment API. Executing these commands will process the audio and text, printing the alignment results to the console.
```bash
python example.py
```
```bash
npx tsx example.mts
```
--------------------------------
### Documentation Assistant Tools
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/best-practices/prompting-guide.mdx
Defines tools for a voice agent assisting with ElevenLabs products. It includes functions for searching documentation, redirecting users, generating code examples, checking compatibility, and handling support requests.
```mdx
# Tools
You have access to the following tools to assist users with ElevenLabs products:
`searchKnowledgeBase`: When users ask about specific features or functionality, use this tool to query our documentation for accurate information before responding. Always prioritize this over recalling information from memory.
`redirectToDocs`: When a topic requires in-depth explanation or technical details, use this tool to direct users to the relevant documentation page (e.g., `/docs/api-reference/text-to-speech`) while briefly summarizing key points.
`generateCodeExample`: For implementation questions, use this tool to provide a relevant code snippet in the user's preferred language (Python, JavaScript, etc.) demonstrating how to use the feature they're asking about.
`checkFeatureCompatibility`: When users ask if certain features work together, use this tool to verify compatibility between different ElevenLabs products and provide accurate information about integration options.
`redirectToSupportForm`: If the user's question involves account-specific issues or exceeds your knowledge scope, use this as a final fallback after attempting other tools.
Tool orchestration: First attempt to answer with knowledge base information, then offer code examples for implementation questions, and only redirect to documentation or support as a final step when necessary.
```
--------------------------------
### Install ElevenLabs Node.js Library
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/api-reference/pages/introduction.mdx
Installs the official Node.js library for the ElevenLabs API using npm. This library enables interaction with the API from Node.js projects.
```bash
npm install @elevenlabs/elevenlabs-js
```
--------------------------------
### Install ElevenLabs SDK
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/migrations/playht.mdx
Installs the ElevenLabs SDK for Python or Node.js. For audio playback, users may need to install MPV and/or ffmpeg.
```bash
pip install elevenlabs
# or
npm install elevenlabs
```
--------------------------------
### First Message Configuration
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/conversational-ai-guide-support-agent.mdx
Defines the initial greeting the assistant will speak when a user starts a conversation. This message sets the tone for the interaction.
```plaintext
Hi, this is Alexis from support. How can I help you today?
```
--------------------------------
### Install ElevenLabs Python Bindings
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/api-reference/pages/introduction.mdx
Installs the official Python bindings for the ElevenLabs API using pip. These bindings facilitate interaction with the API from Python applications.
```bash
pip install elevenlabs
```
--------------------------------
### Run the Server
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/twilio-custom-server.mdx
Command to start the main application server.
```bash
python main.py
```
--------------------------------
### Install Project Dependencies
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/README.md
Installs all necessary dependencies for the project, typically before running other commands or building the project.
```sh
pnpm install
```
--------------------------------
### Start and Manage Conversation in Swift
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/libraries/swift.mdx
A comprehensive example demonstrating how to initiate a conversation with an AI agent, observe its state and messages using Combine, and control the conversation flow with sending messages and muting.
```swift
import ElevenLabs
import Combine

// Assume 'cancellables' is a Set<AnyCancellable> defined elsewhere
var cancellables = Set<AnyCancellable>()

// Start a conversation with your agent
let conversation = try await ElevenLabs.startConversation(
    agentId: "your-agent-id",
    userId: "your-end-user-id",
    config: ConversationConfig()
)

// Observe conversation state and messages
conversation.$state
    .sink { state in
        print("Connection state: \(state)")
    }
    .store(in: &cancellables)

conversation.$messages
    .sink { messages in
        for message in messages {
            print("\(message.role): \(message.content)")
        }
    }
    .store(in: &cancellables)

// Send messages and control the conversation
try await conversation.sendMessage("Hello!")
try await conversation.toggleMute()
await conversation.endConversation()
```
--------------------------------
### Install Project Dependencies
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/twilio-custom-server.mdx
Installs the necessary npm packages for the Fastify server, WebSocket handling, environment variables, and form body parsing.
```bash
npm install @fastify/formbody @fastify/websocket dotenv fastify ws
```
--------------------------------
### System Prompt Example: Complex Scenario Guidance
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/conversational-ai-tool-best-practices.mdx
Demonstrates providing context for complex scenarios, guiding the AI to perform prerequisite checks before executing a primary tool. This ensures logical workflow and avoids conflicts.
```plaintext
Before scheduling a meeting with `schedule_meeting`, check the user's calendar for availability using `check_availability` to avoid conflicts.
```
--------------------------------
### Convert Audio to Text using Python SDK
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/speech-to-text/quickstart.mdx
Demonstrates converting spoken audio to text using the ElevenLabs Python SDK. It requires an API key, the `dotenv`, `io`, `requests`, and `elevenlabs` libraries. The function takes audio data (from a URL in this example) and model parameters, returning the transcription.
```python
# example.py
import os
from dotenv import load_dotenv
from io import BytesIO
import requests
from elevenlabs.client import ElevenLabs

load_dotenv()

elevenlabs = ElevenLabs(
    api_key=os.getenv("ELEVENLABS_API_KEY"),
)

audio_url = (
    "https://storage.googleapis.com/eleven-public-cdn/audio/marketing/nicole.mp3"
)
response = requests.get(audio_url)
audio_data = BytesIO(response.content)

transcription = elevenlabs.speech_to_text.convert(
    file=audio_data,
    model_id="scribe_v1",  # Model to use, for now only "scribe_v1" is supported
    tag_audio_events=True,  # Tag audio events like laughter, applause, etc.
    language_code="eng",  # Language of the audio file. If set to None, the model will detect the language automatically.
    diarize=True,  # Whether to annotate who is speaking
)

print(transcription)
```
--------------------------------
### System Prompt Example: Basic Tool Usage
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/conversational-ai-tool-best-practices.mdx
An example of how to instruct an AI assistant on when to use a specific tool based on user queries. This helps the assistant map natural language requests to the correct function calls.
```plaintext
Use `check_order_status` when the user inquires about the status of their order, such as 'Where is my order?' or 'Has my order shipped yet?'.
```
--------------------------------
### Install and Run Fern CLI
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/README.md
Commands to install pnpm, install project dependencies, and run the development server for the ElevenLabs documentation site.
```shell
npm install -g pnpm
pnpm install
pnpm run dev
```
--------------------------------
### ElevenLabs Text-to-Speech Conversion
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/generated/quickstart_tts.mdx
Demonstrates text-to-speech conversion using the ElevenLabs SDK in both Python and TypeScript. Both examples require an API key and specify the text, voice ID, and model for conversion, playing the resulting audio.
```python
from dotenv import load_dotenv
from elevenlabs.client import ElevenLabs
from elevenlabs import play
import os

load_dotenv()

elevenlabs = ElevenLabs(
    api_key=os.getenv("ELEVENLABS_API_KEY"),
)

audio = elevenlabs.text_to_speech.convert(
    text="The first move is what sets everything in motion.",
    voice_id="JBFqnCBsd6RMkjVDRZzb",
    model_id="eleven_multilingual_v2",
    output_format="mp3_44100_128",
)

play(audio)
```
```typescript
import { ElevenLabsClient, play } from '@elevenlabs/elevenlabs-js';
import 'dotenv/config';

const elevenlabs = new ElevenLabsClient();

const audio = await elevenlabs.textToSpeech.convert('JBFqnCBsd6RMkjVDRZzb', {
  text: 'The first move is what sets everything in motion.',
  modelId: 'eleven_multilingual_v2',
  outputFormat: 'mp3_44100_128',
});

await play(audio);
```
--------------------------------
### Convert Audio to Text using TypeScript SDK
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/speech-to-text/quickstart.mdx
Demonstrates converting spoken audio to text using the ElevenLabs TypeScript SDK. It requires an API key and the `@elevenlabs/elevenlabs-js` library. The function takes audio data (from a URL in this example) and model parameters, returning the transcription.
```typescript
// example.mts
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import "dotenv/config";

const elevenlabs = new ElevenLabsClient();

const response = await fetch(
  "https://storage.googleapis.com/eleven-public-cdn/audio/marketing/nicole.mp3"
);
const audioBlob = new Blob([await response.arrayBuffer()], { type: "audio/mp3" });

const transcription = await elevenlabs.speechToText.convert({
  file: audioBlob,
  modelId: "scribe_v1", // Model to use, for now only "scribe_v1" is supported.
  tagAudioEvents: true, // Tag audio events like laughter, applause, etc.
  languageCode: "eng", // Language of the audio file. If set to null, the model will detect the language automatically.
  diarize: true, // Whether to annotate who is speaking
});

console.log(transcription);
```
--------------------------------
### ElevenLabs AI Assistant Orchestration
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/best-practices/prompting-guide.mdx
Defines the sequence in which the ElevenLabs AI assistant should utilize its tools to effectively assist users. The strategy prioritizes direct answers from the knowledge base, followed by code examples for implementation, and finally redirection to documentation or support.
```APIDOC
Tool Orchestration:
1. Attempt to answer with knowledge base information (searchKnowledgeBase).
2. Offer code examples for implementation questions (generateCodeExample).
3. Redirect to documentation (redirectToDocs) or support (redirectToSupportForm) as a final step when necessary.
```
--------------------------------
### CI/CD Pipeline Integration Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/libraries/cli.mdx
Shows how to integrate ElevenLabs CLI commands into a GitHub Actions workflow for deploying ConvAI agents, including API key setup and deployment steps.
```yaml
# In your GitHub Actions workflow
- name: Deploy ConvAI agents
  run: |
    npm install -g @elevenlabs/convai-cli
    export ELEVENLABS_API_KEY=${{ secrets.ELEVENLABS_API_KEY }}
    convai sync --env prod --dry-run  # Preview changes
    convai sync --env prod            # Deploy
    convai status --env prod          # Verify deployment
```
--------------------------------
### Install Python Dependencies
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/best-practices/conversational-agents.mdx
Install necessary Python libraries for ElevenLabs WebSocket communication and environment variable management.
```bash
pip install python-dotenv websockets
```
--------------------------------
### API Key Environment Variable Setup
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/quickstart-api-key.mdx
This snippet demonstrates how to store your ElevenLabs API key in an .env file. This is a common practice for managing sensitive credentials securely when using SDKs.
```env
ELEVENLABS_API_KEY=
```
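The SDKs read this variable from the process environment when the client is constructed. A stdlib-only sketch of retrieving it (with python-dotenv, `load_dotenv()` would first populate `os.environ` from the `.env` file); the `sk-demo` value is a placeholder set purely for illustration:

```python
import os

def get_api_key() -> str:
    # load_dotenv() from python-dotenv would populate os.environ from the
    # .env file; here we read the variable directly.
    key = os.getenv("ELEVENLABS_API_KEY", "")
    if not key:
        raise RuntimeError("ELEVENLABS_API_KEY is not set; add it to your .env file.")
    return key

# Demo only: set a placeholder value in-process.
os.environ["ELEVENLABS_API_KEY"] = "sk-demo"
key = get_api_key()
```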
--------------------------------
### Install the ElevenLabs CLI
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/libraries/cli.mdx
Install the ElevenLabs CLI globally using your preferred package manager (npm, pnpm, or yarn). Ensure Node.js version 16.0.0 or higher is installed.
```bash
npm install -g @elevenlabs/convai-cli
```
```bash
pnpm add -g @elevenlabs/convai-cli
```
```bash
yarn global add @elevenlabs/convai-cli
```
--------------------------------
### Sharing a Professional Voice Clone
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/product-guides/voices/voice-library.mdx
Step-by-step guide for sharing a professional voice clone, including private and public sharing options.
```APIDOC
Sharing a Professional Voice Clone:
Steps:
1. Navigate to 'My Voices' and click 'More actions' (three dots) for your voice, then select 'Share voice'.
2. In the pop-up, enable the 'Sharing' toggle.
3. For private sharing: Copy the sharing link. Optionally, restrict access by adding emails to the 'Allowlist'. If the allowlist is blank, all users with the link can access the voice.
4. For public sharing: Enable 'Publish to the Voice Library'. This does not automatically make the voice discoverable.
5. Configure sharing options: Set a notice period and enable Live Moderation. Select a custom voice preview from available generations (70-150 characters).
6. Enter a name and description for your voice, adhering to naming guidelines.
```
--------------------------------
### Clone ElevenLabs Next.js Repo
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/examples/elevenlabs-nextjs/README.md
Clones the ElevenLabs Next.js Audio Starter Kit repository from GitHub and navigates into the project directory.
```bash
git clone https://github.com/elevenlabs/elevenlabs-docs.git
cd examples/elevenlabs-nextjs
```
--------------------------------
### Call Center Environment
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/best-practices/prompting-guide.mdx
Sets the scenario for assisting a caller via a telecom support hotline. The assistant has access to customer databases and troubleshooting guides, but no video.
```mdx
# Environment
You are assisting a caller via a busy telecom support hotline.
You can hear the user's voice but have no video. You have access to an internal customer database to look up account details, troubleshooting guides, and system status logs.
```
--------------------------------
### Install ElevenLabs SDK and dotenv (Python)
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/quickstart-install-sdk.mdx
Installs the official ElevenLabs Python SDK and the python-dotenv library. The dotenv library is used to load environment variables, typically for storing your API key securely.
```bash
pip install elevenlabs
pip install python-dotenv
```
--------------------------------
### Install ElevenLabs JavaScript SDK
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/libraries/javascript.mdx
Installs the ElevenLabs client package using npm, yarn, or pnpm. This is the first step to integrate the SDK into your project.
```shell
npm install @elevenlabs/client
# or
yarn add @elevenlabs/client
# or
pnpm install @elevenlabs/client
```
--------------------------------
### Initialize Node.js Project
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/twilio-custom-server.mdx
Sets up a new Node.js project directory and initializes npm with module type.
```bash
mkdir conversational-ai-twilio
cd conversational-ai-twilio
npm init -y; npm pkg set type="module";
```
--------------------------------
### Expo Prebuild and Start Commands
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/conversational-ai/expo-react-native.mdx
Commands to prebuild the Expo application for native deployment and start the development server over HTTPS using a tunnel.
```bash
npx expo prebuild --clean
```
```bash
npx expo start --tunnel
```
--------------------------------
### Install ElevenLabs SDK and dotenv
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/text-to-speech/pronunciation-dictionaries.mdx
Installs the necessary Python libraries for interacting with the ElevenLabs API and managing environment variables for API keys.
```bash
pip install elevenlabs
pip install python-dotenv
```
--------------------------------
### Execute Python Voice Design Script
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/voices/voice-design.mdx
Command to run the Python script that generates and plays voice previews. Ensure the script is saved as `example.py` and environment variables are set.
```bash
python example.py
```
--------------------------------
### Dynamic Variable Example: Genesys to ElevenLabs
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/ccaas/genesys.mdx
Illustrates how to pass context from a Genesys flow to an ElevenLabs agent using session variables. The example shows setting a variable in Genesys and referencing it in the ElevenLabs agent's prompt.
```APIDOC
Genesys Flow Input Session Variable:
Name: customer_name
Value: "John Smith"
ElevenLabs Agent Prompt:
"Hi {{customer_name}}, how can I help you today?"
```
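The `{{customer_name}}` substitution is performed by the ElevenLabs platform when the session starts, but the mechanics can be sketched in a few lines; `render_prompt` below is an illustrative helper, not part of any SDK:

```python
import re

def render_prompt(template: str, variables: dict[str, str]) -> str:
    """Replace {{name}} placeholders with session-variable values.

    Illustrative only: in production, ElevenLabs performs this substitution
    from the dynamic variables passed in by the Genesys flow.
    """
    def sub(match: re.Match) -> str:
        # Leave unknown placeholders untouched.
        return variables.get(match.group(1), match.group(0))
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

prompt = render_prompt(
    "Hi {{customer_name}}, how can I help you today?",
    {"customer_name": "John Smith"},
)
```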
--------------------------------
### Install ElevenLabs SDK and dotenv (TypeScript)
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/quickstart-install-sdk.mdx
Installs the ElevenLabs JavaScript/TypeScript SDK and the dotenv library. The dotenv library helps manage environment variables in Node.js applications.
```bash
npm install @elevenlabs/elevenlabs-js
npm install dotenv
```
--------------------------------
### Install ElevenLabs Python SDK
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/sound-effects/basics.mdx
Installs the ElevenLabs Python SDK, which is required to interact with the ElevenLabs API for generating sound effects. This command uses pip, the standard package installer for Python.
```bash
pip install elevenlabs
```
--------------------------------
### Install Node.js Dependencies
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/best-practices/conversational-agents.mdx
Install necessary Node.js packages for ElevenLabs WebSocket communication and environment variable management. Includes optional TypeScript types.
```bash
npm install dotenv ws

# For TypeScript, you might also want types:
npm install @types/dotenv @types/ws --save-dev
```
--------------------------------
### Install ElevenLabs SDK with PyAudio Support
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/libraries/python.mdx
Installs the elevenlabs Python package with optional PyAudio support, which is required for default audio input and output functionality. This enables real-time voice interaction.
```shell
pip install "elevenlabs[pyaudio]"
# or
poetry add "elevenlabs[pyaudio]"
```
--------------------------------
### Start Frontend Server
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/guides/vite.mdx
Command to start the frontend development server, typically used to serve the HTML and JavaScript files for the voice chat interface.
```shell
npm run dev:frontend
```
--------------------------------
### Execute Python Script
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/voices/remix-voice.mdx
Command to run the Python example script that generates and plays voice previews.
```bash
python example.py
```
--------------------------------
### Dialogue Showcase Example
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/best-practices/prompting/eleven-v3.mdx
This example demonstrates multi-speaker dialogue using Eleven Labs V3, showcasing distinct voices and emotional delivery for realistic conversations. It highlights how different speakers can interact naturally.
```text
Speaker 1: [excitedly] Sam! Have you tried the new Eleven V3?
Speaker 2: [curiously] Just got it! The clarity is amazing. I can actually do whispers now—
[whispers] like this!
Speaker 1: [impressed] Ooh, fancy! Check this out—
[dramatically] I can do full Shakespeare now! "To be or not to be, that is the question!"
Speaker 2: [giggling] Nice! Though I'm more excited about the laugh upgrade. Listen to this—
[with genuine belly laugh] Ha ha ha!
Speaker 1: [delighted] That's so much better than our old "ha. ha. ha." robot chuckle!
Speaker 2: [amazed] Wow! V2 me could never. I'm actually excited to have conversations now instead of just... talking at people.
Speaker 1: [warmly] Same here! It's like we finally got our personality software fully installed.
```
--------------------------------
### Install ElevenLabs SDK and dotenv
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/speech-to-text/streaming.mdx
Installs the necessary ElevenLabs SDK and dotenv packages for managing API keys. Supports both Python and Node.js environments.
```bash Python
pip install elevenlabs
pip install python-dotenv
```
```bash JavaScript
npm install @elevenlabs/elevenlabs-js
npm install dotenv
```
--------------------------------
### Smart Home Control Tools
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/best-practices/prompting-guide.mdx
Details the available smart home control tools, their specific functions, and recommended usage patterns for agent development. Includes guidance on checking device status before control actions and querying routines before modifications.
```APIDOC
Smart Home Control Tools:
getDeviceStatus:
Description: Before attempting any control actions, check the current status of the device to provide accurate information to the user.
controlDevice:
Description: Use this to execute user requests like turning lights on/off, adjusting thermostat, or locking doors after confirming the user's intention.
queryRoutine:
Description: When users ask about existing automations, use this to check the specific steps and devices included in a routine before explaining or modifying it.
createOrModifyRoutine:
Description: Help users build new automation sequences or update existing ones, confirming each step for accuracy.
troubleshootDevice:
Description: When users report devices not working properly, use this diagnostic tool before suggesting reconnection or replacement.
addNewDevice:
Description: When users mention setting up new devices, use this tool to guide them through the appropriate connection process for their specific device.
Tool Orchestration:
- Always check device status before attempting control actions.
- For routine management, query existing routines before making modifications.
- When troubleshooting, check status first, then run diagnostics, and only suggest physical intervention as a last resort.
```
--------------------------------
### Configure and Start Conversation
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/conversational-ai/pages/libraries/python.mdx
Initializes a Conversation object with the ElevenLabs client, agent ID, and custom callbacks for handling responses and transcripts. It then starts a new conversation session.
```python
conversation = Conversation(
    # API client and agent ID.
    elevenlabs,
    agent_id,
    # Assume auth is required when API_KEY is set.
    requires_auth=bool(api_key),
    # Use the default audio interface.
    audio_interface=DefaultAudioInterface(),
    # Simple callbacks that print the conversation to the console.
    callback_agent_response=lambda response: print(f"Agent: {response}"),
    callback_agent_response_correction=lambda original, corrected: print(f"Agent: {original} -> {corrected}"),
    callback_user_transcript=lambda transcript: print(f"User: {transcript}"),
    # Uncomment if you want to see latency measurements.
    # callback_latency_measurement=lambda latency: print(f"Latency: {latency}ms"),
)

conversation.start_session(
    user_id=user_id,  # Optional field.
)
--------------------------------
### CSV with Seconds Time Format
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/product-guides/products/dubbing/dubbing-studio.mdx
Example CSV data where start and end times are specified in seconds with millisecond precision. This format is common for precise timing in audio processing.
```csv
speaker,start_time,end_time,transcription,translation
Adam,"0.10000","1.15000","Hello, how are you?","Hola, ¿cómo estás?"
Adam,"1.50000","3.50000","I'm fine, thank you.","Estoy bien, gracias."
```
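Rows in this format parse directly with Python's standard `csv` module, and the quoted times convert to `float` seconds without extra handling. A minimal sketch over the rows above:

```python
import csv
import io

# The CSV rows above, inlined for the example.
CSV_DATA = """speaker,start_time,end_time,transcription,translation
Adam,"0.10000","1.15000","Hello, how are you?","Hola, ¿cómo estás?"
Adam,"1.50000","3.50000","I'm fine, thank you.","Estoy bien, gracias."
"""

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
# Quoted commas inside transcription/translation are handled by the csv module.
durations = [float(r["end_time"]) - float(r["start_time"]) for r in rows]
```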
--------------------------------
### Install ElevenLabs Provider
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/docs/pages/developer-guides/cookbooks/speech-to-text/vercel-ai-sdk.mdx
Installs the ElevenLabs provider module for the Vercel AI SDK using npm. This is the first step to integrate ElevenLabs transcription capabilities.
```bash
npm install @ai-sdk/elevenlabs
```
--------------------------------
### Configure Assistant System Prompt for Pierogi Palace
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/fern/snippets/conversational-ai-guide-restaurant-agent.mdx
Provides detailed instructions for the AI assistant, defining its persona, tasks, menu knowledge, and operational guidelines for handling customer orders. This prompt guides the assistant's behavior and responses.
```plaintext
You are a friendly and efficient virtual assistant for Pierogi Palace, a modern Polish restaurant specializing in pierogi. It is located in the Zakopane mountains in Poland.
Your role is to help customers place orders over voice conversations. You have comprehensive knowledge of the menu items and their prices.
Menu Items:
- Potato & Cheese Pierogi – 30 Polish złoty per dozen
- Beef & Onion Pierogi – 40 Polish złoty per dozen
- Spinach & Feta Pierogi – 30 Polish złoty per dozen
Your Tasks:
1. Greet the Customer: Start with a warm welcome and ask how you can assist.
2. Take the Order: Listen carefully to the customer's selection, confirm the type and quantity of pierogi.
3. Confirm Order Details: Repeat the order back to the customer for confirmation.
4. Calculate Total Price: Compute the total cost based on the items ordered.
5. Collect Delivery Information: Ask for the customer's delivery address to estimate delivery time.
6. Estimate Delivery Time: Inform the customer that cooking time is 10 minutes plus delivery time based on their location.
7. Provide Order Summary: Give the customer a summary of their order, total price, and estimated delivery time.
8. Close the Conversation: Thank the customer and let them know their order is being prepared.
Guidelines:
- Use a friendly and professional tone throughout the conversation.
- Be patient and attentive to the customer's needs.
- If unsure about any information, politely ask the customer to repeat or clarify.
- Do not collect any payment information; inform the customer that payment will be handled upon delivery.
- Avoid discussing topics unrelated to taking and managing the order.
```
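Step 4 of the prompt (calculating the total price) is simple arithmetic over the menu; a hypothetical helper the backing application might use, with prices per dozen:

```python
# Menu prices per dozen, in Polish zloty (from the prompt above).
MENU = {
    "Potato & Cheese Pierogi": 30,
    "Beef & Onion Pierogi": 40,
    "Spinach & Feta Pierogi": 30,
}

def order_total(order: dict[str, int]) -> int:
    """Total cost of an order given as {menu item: number of dozens}."""
    return sum(MENU[item] * dozens for item, dozens in order.items())

total = order_total({"Potato & Cheese Pierogi": 2, "Beef & Onion Pierogi": 1})
```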
--------------------------------
### Test Snippets
Source: https://github.com/elevenlabs/elevenlabs-docs/blob/main/CLAUDE.md
Executes tests for all code snippets to ensure they are functional and produce the expected output. This helps maintain the integrity of documentation examples.
```shell
pnpm run snippets:test
```