# Deepgram JavaScript SDK (`@deepgram/sdk`)
The Deepgram JavaScript SDK (`@deepgram/sdk`, v5.1.0) is the official TypeScript/JavaScript client for Deepgram's AI speech and language platform. It provides full access to Deepgram's automatic speech recognition (ASR), text-to-speech (TTS), text intelligence (NLU), voice agent, and account management APIs. The SDK is auto-generated via Fern from Deepgram's OpenAPI specification, supports both CommonJS and ESM module formats, and runs across Node.js 18+, Deno, Bun, Cloudflare Workers, Vercel Edge, and modern browsers.
The SDK is organized into versioned API namespaces (`listen.v1`, `listen.v2`, `speak.v1`, `agent.v1`, `read.v1`, `auth.v1`, `manage.v1`, `selfHosted.v1`) accessible from a single `DeepgramClient` instance. Authentication is handled via API key or short-lived access tokens discovered from explicit options or environment variables (`DEEPGRAM_API_KEY`, `DEEPGRAM_ACCESS_TOKEN`). Both REST and WebSocket transports are supported — REST methods return `HttpResponsePromise` objects that support `.withRawResponse()`, while WebSocket connections expose typed event emitters and send helpers.
---
## Installation
```bash
npm install @deepgram/sdk
# or
pnpm add @deepgram/sdk
# or
yarn add @deepgram/sdk
```
---
## Client Initialization — `new DeepgramClient(options?)`
Creates the main entry point to all Deepgram APIs. Credentials are resolved from explicit options first, then the `DEEPGRAM_API_KEY` / `DEEPGRAM_ACCESS_TOKEN` environment variables.
```typescript
import { DeepgramClient, logging } from "@deepgram/sdk";

// API key from environment variable DEEPGRAM_API_KEY
const client = new DeepgramClient();

// Explicit API key
const keyClient = new DeepgramClient({ apiKey: "YOUR_API_KEY" });

// Short-lived access token (recommended for client-side use)
const tokenClient = new DeepgramClient({ accessToken: "YOUR_ACCESS_TOKEN" });

// Custom base URL (e.g. on-prem, staging)
const betaClient = new DeepgramClient({
  apiKey: "YOUR_API_KEY",
  baseUrl: "https://api.beta.deepgram.com",
});

// Browser proxy (required for REST calls from browsers due to CORS)
const proxiedClient = new DeepgramClient({
  apiKey: "proxy",
  baseUrl: "http://localhost:8080",
});

// Advanced: timeouts, retries, custom fetch, logging
const tunedClient = new DeepgramClient({
  apiKey: "YOUR_API_KEY",
  timeoutInSeconds: 120,
  maxRetries: 3,
  logging: {
    level: logging.LogLevel.Debug,
    logger: new logging.ConsoleLogger(),
  },
});
```
---
## Auth — `client.auth.v1.tokens.grant()`
Generates a short-lived JWT (30-second default TTL, `usage::write` scope) for use with voice APIs from a browser or mobile client. Requires an API key with Member or higher authorization. Tokens created here do not work with the Manage APIs.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
// Server-side: exchange long-lived API key for a short-lived token
const serverClient = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
try {
  const tokenResponse = await serverClient.auth.v1.tokens.grant();
  // tokenResponse.access_token: string (JWT)
  console.log("Access token:", tokenResponse.access_token);
  // Use the short-lived token client-side
  const browserClient = new DeepgramClient({ accessToken: tokenResponse.access_token });
  // Now use browserClient for WebSocket connections without exposing your API key
} catch (err) {
  console.error("Failed to grant token:", err);
}
```
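In practice, `grant()` usually sits behind a tiny endpoint on your own backend that the frontend calls to fetch a fresh token. A minimal framework-free sketch (the handler shape and the injected grant function are illustrative; the only real SDK call is `client.auth.v1.tokens.grant()`):

```typescript
// Minimal structural types so the sketch stands alone (no web framework assumed)
type Res = {
  writeHead: (status: number, headers?: Record<string, string>) => void;
  end: (body: string) => void;
};
type GrantFn = () => Promise<{ access_token: string }>;

// Returns a handler you could mount at e.g. GET /api/deepgram-token.
// In a real server, pass () => client.auth.v1.tokens.grant() as `grant`.
function makeTokenEndpoint(grant: GrantFn) {
  return async (res: Res): Promise<void> => {
    try {
      const { access_token } = await grant();
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ access_token }));
    } catch {
      res.writeHead(502, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: "token grant failed" }));
    }
  };
}
```

The frontend then fetches this endpoint and passes the returned `access_token` to `new DeepgramClient({ accessToken })`.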
---
## Pre-recorded Transcription — `client.listen.v1.media.transcribeUrl(request)`
Transcribes audio from a publicly accessible URL via Deepgram's REST Speech-to-Text API. Returns a full transcript with metadata including words, confidence, diarization, and intelligence features.
```typescript
import { DeepgramClient, DeepgramError } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
try {
  const response = await client.listen.v1.media.transcribeUrl({
    url: "https://dpgr.am/spacewalk.wav",
    model: "nova-3",
    language: "en",
    punctuate: true,
    diarize: true,
    smart_format: true,
    paragraphs: true,
    utterances: true,
    sentiment: true,
    summarize: "v2",
    topics: true,
    intents: true,
  });
  const transcript = response.results.channels[0].alternatives[0].transcript;
  console.log("Transcript:", transcript);
  console.log("Words:", response.results.channels[0].alternatives[0].words);
  console.log("Summary:", response.results?.summary?.short);
} catch (err) {
  if (err instanceof DeepgramError) {
    console.error("Deepgram error", err.statusCode, err.message, err.body);
  }
}
```
---
## File Transcription — `client.listen.v1.media.transcribeFile(uploadable, request)`
Transcribes audio from a local file or `Buffer`/`ReadableStream` by sending raw bytes as `application/octet-stream`. Accepts any Node.js `Readable`, `Buffer`, `Blob`, `File`, or `ArrayBuffer`.
```typescript
import { createReadStream } from "fs";
import { DeepgramClient, DeepgramError } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
try {
  const response = await client.listen.v1.media.transcribeFile(
    createReadStream("./audio.wav"),
    {
      model: "nova-3",
      language: "en",
      punctuate: true,
      diarize: true,
      smart_format: true,
      multichannel: true,
      keyterm: ["Deepgram", "nova-3"],
    }
  );
  console.log("Transcript:", response.results.channels[0].alternatives[0].transcript);
  console.log("Duration:", response.metadata.duration);
} catch (err) {
  if (err instanceof DeepgramError) {
    console.error(err.statusCode, err.message);
  }
}
```
---
## Callback Transcription — `client.listen.v1.media.transcribeUrl({ callback, ... })`
Submits a transcription job that delivers results asynchronously to a webhook URL. Returns a request ID immediately; the result is POSTed to your callback endpoint when ready.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const response = await client.listen.v1.media.transcribeUrl({
  url: "https://dpgr.am/spacewalk.wav",
  model: "nova-3",
  language: "en",
  punctuate: true,
  callback: "https://your-server.com/webhooks/deepgram",
  callback_method: "POST",
});
// Response contains only the request ID for async tracking
console.log("Request ID:", response.request_id);
// Your callback endpoint will receive the full transcript payload
```
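On the receiving side, the webhook body has the same shape as a synchronous transcription response. A small sketch of the extraction step (the payload here is a hand-built example shaped like the responses above, not real API output):

```typescript
// Pulls transcripts out of a Deepgram callback payload, which mirrors the
// synchronous response shape: results.channels[n].alternatives[0].transcript
function extractTranscripts(payload: {
  results?: { channels?: { alternatives?: { transcript?: string }[] }[] };
}): string[] {
  return (payload.results?.channels ?? [])
    .map((channel) => channel.alternatives?.[0]?.transcript ?? "")
    .filter((t) => t.length > 0);
}

// In a real server you would JSON.parse the POST body received at your
// callback URL (e.g. https://your-server.com/webhooks/deepgram) and pass it here.
const sample = {
  results: { channels: [{ alternatives: [{ transcript: "hello world" }] }] },
};
console.log(extractTranscripts(sample)[0]); // prints "hello world"
```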
---
## Live Streaming Transcription v1 — `client.listen.v1.connect(params)`
Opens a WebSocket connection to Deepgram's v1 streaming ASR endpoint. Audio chunks are sent as binary frames; transcript results arrive as typed JSON messages via the `message` event.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
import { createReadStream } from "fs";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const connection = await client.listen.v1.connect({
  model: "nova-3",
  language: "en",
  punctuate: "true", // Note: string booleans required in v5
  interim_results: "true",
  smart_format: "true",
  diarize: "true",
  endpointing: "500",
});
connection.on("open", () => console.log("WebSocket open"));
connection.on("message", (data) => {
  if (data.type === "Results") {
    const alt = data.channel?.alternatives?.[0];
    if (alt?.transcript && data.is_final) {
      console.log("Final transcript:", alt.transcript);
    } else if (alt?.transcript) {
      process.stdout.write(`Interim: ${alt.transcript}\r`);
    }
  } else if (data.type === "Metadata") {
    console.log("Request ID:", data.request_id);
  }
});
connection.on("close", (event) => console.log("Closed:", event.code));
connection.on("error", (err) => console.error("Error:", err));
connection.connect();
await connection.waitForOpen();
// Stream audio data (e.g. from a microphone or file)
const stream = createReadStream("./audio.wav");
stream.on("data", (chunk: Buffer) => connection.socket.send(chunk));
stream.on("end", () => {
  // Signal end of audio
  connection.socket.send(JSON.stringify({ type: "CloseStream" }));
});
```
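Deepgram closes live connections that receive no audio for roughly 10 seconds, so long-lived sessions typically send periodic `KeepAlive` text frames between audio chunks. A minimal helper (the 5-second default interval and the `send` callback wiring are illustrative, not part of the SDK):

```typescript
// The KeepAlive control message understood by Deepgram live transcription
function keepAliveMessage(): string {
  return JSON.stringify({ type: "KeepAlive" });
}

// Sends a KeepAlive on a fixed interval; returns a function that stops the timer.
// Wire it up with e.g. startKeepAlive((msg) => connection.socket.send(msg)).
function startKeepAlive(send: (msg: string) => void, intervalMs = 5000): () => void {
  const timer = setInterval(() => send(keepAliveMessage()), intervalMs);
  return () => clearInterval(timer);
}

// Usage sketch:
// const stopKeepAlive = startKeepAlive((msg) => connection.socket.send(msg));
// ...when ending the session...
// stopKeepAlive();
```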
---
## Live Streaming Transcription v2 — `client.listen.v2.connect(params)`
Opens a WebSocket to Deepgram's next-generation streaming endpoint with turn-based transcription, real-time configuration updates, and explicit stream lifecycle control.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const connection = await client.listen.v2.connect({
  model: "flux-general-en",
});
connection.on("open", () => console.log("v2 WebSocket open"));
connection.on("message", (data) => {
  if (data.type === "Connected") {
    console.log("Connected, session:", data);
    // Send optional runtime configuration
    connection.sendConfigure({
      type: "Configure",
      processors: {
        asr: { model: "flux-general-en", punctuate: true },
      },
    });
  } else if (data.type === "TurnInfo") {
    console.log("Turn transcript:", data.transcript);
    console.log("Turn duration:", data.duration_secs);
  } else if (data.type === "ConfigureSuccess") {
    console.log("Configuration applied");
  } else if (data.type === "FatalError") {
    console.error("Fatal error:", data.description);
  }
});
connection.on("close", (event) => console.log("Closed:", event.code));
connection.connect();
await connection.waitForOpen();
// Stream audio binary
const audioBuffer: ArrayBuffer = await getAudioBuffer();
connection.sendMedia(audioBuffer);
// Gracefully close the stream
connection.sendCloseStream({ type: "CloseStream" });
```
---
## Text-to-Speech REST — `client.speak.v1.audio.generate(request)`
Converts text to natural-sounding audio using Deepgram's Aura TTS REST API. Returns a binary audio stream that can be piped directly to a file or HTTP response.
```typescript
import { DeepgramClient, DeepgramError } from "@deepgram/sdk";
import { createWriteStream } from "fs";
import { pipeline } from "stream/promises";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
try {
  const response = await client.speak.v1.audio.generate({
    text: "Hello! This is a sample text-to-speech conversion using Deepgram's Aura model.",
    model: "aura-2-thalia-en",
    encoding: "linear16",
    container: "wav",
    sample_rate: 24000,
  });
  // Stream audio to a file
  const fileStream = createWriteStream("output.wav");
  await pipeline(response.stream(), fileStream);
  console.log("Audio saved to output.wav");
  // Or access raw bytes instead (the response body can be consumed only once,
  // so use either stream() or arrayBuffer(), not both):
  // const audioBuffer = await response.arrayBuffer();
  // console.log("Audio size:", audioBuffer.byteLength, "bytes");
} catch (err) {
  if (err instanceof DeepgramError) {
    console.error(err.statusCode, err.message);
  }
}
```
---
## Text-to-Speech Streaming — `client.speak.v1.connect(args)`
Streams synthesized speech over a WebSocket, allowing low-latency audio generation as text is sent incrementally. Audio chunks arrive as base64-encoded strings in the `message` event.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
import { createWriteStream } from "fs";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const connection = await client.speak.v1.connect({
  model: "aura-2-thalia-en",
  encoding: "linear16",
  sample_rate: 24000,
  Authorization: `token ${process.env.DEEPGRAM_API_KEY}`,
});
// linear16 over the WebSocket is headerless PCM, so write to a .raw file
// (prepend a WAV header yourself if you need a playable .wav)
const outputFile = createWriteStream("output.raw");
connection.on("open", () => console.log("TTS WebSocket open"));
connection.on("message", (data) => {
  if (typeof data === "string") {
    // Audio arrives as base64-encoded binary
    const audioBuffer = Buffer.from(data, "base64");
    outputFile.write(audioBuffer);
  }
});
connection.on("close", () => {
  outputFile.end();
  console.log("TTS complete, audio saved");
});
connection.connect();
await connection.waitForOpen();
// Send text in segments for streaming synthesis
connection.sendSpeakV1Text({ type: "Text", text: "Hello, " });
connection.sendSpeakV1Text({ type: "Text", text: "this is streaming text-to-speech." });
connection.sendSpeakV1Flush({ type: "Flush" }); // Flush the synthesis buffer
// Keep alive to prevent timeout
connection.sendSpeakV1KeepAlive({ type: "KeepAlive" });
```
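One caveat with the streaming example: `linear16` frames are headerless PCM, so bytes written straight to disk are not a playable `.wav` until a RIFF header is prepended. A minimal 16-bit PCM header builder (defaults match the 24 kHz mono settings above; verify them against your actual encoding):

```typescript
// Builds a 44-byte RIFF/WAVE header for 16-bit PCM (linear16) audio.
function wavHeader(dataLength: number, sampleRate = 24000, channels = 1): Buffer {
  const blockAlign = channels * 2;       // bytes per sample frame (16-bit samples)
  const byteRate = sampleRate * blockAlign;
  const h = Buffer.alloc(44);
  h.write("RIFF", 0);
  h.writeUInt32LE(36 + dataLength, 4);   // total file size minus 8 bytes
  h.write("WAVE", 8);
  h.write("fmt ", 12);
  h.writeUInt32LE(16, 16);               // fmt chunk size
  h.writeUInt16LE(1, 20);                // audio format: PCM
  h.writeUInt16LE(channels, 22);
  h.writeUInt32LE(sampleRate, 24);
  h.writeUInt32LE(byteRate, 28);
  h.writeUInt16LE(blockAlign, 32);
  h.writeUInt16LE(16, 34);               // bits per sample
  h.write("data", 36);
  h.writeUInt32LE(dataLength, 40);       // PCM payload length
  return h;
}

// Usage sketch: collect the PCM chunks during the session, then save
// Buffer.concat([wavHeader(pcm.length, 24000, 1), pcm]) as a .wav file.
```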
---
## Text Intelligence — `client.read.v1.text.analyze(request)`
Analyzes text or a URL for NLU features: sentiment analysis, topic detection, intent detection, summarization, and named entity recognition. Text and URL analysis use the same `analyze()` method.
```typescript
import { DeepgramClient, DeepgramError } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
try {
  // Analyze raw text
  const textResponse = await client.read.v1.text.analyze({
    body: { text: "I absolutely love this product! It works perfectly and shipping was super fast." },
    language: "en",
    sentiment: true,
    topics: true,
    intents: true,
    summarize: "v2",
    custom_topic: "customer_feedback",
    custom_topic_mode: "extended",
  });
  console.log("Sentiment:", textResponse.results?.sentiments?.segments?.[0]?.sentiment);
  console.log("Topics:", textResponse.results?.topics?.segments?.[0]?.topics);
  console.log("Summary:", textResponse.results?.summary?.short);
  // Analyze a URL
  const urlResponse = await client.read.v1.text.analyze({
    body: { url: "https://example.com/blog-article" },
    language: "en",
    sentiment: true,
    topics: true,
    intents: true,
  });
  console.log("URL topics:", urlResponse.results?.topics?.segments);
  // With async callback
  const callbackResponse = await client.read.v1.text.analyze({
    body: { text: "Long article text here..." },
    callback: "https://your-server.com/webhooks/read",
    topics: true,
    sentiment: true,
  });
  console.log("Async request ID:", callbackResponse.request_id);
} catch (err) {
  if (err instanceof DeepgramError) {
    console.error(err.statusCode, err.message);
  }
}
```
---
## Voice Agent — `client.agent.v1.connect()`
Opens a full-duplex WebSocket for conversational AI with configurable STT, LLM, and TTS providers. The agent handles turn detection, natural conversation flow, and function calling.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const connection = await client.agent.v1.connect();
connection.on("open", () => console.log("Agent connected"));
connection.on("message", (data) => {
  if (data.type === "Welcome") {
    console.log("Session ID:", data.session_id);
    // Configure agent after welcome
    connection.sendSettings({
      type: "Settings",
      audio: {
        input: { encoding: "linear16", sample_rate: 16000 },
        output: { encoding: "linear16", sample_rate: 24000, container: "wav" },
      },
      agent: {
        language: "en",
        listen: { provider: { type: "deepgram", model: "nova-3" } },
        think: {
          provider: { type: "open_ai", model: "gpt-4o-mini" },
          prompt: "You are a friendly customer service assistant for an e-commerce store.",
          functions: [
            {
              name: "get_order_status",
              description: "Look up the status of a customer order",
              parameters: {
                type: "object",
                properties: {
                  order_id: { type: "string", description: "The order ID" },
                },
                required: ["order_id"],
              },
            },
          ],
        },
        speak: { provider: { type: "deepgram", model: "aura-2-thalia-en" } },
        greeting: "Hello! How can I help you today?",
      },
    });
  } else if (data.type === "SettingsApplied") {
    console.log("Agent configured, ready to receive audio");
  } else if (data.type === "ConversationText") {
    console.log(`[${data.role}]: ${data.content}`);
  } else if (data.type === "AgentStartedSpeaking") {
    console.log("Agent is speaking...");
  } else if (data.type === "AgentAudioDone") {
    console.log("Agent finished speaking");
  } else if (data.type === "FunctionCallRequest") {
    console.log("Agent wants to call:", data.function_name, data.input);
    // Execute the function and return result
    connection.sendFunctionCallResponse({
      type: "FunctionCallResponse",
      function_call_id: data.function_call_id,
      output: JSON.stringify({ status: "shipped", delivery_date: "2024-01-15" }),
    });
  } else if (data.type === "UserStartedSpeaking") {
    console.log("User is speaking...");
  }
});
connection.on("close", (event) => console.log("Agent closed:", event.code));
connection.on("error", (err) => console.error("Agent error:", err));
connection.connect();
await connection.waitForOpen();
// Send audio from microphone
const micStream = getMicrophoneStream(); // your audio source
micStream.on("data", (chunk: Buffer) => connection.sendMedia(chunk));
// Dynamically update the agent's prompt mid-conversation
connection.sendUpdatePrompt({
  type: "UpdatePrompt",
  prompt: "You are now speaking with a VIP customer. Be extra attentive.",
});
// Inject a message as the agent
connection.sendInjectAgentMessage({
  type: "InjectAgentMessage",
  message: "Just so you know, we have a 20% sale running this weekend!",
});
// Keep connection alive
connection.sendKeepAlive({ type: "KeepAlive" });
```
---
## Agent Settings — `connection.sendAgentV1Settings(settings)` / v5 alias `sendSettings`
Configures or reconfigures the voice agent's STT, LLM, and TTS providers, audio format, custom functions, and greeting message. Can be called at any time after `SettingsApplied` to update the agent's behavior.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const connection = await client.agent.v1.connect();
connection.connect();
await connection.waitForOpen();
// Using the raw v5 method name as exposed through the socket
connection.sendAgentV1Settings({
  type: "Settings",
  audio: {
    input: { encoding: "mulaw", sample_rate: 8000 }, // telephony format
    output: { encoding: "mulaw", sample_rate: 8000, container: "none" },
  },
  agent: {
    language: "en",
    listen: { provider: { type: "deepgram", model: "nova-3-phonecall" } },
    think: {
      provider: { type: "anthropic", model: "claude-3-haiku-20240307" },
      prompt: "You are a concise phone assistant. Keep responses under 30 words.",
    },
    speak: { provider: { type: "deepgram", model: "aura-2-zeus-en" } },
    greeting: "Thanks for calling. How can I help?",
  },
});
```
---
## Project Management — `client.manage.v1.projects.*`
CRUD operations for Deepgram projects, providing list, get, update, delete, and leave capabilities.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
// List all projects
const projects = await client.manage.v1.projects.list();
console.log("Projects:", projects.projects);
const projectId = projects.projects[0].project_id;
// Get a specific project
const project = await client.manage.v1.projects.get(projectId);
console.log("Project name:", project.name);
// Update a project
const updated = await client.manage.v1.projects.update(projectId, {
  name: "My Updated Project",
});
console.log("Updated:", updated.message);
// Leave a project (remove self as member)
await client.manage.v1.projects.leave(projectId);
// Delete a project (requires Owner role)
await client.manage.v1.projects.delete(projectId);
```
---
## API Key Management — `client.manage.v1.projects.keys.*`
Create and manage Deepgram API keys for a project with specific scopes and expiration settings.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const projectId = "123456-7890-1234-5678-901234";
// List all keys
const keys = await client.manage.v1.projects.keys.list(projectId);
console.log("Keys:", keys.api_keys);
// Create a new key
const newKey = await client.manage.v1.projects.keys.create(projectId, {
  comment: "CI/CD Pipeline Key",
  scopes: ["usage:read", "usage:write"],
  // Set either time_to_live_in_seconds or expiration_date, not both
  time_to_live_in_seconds: 86400,
});
console.log("New key:", newKey.key); // Only shown once at creation
// Get a specific key
const key = await client.manage.v1.projects.keys.get(projectId, newKey.api_key_id);
console.log("Key scopes:", key.scopes);
// Delete a key
await client.manage.v1.projects.keys.delete(projectId, newKey.api_key_id);
```
---
## Member Management — `client.manage.v1.projects.members.*`
Manage project members, their permission scopes, and team invitations.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const projectId = "123456-7890-1234-5678-901234";
// List members
const members = await client.manage.v1.projects.members.list(projectId);
console.log("Members:", members.members);
// Get member scopes
const memberId = members.members[0].member_id;
const scopes = await client.manage.v1.projects.members.scopes.list(projectId, memberId);
console.log("Scopes:", scopes.scopes);
// Update member scopes
await client.manage.v1.projects.members.scopes.update(projectId, memberId, {
  scope: "member",
});
// Invite a new member
const invite = await client.manage.v1.projects.members.invites.create(projectId, {
  email: "newuser@example.com",
  scope: "member",
});
console.log("Invite sent:", invite.message);
// List pending invitations
const invites = await client.manage.v1.projects.members.invites.list(projectId);
console.log("Pending invites:", invites.invites);
// Delete an invitation
await client.manage.v1.projects.members.invites.delete(projectId, "newuser@example.com");
// Remove a member
await client.manage.v1.projects.members.delete(projectId, memberId);
```
---
## Usage & Billing — `client.manage.v1.projects.usage.*` / `.billing.*`
Query API usage statistics, request logs, billing balances, and cost breakdowns for a project.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const projectId = "123456-7890-1234-5678-901234";
// Get usage summary for a time range
const usage = await client.manage.v1.projects.usage.get(projectId, {
  start: "2024-01-01",
  end: "2024-01-31",
});
console.log("Total hours:", usage.results?.hours);
// Get usage breakdown by model/feature
const breakdown = await client.manage.v1.projects.usage.breakdown.get(projectId, {
  start: "2024-01-01",
  end: "2024-01-31",
});
console.log("Breakdown:", breakdown.results);
// List individual requests
const requests = await client.manage.v1.projects.requests.list(projectId, {
  start: "2024-01-01",
  end: "2024-01-31",
  limit: 10,
  page: 0,
});
console.log("Requests:", requests.requests);
// Get a specific request
const request = await client.manage.v1.projects.requests.get(projectId, requests.requests[0].request_id);
console.log("Request details:", request);
// Get usage fields (available models/features for filtering)
const fields = await client.manage.v1.projects.usage.fields.list(projectId, {
  start: "2024-01-01",
  end: "2024-01-31",
});
console.log("Available models:", fields.models);
// Get account balances
const balances = await client.manage.v1.projects.billing.balances.list(projectId);
console.log("Balance:", balances.balances?.[0]?.amount);
// Get billing breakdown
const billingBreakdown = await client.manage.v1.projects.billing.breakdown.list(projectId, {
  start: "2024-01-01",
  end: "2024-01-31",
});
console.log("Billing breakdown:", billingBreakdown);
```
---
## Model Listing — `client.manage.v1.models.list()` / `.projects.models.*`
Retrieve available Deepgram models, either globally or scoped to a specific project.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const projectId = "123456-7890-1234-5678-901234";
// List all globally available models
const allModels = await client.manage.v1.models.list();
console.log("Available models:", allModels.stt, allModels.tts);
// List models available for a specific project
const projectModels = await client.manage.v1.projects.models.list(projectId);
console.log("Project models:", projectModels.stt);
// Get a specific model for a project
const modelId = projectModels.stt[0].model_id;
const model = await client.manage.v1.projects.models.get(projectId, modelId);
console.log("Model details:", model.name, model.version, model.architecture);
```
---
## Self-Hosted Distribution Credentials — `client.selfHosted.v1.distributionCredentials.*`
Manage container registry credentials for deploying Deepgram on-premises/self-hosted instances.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const projectId = "123456-7890-1234-5678-901234";
// List existing distribution credentials
const credsList = await client.selfHosted.v1.distributionCredentials.list(projectId);
console.log("Credentials:", credsList.distribution_credentials);
// Create new credentials
const newCreds = await client.selfHosted.v1.distributionCredentials.create(projectId, {
  comment: "Production on-prem deployment",
  scopes: ["self-hosted:products"],
  provider: "quay",
});
console.log("Credential ID:", newCreds.distribution_credentials_id);
// Get specific credentials
const cred = await client.selfHosted.v1.distributionCredentials.get(
  projectId,
  newCreds.distribution_credentials_id
);
console.log("Provider:", cred.provider, "Scopes:", cred.scopes);
// Delete credentials
await client.selfHosted.v1.distributionCredentials.delete(
  projectId,
  newCreds.distribution_credentials_id
);
```
---
## Error Handling — `DeepgramError`
All API errors throw `DeepgramError` with `statusCode`, `message`, and `body` properties. Non-HTTP errors (network failures, timeouts) also throw `DeepgramError`.
```typescript
import { DeepgramClient, DeepgramError } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
try {
  const response = await client.listen.v1.media.transcribeUrl({
    url: "https://invalid-url.example.com/audio.wav",
    model: "nova-3",
  });
  console.log(response.results.channels[0].alternatives[0].transcript);
} catch (err) {
  if (err instanceof DeepgramError) {
    console.error("Status:", err.statusCode); // e.g. 400, 401, 429, 500
    console.error("Message:", err.message);
    console.error("Body:", err.body); // raw error response body
    console.error("Raw response:", err.rawResponse); // underlying Response object
  } else {
    // Re-throw unexpected errors
    throw err;
  }
}
```
---
## Raw Response Access — `.withRawResponse()`
Access the underlying HTTP response headers and status for any REST API call.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: process.env.DEEPGRAM_API_KEY });
const { data, rawResponse } = await client.listen.v1.media
  .transcribeUrl({ url: "https://dpgr.am/spacewalk.wav", model: "nova-3" })
  .withRawResponse();
console.log("Transcript:", data.results.channels[0].alternatives[0].transcript);
console.log("Request ID header:", rawResponse.headers.get("dg-request-id"));
console.log("HTTP status:", rawResponse.status);
```
---
## TypeScript Type Imports
All request/response types are exported directly or via the `Deepgram` namespace for full type safety.
```typescript
import { DeepgramClient } from "@deepgram/sdk";
import type {
  ListenV1Response,
  SpeakV1Response,
  ReadV1Response,
  GrantV1Response,
  GetProjectV1Response,
  ListProjectsV1Response,
} from "@deepgram/sdk";
// Or via namespace:
import type { Deepgram } from "@deepgram/sdk";
const client = new DeepgramClient({ apiKey: "YOUR_API_KEY" });
const response: ListenV1Response = await client.listen.v1.media.transcribeUrl({
  url: "https://dpgr.am/spacewalk.wav",
  model: "nova-3",
});
// Namespace pattern
const ns: Deepgram.ListenV1Response = response;
```
---
## Browser Usage via CDN
```html
<!-- Sketch only: the package can be served from any npm CDN (jsDelivr shown
     here); verify the exact URL and build against Deepgram's current docs. -->
<script type="module">
  import { DeepgramClient } from "https://cdn.jsdelivr.net/npm/@deepgram/sdk/+esm";
  // In the browser, use a short-lived access token, never a long-lived API key
  const client = new DeepgramClient({ accessToken: "YOUR_ACCESS_TOKEN" });
</script>
```
---
## Summary
The Deepgram JavaScript SDK is the complete client for building speech-enabled applications in TypeScript and JavaScript. Its primary use cases span real-time communication (live transcription with `listen.v1` and `listen.v2` WebSockets), media processing pipelines (batch transcription via `listen.v1.media`), AI assistants (voice agents via `agent.v1`), content analysis (NLU via `read.v1.text`), audio content generation (TTS via `speak.v1`), and full account lifecycle management (projects, keys, members, billing, and usage via `manage.v1`). The v5 SDK unifies all these capabilities under a single `DeepgramClient` with consistent error handling, TypeScript types, and per-request configuration options.
Integration follows a straightforward pattern: instantiate one `DeepgramClient` per application (or per short-lived token in browser contexts), use the versioned namespace to call any REST endpoint or establish any WebSocket connection, and handle errors with standard `try/catch`. For server-side pipelines, the client works directly with Node.js streams and file descriptors. For browser or mobile use, pair the `auth.v1.tokens.grant()` endpoint on your backend with the `accessToken` option on the frontend to avoid exposing long-lived API keys. The SDK's runtime compatibility (Node.js, Deno, Bun, Cloudflare Workers, Vercel Edge, React Native) means the same code runs everywhere without polyfills.