### Cloning and Installing AI Pipe Project - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

These `bash` commands are used to clone the AI Pipe GitHub repository, navigate into the project directory, and install its Node.js dependencies using `npm`. This is the initial step for setting up the project locally.

```bash
git clone https://github.com/sanand0/aipipe.git
cd aipipe
npm install
```

--------------------------------

### Example Response for All User Usage History - JSONC

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSONC snippet shows the expected response format for the `GET /admin/usage` endpoint. It contains an array of data objects, each detailing a user's email, date, and associated cost.

```jsonc
{
  "data": [
    { "email": "test@example.com", "date": "2025-04-18", "cost": 25.5 }
    // ...
  ]
}
```

--------------------------------

### Example OpenRouter Models List Response (JSON)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON object shows an example response when listing available OpenRouter models via the AI Pipe proxy. It contains an array of model objects, each with an `id` and `name`, among other details.

```json
{
  "data": [
    {
      "id": "google/gemini-2.5-pro-preview-03-25",
      "name": "Google: Gemini 2.5 Pro Preview"
      // ...
    }
  ]
}
```

--------------------------------

### Testing Local AI Pipe Deployment - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

These `bash` commands are used to test a local AI Pipe deployment. They start the development server, run tests with a specified admin email, and query the `/usage` endpoint to verify the service is running and accessible.

```bash
npm run dev  # Runs at http://localhost:8787
ADMIN_EMAILS=admin@example.com npm test
curl http://localhost:8787/usage -H "Authorization: $AIPIPE_TOKEN"
```

--------------------------------

### Authenticating and Calling OpenAI Responses API with AI Pipe (JavaScript)

Source: https://github.com/sanand0/aipipe/blob/main/public/index.html

This snippet illustrates how to authenticate a user and interact with OpenAI models using AI Pipe's `/openrouter/v1/responses` endpoint. Similar to the chat completions example, it manages user authentication and then proxies a POST request to the provider's API, providing a unified interface for different LLM providers.

```JavaScript
import { getProfile } from "https://aipipe.org/aipipe.js";

const { token, email } = getProfile();
if (!token) window.location = `https://aipipe.org/login?redirect=${window.location.href}`;

const response = await fetch("https://aipipe.org/openrouter/v1/responses", {
  method: "POST",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    "model": "openai/gpt-4.1-nano",
    "input": "What is 2 + 2?"
  })
}).then(r => r.json());
```

--------------------------------

### Example Response for Overwriting User Cost - JSON

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON snippet shows the expected response format for the `POST /admin/cost` endpoint. It confirms that the cost for the specified user and date has been successfully updated.

```json
{ "message": "Cost for user@example.com on 2025-04-18 set to 1.23" }
```

--------------------------------

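A response like the one above is returned after POSTing the user's email, date, and new cost to the `/admin/cost` endpoint (the full `curl` example appears further down in this list). Here is a minimal JavaScript sketch of the same call, assuming `adminToken` is a token for one of the `ADMIN_EMAILS` users; the `setUserCost` helper name is illustrative, not part of AI Pipe.

```JavaScript
// Hypothetical helper (name and signature are illustrative, not part of AI Pipe):
// overwrite a user's cumulative cost for one day via POST /admin/cost.
async function setUserCost(adminToken, email, date, cost) {
  const res = await fetch("https://aipipe.org/admin/cost", {
    method: "POST",
    headers: { Authorization: `Bearer ${adminToken}`, "Content-Type": "application/json" },
    body: JSON.stringify({ email, date, cost }),
  });
  return res.json(); // e.g. { "message": "Cost for user@example.com on 2025-04-18 set to 1.23" }
}
```

--------------------------------
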
### Listing OpenRouter Models via AI Pipe (Bash)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command demonstrates how to list available models from OpenRouter by proxying the request through AI Pipe. It sends a GET request to the `/openrouter/v1/models` endpoint, authenticated with the AI Pipe Token.

```bash
curl https://aipipe.org/openrouter/v1/models -H "Authorization: $AIPIPE_TOKEN"
```

--------------------------------

### Example AI Pipe Usage Data Response (JSON)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON object illustrates the typical response structure when querying user usage data from the AI Pipe service. It includes details such as the user's email, total cost, usage over a specified period, and any applicable spending limits.

```json
{
  "email": "user@example.com",
  "days": 7,
  "cost": 0.000137,
  "usage": [{ "date": "2025-04-16", "cost": 0.000137 }],
  "limit": 0.1
}
```

--------------------------------

### Example Response for Generated JWT Token - JSON

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON snippet shows the expected response format for the `GET /admin/token` endpoint. It contains a single `token` field with the generated JWT string.

```json
{ "token": "eyJhbGciOiJIUzI1NiI..." }
```

--------------------------------

### Example OpenRouter Chat Completion Response (JSON)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON object illustrates a typical response from an OpenRouter chat completion request made via the AI Pipe proxy. It includes details such as a unique ID, the provider, the model used, and the object type, indicating a successful completion.

```json
{
  "id": "gen-...",
  "provider": "Google",
  "model": "google/gemini-2.0-flash-lite-001",
  "object": "chat.completion"
  // ...
}
```

--------------------------------

### Retrieving User Usage Data with AI Pipe (Bash)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command demonstrates how to retrieve usage data for a user from the AI Pipe service. It sends a GET request to the `/usage` endpoint, authenticating with the AI Pipe Token in the Authorization header.

```bash
curl https://aipipe.org/usage -H "Authorization: $AIPIPE_TOKEN"
```

--------------------------------

### Initializing AI Pipe and User Authentication Check in JavaScript

Source: https://github.com/sanand0/aipipe/blob/main/public/playground.html

This snippet initializes the AI Pipe application by importing necessary utility functions, retrieving user profile information (token and email), and performing an authentication check. If no token is found, the user is redirected to a login page to ensure secure access to the AI services.

```JavaScript
import { getProfile } from "./aipipe.js";
import { showUsage } from "./usage.js";

const { token, email } = getProfile();
if (!token) window.location = `login?redirect=${window.location.href}`;
```

--------------------------------

### Displaying Initial Usage Information in JavaScript

Source: https://github.com/sanand0/aipipe/blob/main/public/playground.html

This snippet calls the `showUsage` function to display the current usage information immediately upon the script's execution or page load. It uses the previously retrieved token and email to fetch and present the user's AI Pipe usage statistics.

```JavaScript
await showUsage($usage, token, email);
```

--------------------------------

### Making a Chat Completion Request with uvx openai (Bash)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This command demonstrates how to make a chat completion request using the `uvx openai` CLI tool. It sends a user message 'Hello' to the 'gpt-4.1-nano' model, leveraging the previously configured AI Pipe proxy.

```bash
uvx openai api chat.completions.create -m gpt-4.1-nano -g user "Hello"
```

--------------------------------

### Listing OpenAI Models via AI Pipe Proxy - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command demonstrates how to list available OpenAI models by proxying the request through the AI Pipe service. It requires an `Authorization` header with a valid `$AIPIPE_TOKEN`.

```bash
curl https://aipipe.org/openai/v1/models -H "Authorization: $AIPIPE_TOKEN"
```

--------------------------------

### Running Specific AI Pipe Tests - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `bash` command demonstrates how to execute a subset of AI Pipe's tests using `npm test` with the `--grep` option. This allows developers to focus on specific test suites, such as those related to OpenAI functionality.

```bash
npm test -- --grep 'OpenAI'
```

--------------------------------

### Deploying and Testing AI Pipe on Cloudflare - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

These `bash` commands facilitate the deployment of AI Pipe to Cloudflare Workers and subsequent testing. `npm run deploy` pushes the application, while the following `npm test` command verifies the deployed instance using a specified base URL and admin email.

```bash
npm run deploy

# Test
BASE_URL=https://aipipe.org ADMIN_EMAILS=admin@example.com npm test
```

--------------------------------

### Authenticating and Calling OpenRouter Chat Completions API with AI Pipe (JavaScript)

Source: https://github.com/sanand0/aipipe/blob/main/public/index.html

This snippet demonstrates how to authenticate a user and make an API call to OpenRouter's chat completions endpoint via AI Pipe. It first retrieves the user's profile token, redirects to login if no token is found, and then uses the token to authorize a POST request to the `/openrouter/v1/chat/completions` endpoint. The `getProfile()` function handles token retrieval and storage.

```JavaScript
import { getProfile } from "https://aipipe.org/aipipe.js";

const { token, email } = getProfile();
if (!token) window.location = `https://aipipe.org/login?redirect=${window.location.href}`;

const response = await fetch("https://aipipe.org/openrouter/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    "model": "openai/gpt-4.1-nano",
    "messages": [{ "role": "user", "content": "What is 2 + 2?" }]
  })
}).then(r => r.json());
```

--------------------------------

### Creating Development Environment Variables for AI Pipe - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `bash` snippet shows how to create the `.dev.vars` file, which stores sensitive environment variables for AI Pipe's local development. It includes generating an `AIPIPE_SECRET`, defining `ADMIN_EMAILS`, and setting API keys for OpenRouter and OpenAI.

```bash
# Required: Your JWT signing key
AIPIPE_SECRET=$(openssl rand -base64 12)

# Optional: add email IDs of admin users separated by comma and/or whitespace.
ADMIN_EMAILS="admin@example.com, admin2@example.com, ..."

# Optional: Add only the APIs you need
OPENROUTER_API_KEY=sk-or-v1-...
OPENAI_API_KEY=sk-...
```

--------------------------------

### Making a Chat Completion Request with uvx llm (Bash)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This command provides an alternative way to make a chat completion request using the `uvx llm` CLI tool.
It sends a user message 'Hello' to the 'gpt-4o-mini' model, directly passing the AI Pipe Token as an argument.

```bash
uvx llm 'Hello' -m gpt-4o-mini --key $AIPIPE_TOKEN
```

--------------------------------

### Defining Cost Table Schema - SQL

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This SQL `CREATE TABLE` statement defines the schema for the `cost` table, used to track daily cumulative cost per user. It includes columns for `email`, `date` (in YYYY-MM-DD UTC format), and `cost` (a number), with a composite primary key on `email` and `date`.

```sql
CREATE TABLE cost (
  email TEXT,   -- User's email address
  date TEXT,    -- YYYY-MM-DD in UTC
  cost NUMBER,  -- Cumulative cost for the day
  PRIMARY KEY (email, date)
);
```

--------------------------------

### Implementing LLM Provider Interface - JavaScript

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JavaScript object literal defines the required interface for each LLM provider in `src/providers.js`. It specifies a `base` URL for proxying, a `key` for the API key environment variable, and an asynchronous `cost` function to calculate request costs based on model and usage.

```javascript
{
  base: "https://api.provider.com", // Base URL to proxy to
  key: "PROVIDER_API_KEY",          // Environment variable with API key
  cost: async ({ model, usage }) => {
    // Calculate cost for a request
    return { cost: /* Calculate cost based on prompt & completion tokens */ }
  }
}
```

--------------------------------

### Configuring Budget and Security in AI Pipe - JavaScript

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JavaScript snippet from `src/config.js` defines budget limits for different users or domains and a `salt` object for invalidating user tokens. It allows administrators to control API usage and enhance security by invalidating stolen keys.

```javascript
// Set a budget limit for specific email IDs or domains
const budget = {
  "*": { limit: 0.1, days: 7 },                  // Default fallback: low limits for unknown users. Use 0.001 to limit to free models.
  "blocked@example.com": { limit: 0, days: 1 },  // Blocked user: zero limit stops all operations
  "user@example.com": { limit: 10.0, days: 30 }, // Premium user with monthly high-volume allocation
  "@example.com": { limit: 1.0, days: 7 }        // Domain-wide policy: moderate weekly quota for organization
};

// If a user reports their key as stolen, add/change their salt to new random text.
// That will invalidate their token.
const salt = { "user@example.com": "random-text" };
```

--------------------------------

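The comments above suggest that matching goes from the most specific key to the least specific one. As a rough mental model only (an illustrative sketch, not the project's actual `src/` lookup code; `budgetFor` is a hypothetical helper), a per-user entry would take precedence over a domain-wide entry, falling back to `"*"`:

```javascript
// Illustrative only: resolve the budget entry for a user from the map above,
// preferring an exact email key, then a domain key, then the "*" fallback.
const budgetFor = (email, budget) =>
  budget[email] ??                            // exact email match, e.g. "user@example.com"
  budget[email.slice(email.indexOf("@"))] ??  // domain-wide policy, e.g. "@example.com"
  budget["*"];                                // default fallback

// budgetFor("user@example.com", budget)    -> { limit: 10.0, days: 30 }
// budgetFor("someone@example.com", budget) -> { limit: 1.0, days: 7 }
// budgetFor("other@other.org", budget)     -> { limit: 0.1, days: 7 }
```

--------------------------------
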
### Handling AI Playground Form Submission and Chat Completions API Call in JavaScript

Source: https://github.com/sanand0/aipipe/blob/main/public/playground.html

This snippet attaches an event listener to the 'playground-form'. Upon submission, it prevents the default form behavior, retrieves the selected model and user prompt, and displays a 'Generating...' message. It then makes an asynchronous POST request to the OpenRouter chat completions API, handling both successful responses and errors, and finally updates the usage display.

```JavaScript
const $usage = document.querySelector("#usage");

document.getElementById('playground-form').addEventListener('submit', async (e) => {
  e.preventDefault();
  const model = document.getElementById('model').value;
  const prompt = document.getElementById('prompt').value;
  const responseEl = document.getElementById('response');
  responseEl.textContent = 'Generating...';

  try {
    const response = await fetch("openrouter/v1/chat/completions", {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({
        "model": model,
        "max_tokens": 1000,
        "messages": [{ "role": "user", "content": prompt }]
      })
    }).then(r => r.json());
    responseEl.textContent = JSON.stringify(response, null, 2);
  } catch (error) {
    responseEl.textContent = `Error: ${error.message}`;
  }

  await showUsage($usage, token, email);
});
```

--------------------------------

### Configuring OpenAI API for AI Pipe Token (Bash)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This snippet sets environment variables to configure an OpenAI-API-compatible application to use the AI Pipe Token. It directs API requests to the AI Pipe proxy endpoint, allowing backend-less access to LLM APIs.

```bash
export OPENAI_API_KEY=$AIPIPE_TOKEN
export OPENAI_BASE_URL=https://aipipe.org/openai/v1
```

--------------------------------

### Making OpenRouter Chat Completion via AI Pipe (Bash)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command demonstrates how to make a chat completion request to OpenRouter, proxied through AI Pipe. It sends a POST request with a JSON payload specifying the model and user message, authenticated with the AI Pipe Token.

```bash
curl https://aipipe.org/openrouter/v1/chat/completions -H "Authorization: $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "google/gemini-2.0-flash-lite-001", "messages": [{ "role": "user", "content": "What is 2 + 2?" }] }'
```

--------------------------------

### Client-Side LLM API Call with AI Pipe (HTML/JavaScript)

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This HTML snippet demonstrates how to make a client-side LLM API call using AI Pipe. It first retrieves the user's AI Pipe token, redirects for login if necessary, and then uses the token to make a chat completion request to the OpenRouter API via the AI Pipe proxy.

```html
<script type="module">
  import { getProfile } from "https://aipipe.org/aipipe.js";

  const { token, email } = getProfile();
  if (!token) window.location = `https://aipipe.org/login?redirect=${window.location.href}`;

  const response = await fetch("https://aipipe.org/openrouter/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "openai/gpt-4.1-nano",
      messages: [{ role: "user", content: "What is 2 + 2?" }]
    })
  }).then(r => r.json());
</script>
```

--------------------------------

### Making OpenAI Responses API Request via AI Pipe Proxy - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command sends a request to the OpenAI Responses API (`/openai/v1/responses`) via the AI Pipe proxy. It specifies the model and input, requiring `Authorization` and `Content-Type` headers for a JSON payload.

```bash
curl https://aipipe.org/openai/v1/responses -H "Authorization: $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4.1-nano", "input": "What is 2 + 2?" }'
```

--------------------------------

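The Responses API nests the generated text under `output[].content[].text` (see the example response further down). Here is a small sketch of pulling that text out of the parsed body returned by the Responses API calls above; the field names follow the example response and the exact shape may vary by model:

```JavaScript
// Illustrative: collect the assistant's text from a parsed Responses API result.
// `response` is the JSON body returned by /openai/v1/responses (or /openrouter/v1/responses).
const text = (response.output ?? [])
  .filter((item) => item.role === "assistant")
  .flatMap((item) => item.content ?? [])
  .map((part) => part.text ?? "")
  .join("");

console.log(text); // e.g. "2 + 2 equals 4."
```

--------------------------------
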
### Adding Secrets for Cloudflare Deployment - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

These `npx wrangler` commands securely add environment variables as secrets to the Cloudflare Workers environment. This ensures that sensitive API keys and configuration values are available during production deployment without being exposed in the codebase.

```bash
npx wrangler secret put AIPIPE_SECRET
npx wrangler secret put ADMIN_EMAILS
npx wrangler secret put OPENROUTER_API_KEY
npx wrangler secret put OPENAI_API_KEY
```

--------------------------------

### Creating OpenAI Embeddings via AI Pipe Proxy - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command demonstrates how to generate text embeddings using the OpenAI API through the AI Pipe proxy. It sends a JSON payload with the desired model and input text, requiring proper authentication and content type headers.

```bash
curl https://aipipe.org/openai/v1/embeddings -H "Authorization: $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-small", "input": "What is 2 + 2?" }'
```

--------------------------------

### Generating User JWT Token (Admin API) - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command generates a JWT token for a specified user's email. It requires administrator privileges and an `Authorization` header with a valid `$AIPIPE_TOKEN`.

```bash
curl "https://aipipe.org/admin/token?email=user@example.com" -H "Authorization: $AIPIPE_TOKEN"
```

--------------------------------

### OpenAI Responses API Response - JSON

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON object illustrates the typical response structure for an OpenAI Responses API request proxied through AI Pipe. It includes an `id`, the `object` type (`response`), the `model` used, and the `output` containing the assistant's reply.

```json
{
  "id": "resp_...",
  "object": "response",
  "model": "gpt-4.1-nano-2025-04-14",
  // ...
  "output": [
    {
      "role": "assistant",
      "content": [{ "text": "2 + 2 equals 4." }]
      // ...
    }
  ]
}
```

--------------------------------

### Retrieving All User Usage History (Admin API) - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command retrieves the historical usage data for all users. It requires administrator privileges and an `Authorization` header with a valid `$AIPIPE_TOKEN`.

```bash
curl https://aipipe.org/admin/usage -H "Authorization: $AIPIPE_TOKEN"
```

--------------------------------

### Overwriting User Cost Usage (Admin API) - Bash

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This `curl` command overwrites the cost usage for a specific user on a given date. It requires administrator privileges, an `Authorization` header, and a `Content-Type: application/json` header with a JSON body containing the user's email, date, and new cost.

```bash
curl https://aipipe.org/admin/cost -X POST -H "Authorization: $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com", "date": "2025-04-18", "cost": 1.23}'
```

--------------------------------

### OpenAI Models List API Response - JSON

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON object represents the expected response when listing OpenAI models through the AI Pipe proxy. It contains a list of model objects, each with an `id`, `object` type, `created` timestamp, and `owned_by` information.

```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o-audio-preview-2024-12-17",
      "object": "model",
      "created": 1734034239,
      "owned_by": "system"
    }
    // ...
  ]
}
```

--------------------------------

### OpenAI Embeddings API Response - JSON

Source: https://github.com/sanand0/aipipe/blob/main/README.md

This JSON object shows the expected response format for an OpenAI embeddings request processed by AI Pipe.
It includes the `object` type, `data` containing the embedding array, the `model` used, and `usage` statistics for tokens.

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        0.010576399,
        -0.037246477
        // ...
      ]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 8, "total_tokens": 8 }
}
```
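
--------------------------------

Embedding vectors returned in this format can be consumed directly from client-side JavaScript. Here is a minimal sketch that fetches embeddings through the `/openai/v1/embeddings` proxy and compares two texts by cosine similarity, assuming `token` comes from `getProfile()` as in the earlier examples; the `embed` and `cosine` helpers are illustrative, not part of AI Pipe.

```JavaScript
// Illustrative helpers: fetch an embedding via the AI Pipe proxy and compare two texts.
async function embed(token, input) {
  const res = await fetch("https://aipipe.org/openai/v1/embeddings", {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "text-embedding-3-small", input }),
  });
  const { data } = await res.json();
  return data[0].embedding; // array of floats, as in the example response above
}

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Usage: rank how close two phrases are in meaning (closer to 1 means more similar).
const [first, second] = await Promise.all([embed(token, "What is 2 + 2?"), embed(token, "2 plus 2")]);
console.log(cosine(first, second));
```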