### Local Development Setup with Uvicorn and Takk
Source: https://docs.takk.cloud/llms/support_ticket
Instructions for setting up a local development environment for the support-ticket system using Uvicorn and Takk. This involves initializing a project, adding dependencies, and starting the local services.
```bash
uv init my-support-project
uv add takk
takk up
```
--------------------------------
### Start Local Takk Environment
Source: https://docs.takk.cloud/llms/local
Command to start the local Takk development environment. This command builds and starts all services, creates local infrastructure, enables hot-reloading, and applies compute limits.
```bash
takk up
```
--------------------------------
### Run Takk Project Locally
Source: https://docs.takk.cloud/llms/get_started
Starts the Takk project locally, building a Docker image, starting a PostgreSQL database, and launching the FastAPI server. It provides the URL where the application is accessible.
```bash
uv run takk up
```
--------------------------------
### Initialize FastAPI Application and Database Models
Source: https://docs.takk.cloud/llms/support_ticket
Sets up the FastAPI application with a lifespan context manager for database initialization. It ensures all SQLModel metadata is created before the application starts and cleans up resources afterward. This is crucial for the application's first run or when database schema changes are applied.
```python
import logging
from contextlib import asynccontextmanager
from typing import Annotated
from fastapi.responses import RedirectResponse
from sqlmodel import SQLModel, select
from sqlmodel.ext.asyncio.session import AsyncSession
from fastapi import Depends, FastAPI, Form, Response
from starlette.responses import HTMLResponse
from src.models import Ticket, engine, session
from src.workers import background
from src.predict import PredictArgs, predict
logger = logging.getLogger(__name__)
@asynccontextmanager
async def lifespan(app: FastAPI):
    logging.basicConfig(level=logging.INFO)
    logger.info("Creating db models")
    async with engine().begin() as conn:
        await conn.run_sync(SQLModel.metadata.create_all)
    logger.info("Created all models")
    yield
app = FastAPI(lifespan=lifespan)
```
--------------------------------
### Install Takk CLI using uv
Source: https://docs.takk.cloud/llms/get_started
Installs the Takk command-line interface tool into your project using the 'uv' package manager. This is the primary tool for managing Takk projects.
```bash
uv add takk
```
--------------------------------
### GET /
Source: https://docs.takk.cloud/llms/support_ticket
Retrieves all existing support tickets and displays them in an HTML table, along with a form for submitting new tickets.
```APIDOC
## GET /
### Description
Fetches all tickets from the database and renders an HTML page containing a form for new ticket submission and a table of existing tickets.
### Method
GET
### Endpoint
/
### Parameters
None
### Request Example
None
### Response
#### Success Response (200)
- **HTMLResponse**: An HTML page displaying the ticket submission form and a table of tickets.
#### Response Example
```html
Submit Ticket
Submit a Support Ticket
...
```
```
--------------------------------
### Install Takk using uv (Bash)
Source: https://context7_llms
This bash command installs the Takk package using the `uv` package installer. `uv` is a fast Python package installer and resolver, often used as an alternative to pip.
```bash
uv add takk
```
--------------------------------
### Handle Pub/Sub Workers Locally
Source: https://docs.takk.cloud/llms/local
Demonstrates how Takk automatically starts workers for Pub/Sub subscribers in local mode. It shows how to define a subscriber and publish an event to trigger it.
```python
project = Project(
    ...
    verify_email=on_user_created.subscriber(send_verify_email)
)
```
```python
await on_user_created.publish(User(...))
```
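The two fragments above rely on Takk wiring the subscriber to published events. The mechanic can be sketched with plain asyncio and stand-in names (the `Topic` class below is hypothetical, not a Takk API):

```python
import asyncio

class Topic:
    """Hypothetical stand-in for a Takk event topic like on_user_created."""
    def __init__(self) -> None:
        self._handlers = []

    def subscriber(self, fn):
        # Register a handler; Takk would run this in a dedicated worker.
        self._handlers.append(fn)
        return fn

    async def publish(self, event) -> None:
        # Deliver the event to every registered subscriber.
        for fn in self._handlers:
            await fn(event)

on_user_created = Topic()
delivered = []

@on_user_created.subscriber
async def send_verify_email(user: dict) -> None:
    delivered.append(user["email"])

asyncio.run(on_user_created.publish({"email": "new@example.com"}))
print(delivered)  # ['new@example.com']
```

In local mode, Takk plays the role of the event loop here: it starts one worker per subscriber and delivers published events to them.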
--------------------------------
### Configuring Serverless PostgreSQL Instance
Source: https://docs.takk.cloud/llms/resources
Provides an example of configuring a `ServerlessPostgresInstance` with specific parameters such as version, CPU scaling limits, and a list of PostgreSQL extensions to be enabled. This allows for fine-grained control over the provisioned database.
```python
from takk.resources import ServerlessPostgresInstance
default=ServerlessPostgresInstance(
    version=16, # PostgreSQL version (only 16 supported)
    min_cpus=0, # Minimum CPU units (scales to zero by default)
    max_cpus=4, # Maximum CPU units
    extensions=["pgvector", "pg_cron"], # PostgreSQL extensions to enable
)
```
--------------------------------
### Display Tickets and Submit New Ticket Form (FastAPI GET Route)
Source: https://docs.takk.cloud/llms/support_ticket
Handles GET requests to the root URL ('/'). It fetches all existing tickets from the database using SQLModel and renders an HTML page. The page displays a table of tickets and includes a form for users to submit new support tickets via a POST request.
```python
@app.get("/")
async def index(
    session: Annotated[AsyncSession, Depends(session)]
) -> HTMLResponse:
    tickets = await session.exec(select(Ticket))
    table = ticket_table(list(tickets.all()))
    return HTMLResponse(
        content=f"""
        <html>
        <head><title>Submit Ticket</title></head>
        <body>
            <h1>Submit a Support Ticket</h1>
            <form method="post" action="/">
                <input type="email" name="email" placeholder="Email" required>
                <textarea name="message" placeholder="Message" required></textarea>
                <button type="submit">Submit</button>
            </form>
            {table}
        </body>
        </html>
        """
    )
```
--------------------------------
### Create Project Directory
Source: https://docs.takk.cloud/llms/get_started
Creates a new directory for your Takk project and navigates into it. This sets up the basic file structure for your application.
```bash
mkdir my-first-api
cd my-first-api
```
--------------------------------
### POST /
Source: https://docs.takk.cloud/llms/support_ticket
Submits a new support ticket with email and message, inserts it into the database, and queues it for category prediction.
```APIDOC
## POST /
### Description
Handles the submission of a new support ticket. It takes email and message from the form data, saves the ticket to the database, and initiates a background job to predict its category.
### Method
POST
### Endpoint
/
### Parameters
#### Query Parameters
None
#### Request Body
- **email** (str) - Required - The email address of the submitter.
- **message** (str) - Required - The content of the support ticket message.
### Request Example
```
email=user@example.com
message=My internet is down.
```
### Response
#### Success Response (301)
- **RedirectResponse**: Redirects the user back to the index page after successful submission.
#### Response Example
Redirects to /
```
--------------------------------
### Check Takk CLI Version
Source: https://docs.takk.cloud/llms/get_started
Verifies the installation of the Takk CLI by displaying its current version. This command ensures that Takk is correctly installed and accessible in your environment.
```bash
uv run takk --version
```
--------------------------------
### Project Configuration with Takk
Source: https://docs.takk.cloud/llms/support_ticket
Defines the Takk project structure, including FastAPI application, shared secrets, and background workers. It specifies the main application entry point and necessary configurations for database and queueing infrastructure.
```python
from takk import Project, FastAPIApp
from takk.secrets import SqsConfig
from src.settings import Settings
from src.workers import background
from src import app
project = Project(
    name="support",
    shared_secrets=[Settings, SqsConfig],
    workers=[background],
    app=FastAPIApp(app),
)
```
--------------------------------
### POST /train
Source: https://docs.takk.cloud/llms/support_ticket
Manually triggers the retraining of the prediction model.
```APIDOC
## POST /train
### Description
Provides a manual endpoint to trigger the retraining of the machine learning model. This operation is performed asynchronously in the background.
### Method
POST
### Endpoint
/train
### Parameters
None
### Request Example
None
### Response
#### Success Response (200)
- **None**: Indicates the training job has been queued successfully.
#### Response Example
None
```
--------------------------------
### Customizing PostgreSQL Resource Configuration
Source: https://docs.takk.cloud/llms/resources
Shows how to override default resource provisioning by declaring a resource field directly in the `Project`. This example customizes a `ServerlessPostgresInstance` to include the `pgvector` extension.
```python
from takk import Project, Job
from takk.resources import ServerlessPostgresInstance
from my_app.jobs import update_consumption, UpdateConsumptionArgs
project = Project(
    name="my-api",
    shared_secrets=[AppSettings],
    load_consumption=Job(
        main_function=update_consumption,
        arguments=UpdateConsumptionArgs(),
        # cron_schedule="0 * * * *", # Every hour
    ),
    # Override the default serverless PostgreSQL to add extensions.
    # Without this line, Takk provisions ServerlessPostgresInstance with default settings.
    default=ServerlessPostgresInstance(extensions=["pgvector"]),
)
```
--------------------------------
### Deploy Project to Default Environment
Source: https://docs.takk.cloud/llms/deploy
This command initiates the build, upload, and remote start of your Takk project. By default, deployments are created in the 'test' environment, which is isolated and runs on managed infrastructure.
```bash
takk deploy
```
--------------------------------
### Takk Project Definition
Source: https://docs.takk.cloud/llms/get_started
Defines a Takk project configuration, specifying the application name and the type of server to use. This file tells Takk how to run your FastAPI application.
```python
from takk.models import Project, FastAPIApp
project = Project(
    name="my-first-api",
    server=FastAPIApp(),
)
```
--------------------------------
### Define Pydantic Settings for Database URL
Source: https://docs.takk.cloud/llms/support_ticket
Defines a `Settings` class using Pydantic's `BaseSettings` to load configuration from environment variables. It specifically includes a `psql_url` field typed as `PostgresDsn` to ensure the database connection string is a valid PostgreSQL Data Source Name.
```python
from pydantic import PostgresDsn
from pydantic_settings import BaseSettings
class Settings(BaseSettings):
    psql_url: PostgresDsn
```
--------------------------------
### Trigger Model Retraining (FastAPI POST Route)
Source: https://docs.takk.cloud/llms/support_ticket
Provides a POST endpoint at '/train' to manually trigger the retraining of the prediction model. When accessed, it queues a background job to execute the `train` function from the `src.train` module.
```python
@app.post("/train")
async def train_model() -> None:
    from src.train import train, TrainArgs
    await background.queue(train, TrainArgs())
```
--------------------------------
### Define SQLModel Database Engine and Session
Source: https://docs.takk.cloud/llms/support_ticket
Configures the asynchronous database engine using SQLModel and SQLAlchemy's `create_async_engine`. It utilizes a cached function to ensure a single engine instance. A dependency function `session` is provided for FastAPI to manage asynchronous database sessions, ensuring proper context management and transaction handling.
```python
from uuid import uuid4, UUID
from sqlmodel import SQLModel, Field
from src.settings import Settings
from functools import lru_cache
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine
from sqlmodel.ext.asyncio.session import AsyncSession
@lru_cache
def engine() -> AsyncEngine:
    settings = Settings()  # type: ignore
    return create_async_engine(settings.psql_url.encoded_string())
async def session():
    async with AsyncSession(engine(), expire_on_commit=False) as session:
        yield session
```
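The `@lru_cache` on a zero-argument function is what makes `engine()` a process-wide singleton: the first call creates the engine, and every later call returns the cached instance. The property in isolation, with a hypothetical stand-in for `create_async_engine`:

```python
from functools import lru_cache

calls = 0

@lru_cache
def engine() -> object:
    # Stand-in for create_async_engine(...); this body runs only on the first call.
    global calls
    calls += 1
    return object()

first = engine()
second = engine()
assert first is second  # same cached object every time
assert calls == 1       # the factory ran exactly once
```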
--------------------------------
### Train Text Classifier Model with Scikit-learn
Source: https://docs.takk.cloud/llms/support_ticket
Builds a text classifier using scikit-learn's Multinomial Naive Bayes and TF-IDF vectorization. It trains on sample texts, saves the trained pipeline to a pickle file, and requires pydantic and scikit-learn.
```python
import pickle
import logging
from pathlib import Path
from pydantic import BaseModel, Field
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
logger = logging.getLogger(__name__)
class CategoryTypes:
    technical = "technical"
    billing = "billing"
    general = "general"
class TrainArgs(BaseModel):
    model_file: str = Field(default="classifier.pkl")
async def train(args: TrainArgs) -> None:
    texts = [
        ("My payment failed", CategoryTypes.billing),
        ("The website is down", CategoryTypes.technical),
        ("I can't log in to my account", CategoryTypes.technical),
        ("How do I update my billing info?", CategoryTypes.billing),
        ("What are your business hours?", CategoryTypes.general),
        ("How do I reset my password?", CategoryTypes.technical),
    ]
    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("model", MultinomialNB())
    ])
    logger.info("Running train")
    pipeline.fit(
        [text for text, _ in texts],
        [label for _, label in texts],
    )
    Path(args.model_file).write_bytes(pickle.dumps(pipeline))
```
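`train` persists the fitted pipeline with `pickle.dumps`, and the predict worker later restores it with `pickle.loads`. The same roundtrip in isolation, with a plain dict standing in for the scikit-learn `Pipeline`:

```python
import pickle
import tempfile
from pathlib import Path

payload = {"vectorizer": "tfidf", "model": "naive-bayes"}  # stand-in for the Pipeline
with tempfile.TemporaryDirectory() as tmp:
    model_file = Path(tmp) / "classifier.pkl"
    # What train() does after fitting:
    model_file.write_bytes(pickle.dumps(payload))
    # What predict() does before classifying:
    restored = pickle.loads(model_file.read_bytes())
assert restored == payload
```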
--------------------------------
### Render Ticket Table HTML Helper
Source: https://docs.takk.cloud/llms/support_ticket
A Python function that takes a list of Ticket objects and generates an HTML table string to display them. It includes a nested helper function to format each row of the table, displaying ticket ID, email, message, and predicted category.
```python
def ticket_table(tickets: list[Ticket]) -> str:
    def ticket_row(ticket: Ticket) -> str:
        return f"""<tr>
            <td>{ticket.id}</td>
            <td>{ticket.email}</td>
            <td>{ticket.message}</td>
            <td>{ticket.category}</td>
        </tr>"""
    rows = "\n".join([ticket_row(t) for t in tickets])
    return f"""<table>
        <tr>
            <th>ID</th>
            <th>Email</th>
            <th>Message</th>
            <th>Category</th>
        </tr>
        {rows}
    </table>"""
```
--------------------------------
### Define Ticket SQLModel
Source: https://docs.takk.cloud/llms/support_ticket
Defines the 'Ticket' data model using SQLModel, which maps to a database table. It includes fields for a unique UUID primary key, email, message, and an optional category. The `table=True` argument indicates that this model should be created as a database table.
```python
class Ticket(SQLModel, table=True):
    id: UUID = Field(default_factory=uuid4, primary_key=True)
    email: str
    message: str
    category: str | None = Field(default=None)
```
--------------------------------
### Define MLOps Stack with MLflow, Streamlit, S3, and Worker
Source: https://docs.takk.cloud/llms/project_definition
This example shows a more complex Takk Project for an MLOps stack. It includes an MLflow tracking server, a Streamlit dashboard, S3-backed storage, and a background worker for model training, all configured within a single Python object.
```python
from takk.models import Project, MlflowServer, StreamlitApp, Worker, Compute
from takk.secrets import MlflowConfig, S3StorageConfig
from src.pokemon_app import main
from src.pokemon import load_all_data, train_model
# Define a background worker for async tasks like model training
background = Worker("background")
project = Project(
    name="mlops-example",
    shared_secrets=[S3StorageConfig], # Available to all components
    workers=[background], # Background processes that run continuously
    mlflow_server=MlflowServer(
        domain_names=[
            "example.aligned.codes" # Auto-configured with HTTPS
        ]
    ),
    is_legendary_app=StreamlitApp(
        main, # The entrypoint function for your Streamlit app
        secrets=[MlflowConfig], # Only this app gets access to MLflow credentials
        compute=Compute(
            mb_memory_limit=4 * 1024 # Ensure 4GB RAM for data-heavy workloads
        )
    )
)
```
--------------------------------
### Handle New Ticket Submission (FastAPI POST Route)
Source: https://docs.takk.cloud/llms/support_ticket
Handles POST requests to the root URL ('/'), which are triggered by the ticket submission form. It takes the email and message from the form data, creates a new Ticket entry in the database, and then queues a background job to predict the ticket's category using the `predict` function. Finally, it redirects the user back to the index page.
```python
@app.post("/")
async def create(
    email: Annotated[str, Form()],
    message: Annotated[str, Form()],
    session: Annotated[AsyncSession, Depends(session)]
) -> Response:
    model = Ticket(email=email, message=message)
    session.add(model)
    await session.commit()
    await session.refresh(model)
    await background.queue(predict, PredictArgs(ticket_id=model.id))
    return RedirectResponse(url="/", status_code=301)
```
--------------------------------
### Stop Local Takk Environment
Source: https://docs.takk.cloud/llms/local
Command to stop and remove local Takk containers. Volumes are kept intact unless otherwise configured.
```bash
takk down
```
--------------------------------
### Basic FastAPI Application
Source: https://docs.takk.cloud/llms/get_started
Defines a simple FastAPI application with two endpoints: a root endpoint returning a greeting and a personalized greeting endpoint. This code forms the core of your Takk project's API.
```python
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
def hello():
    return {"message": "Hello from Takk!"}
@app.get("/greet/{name}")
def greet(name: str):
    return {"message": f"Welcome, {name}!"}
```
--------------------------------
### Background Worker Definition with Takk
Source: https://docs.takk.cloud/llms/support_ticket
Defines a background worker named 'background' using Takk. This worker is responsible for handling asynchronous tasks such as training NLP models and predicting category labels.
```python
from takk import Worker
background = Worker("background")
```
--------------------------------
### Deploy FastAPI Server in Takk Project (Python)
Source: https://docs.takk.cloud/llms/network_apps
This example demonstrates deploying a FastAPI application within a Takk Project. It utilizes the FastAPIApp model, allowing for secret integration and custom domain configuration. Dependencies include takk.models and takk.secrets.
```python
from takk.models import Project, FastAPIApp, Worker
from takk.secrets import SlackWebhook
project = Project(
    name="fastapi-project",
    server=FastAPIApp(
        app,
        secrets=[SlackWebhook],
        domain_names=[
            "cloud.aligned.codes",
            "www.cloud.aligned.codes",
        ]
    ),
)
```
--------------------------------
### Prediction Worker for Ticket Categorization
Source: https://docs.takk.cloud/llms/support_ticket
Loads a trained scikit-learn model to classify ticket messages and updates the category in the database. It automatically triggers training if the model file is not found. Requires pydantic, sqlmodel, and the trained model.
```python
import pickle
from pathlib import Path
from contextlib import asynccontextmanager
import logging
from uuid import UUID
from pydantic import BaseModel, Field
from sqlmodel import select
from src.models import Ticket, session
from src.train import train, TrainArgs
logger = logging.getLogger(__name__)
class PredictArgs(BaseModel):
    ticket_id: UUID
    model_file: str = Field(default="classifier.pkl")
async def predict(args: PredictArgs) -> None:
    model_path = Path(args.model_file)
    if not model_path.is_file():
        await train(TrainArgs(model_file=args.model_file))
    model = pickle.loads(model_path.read_bytes())
    async with asynccontextmanager(session)() as sess:
        res = await sess.exec(
            select(Ticket).where(Ticket.id == args.ticket_id)
        )
        ticket = res.first()
        assert ticket
        logger.info(f"Running predict for {ticket}")
        label = model.predict([ticket.message])[0]
        ticket.category = label
        logger.info(f"Predicted '{label}' as the category")
        sess.add(ticket)
        await sess.commit()
```
--------------------------------
### Stop Takk Project Locally
Source: https://docs.takk.cloud/llms/get_started
Shuts down the Docker containers associated with the Takk project, preserving any created data. This command stops the local development environment.
```bash
uv run takk down
```
--------------------------------
### Define Takk Project Structure
Source: https://docs.takk.cloud/llms/local
Defines the structure of a Takk project, including workers, MLflow server, and Streamlit applications. It specifies compute resources and secrets required for different components.
```python
from takk.models import Compute, Project, MlflowServer, Job, StreamlitApp
from takk.secrets import MlflowConfig, S3StorageConfig
from takk.models import Worker
from src.pokemon_app import main
from src.pokemon import LoadData, TrainConfig, load_all_data, train_model
background = Worker("background")
project = Project(
    name="mlops-example",
    shared_secrets=[S3StorageConfig],
    workers=[background],
    mlflow_server=MlflowServer(
        domain_names=[
            "example.aligned.codes"
        ]
    ),
    is_legendary_app=StreamlitApp(
        main,
        secrets=[MlflowConfig],
        compute=Compute(
            mb_memory_limit=4 * 1024
        )
    )
)
```
--------------------------------
### Configure Application Port for NetworkApp
Source: https://docs.takk.cloud/llms/troubleshooting
Specifies the correct network port for your application within a `NetworkApp` definition. This is crucial for ensuring that deployed applications can be accessed externally. The port must match the port your application server binds to.
```python
custom_app=NetworkApp(
    command=["python", "server.py"],
    port=8000, # Must match the port your server binds to
)
```
--------------------------------
### Deploy MLFlow Server in Takk Project (Python)
Source: https://docs.takk.cloud/llms/network_apps
This example configures an MLFlow tracking server within a Takk Project using the MlflowServer model. It supports custom domain names and integrates with other Takk app types like StreamlitApp. Dependencies include takk.models and takk.secrets.
```python
from takk.models import Project, MlflowServer, Job, StreamlitApp, Worker
from takk.secrets import MlflowConfig
from src.pokemon_app import main
from src.pokemon import LoadData, TrainConfig, load_all_data, train_model
project = Project(
    name="mlops-example",
    mlflow_server=MlflowServer(
        domain_names=[
            "example.aligned.codes"
        ]
    ),
    is_legendary_app=StreamlitApp(
        main,
        secrets=[MlflowConfig],
    )
)
```
--------------------------------
### Increase Job Execution Timeout
Source: https://docs.takk.cloud/llms/troubleshooting
Allows long-running tasks to complete by extending the maximum execution time for jobs. The default timeout is typically 15 minutes. This setting is specified in seconds.
```python
long_task=Job(
    process_large_dataset,
    timeout=3600, # Allow 1 hour (in seconds)
)
```
--------------------------------
### Combine Resource Configuration and Service URLs
Source: https://docs.takk.cloud/llms/secrets
An example demonstrating the combination of dedicated types, named resources using ResourceRef, and service URLs using ServiceUrl in a Pydantic settings class. This provides a comprehensive configuration for Takk Cloud LLMs. It imports various types from pydantic, pydantic_settings, and takk.secrets.
```python
from typing import Annotated
from pydantic import AnyUrl, NatsDsn, PostgresDsn
from pydantic_settings import BaseSettings
from takk.models import ResourceTags
from takk.secrets import ResourceRef, ServiceUrl, NatsCredsFile
class SharedSettings(BaseSettings):
    # Service URLs
    app_url: Annotated[AnyUrl, ServiceUrl("my_app", "external")]
    # Default NATS resource
    nats_url: NatsDsn
    nats_creds: NatsCredsFile
    # A second NATS cluster
    second_nats_url: Annotated[str, ResourceRef(ResourceTags.nats_dsn, "other_nats")]
    second_nats_creds: Annotated[str, ResourceRef(ResourceTags.nats_creds_file, "other_nats")]
```
--------------------------------
### Increase Compute Resources for FastAPIApp
Source: https://docs.takk.cloud/llms/troubleshooting
When services are slow or unresponsive, insufficient compute resources might be the cause. This snippet shows how to increase the CPU and memory limits for a FastAPIApp by modifying the `compute` attribute in `project.py`. Ensure you redeploy after making these changes.
```python
server=FastAPIApp(
    app,
    compute=Compute(
        mvcpu_limit=2000, # Increase from 1000
        mb_memory_limit=2048, # Increase from 1024
    )
)
```
--------------------------------
### Annotate Secrets with Resource Tags in Pydantic
Source: https://docs.takk.cloud/llms/secrets
This example shows how to use Pydantic's `Annotated` type with Takk's `ResourceTags` enum to specify the type of infrastructure a secret represents. This allows Takk to intelligently provision and manage resources like S3 keys or NATS credentials based on these annotations.
```python
from typing import Annotated
from takk.models import ResourceTags
from pydantic_settings import BaseSettings
class MyConfig(BaseSettings):
    s3_key: Annotated[str, ResourceTags.s3_secret_key]
    nats_creds: Annotated[str, ResourceTags.nats_creds_file]
```
--------------------------------
### Declare Secrets for Application Components
Source: https://docs.takk.cloud/llms/troubleshooting
Ensures secrets are available to your application components by explicitly listing them in the project definition. Secrets must be declared in `shared_secrets` for all components or in the component's `secrets` parameter for specific access. This prevents `None` values or validation errors when accessing secrets.
```python
project = Project(
    name="my-project",
    shared_secrets=[SharedSecrets], # Available to all components
    server=FastAPIApp(
        app,
        secrets=[StripeConfig], # Only available to this app
    )
)
```
--------------------------------
### Scale Takk Worker Instances
Source: https://docs.takk.cloud/llms/troubleshooting
If a worker queue is growing, it indicates that workers cannot process messages fast enough. This Python code demonstrates how to increase the `max_scale` parameter for a Takk worker, allowing more instances to run in parallel and handle increased load. This helps in distributing the workload and preventing backlog.
```python
priority_queue = Worker(
    "priority_queue",
    max_scale=5, # Allow up to 5 worker instances
)
project = Project(
    workers=[priority_queue],
)
```
--------------------------------
### Deploy Takk Project Locally or to Cloud (Bash)
Source: https://context7_llms
These bash commands demonstrate how to deploy a Takk project. `takk up` is used for local development with automatic hot reloading, typically for services like uvicorn. `takk deploy --env qa` is used for deploying the project to a cloud environment, specifying the target environment as 'qa'.
```bash
uv run takk up # Local development with automatic hot reloading for uvicorn
```
```bash
uv run takk deploy --env qa # Cloud deployment
```
--------------------------------
### Automatic PostgreSQL Provisioning with Pydantic
Source: https://docs.takk.cloud/llms/resources
Demonstrates how Takk automatically provisions a serverless PostgreSQL instance when a field is declared with the `PostgresDsn` type hint in a Pydantic `BaseSettings` class. No explicit configuration is needed for basic provisioning.
```python
from pydantic import PostgresDsn
from pydantic_settings import BaseSettings
class AppSettings(BaseSettings):
    psql_uri: PostgresDsn # Automatically provisions a serverless PostgreSQL cluster
```
--------------------------------
### Provision MongoDB Instance with Takk
Source: https://docs.takk.cloud/llms/resources
Sets up a MongoDB instance, automatically provisioned when `MongoDsn` is detected. Allows configuration of MongoDB version, number of nodes (for replica sets), and resource allocation.
```python
from takk.resources import MongoDBInstance
documents=MongoDBInstance(
    version="7.0", # MongoDB version
    number_of_nodes=1, # 1 (standalone) or 3 (replica set)
    min_vcpus=0,
    min_gb_ram=16,
)
```
--------------------------------
### Define Takk Project with Network App and Scheduled Job (Python)
Source: https://context7_llms
This Python code defines a Takk `Project` including application settings, a network application (FastAPI via uvicorn), a scheduled background job, and a serverless PostgreSQL instance with extensions. It uses Pydantic for settings management and Takk's abstractions for defining application components and infrastructure. Resources like PostgreSQL are provisioned automatically if not found in environment variables.
```python
from pydantic import PostgresDsn, SecretStr
from pydantic_settings import BaseSettings
from takk import Project, NetworkApp, Job, prod_value
from takk.secrets import AiToken, AiBaseAPI, AiBaseUrl
from takk.resources import ServerlessPostgresInstance
from app.jobs import materialize_job, MaterializeArgs
class AppSettings(BaseSettings):
    # Declaring PostgresDsn automatically provisions a serverless PostgreSQL cluster
    # if no value is provided in .env or environment variables
    psql_uri: PostgresDsn
    # A single connection serves all AI model types: Chat, Vision, Embedding, AudioTranscriber.
    # AiBaseAPI is for OpenAI-compatible endpoints (base URL includes /v1).
    # AiBaseUrl is for Anthropic-compatible endpoints (base URL only, no /v1).
    # AiToken is the API key for either provider.
    ai_api: AiBaseAPI
    ai_token: AiToken
project = Project(
    name="my-api",
    shared_secrets=[AppSettings],
    # A uvicorn app named `my_app` exposing port 8000
    my_app=NetworkApp(
        command=["/bin/bash", "-c", "uvicorn src.app:app --host 0.0.0.0 --port 8000"],
        port=8000,
        docker_image=None # None means using the managed Python image
    ),
    # A scheduled job named `background_job`
    background_job=Job(
        materialize_job,
        cron_schedule=prod_value("0 3 * * *", otherwise=None), # Runs daily at 3 AM in prod
        arguments=MaterializeArgs(),
    ),
    # Override the default serverless PostgreSQL to enable extensions.
    # Without this, Takk provisions a ServerlessPostgresInstance with default settings.
    default=ServerlessPostgresInstance(extensions=["pgvector"]),
)
```
--------------------------------
### Defining Multiple PostgreSQL Resources
Source: https://docs.takk.cloud/llms/resources
Illustrates how to define multiple, independent PostgreSQL resources within a Takk `Project` by using distinct names for each resource. This allows for different configurations, such as enabling specific extensions for different databases.
```python
from takk import Project
from takk.resources import ServerlessPostgresInstance
project = Project(
    name="my-api",
    shared_secrets=[AppSettings],
    # Primary database with vector search support
    default=ServerlessPostgresInstance(extensions=["pgvector"]),
    # Secondary database with scheduled jobs support
    other_psql=ServerlessPostgresInstance(extensions=["pg_cron"]),
)
```
--------------------------------
### Configure AI/LLM Settings with Takk (Multiple Providers)
Source: https://docs.takk.cloud/llms/resources
Illustrates how to configure settings for multiple AI/LLM providers using `ResourceRef` for distinct naming. This allows specifying different endpoints and tokens for different services, such as a default provider and a separate embedding service.
```python
from typing import Annotated
from pydantic import AnyUrl, SecretStr
from pydantic_settings import BaseSettings
from takk.secrets import AiToken, AiBaseAPI, ResourceRef, ResourceTags
class AISettings(BaseSettings):
    # Default provider
    ai_api: AiBaseAPI
    ai_token: AiToken
    # Second provider (only needed when connecting to multiple LLM services)
    embed_api: Annotated[AnyUrl, ResourceRef(ResourceTags.llm_base_api, name="embed")]
    embed_token: Annotated[SecretStr, ResourceRef(ResourceTags.llm_token, name="embed")]
```
--------------------------------
### Deploy Project to a Specific Environment
Source: https://docs.takk.cloud/llms/deploy
This command deploys your Takk project to a specified environment using the `--env` flag. You can choose environments like 'prod' or any other custom environment, each with its own dedicated infrastructure, secrets, and services.
```bash
takk deploy --env prod
```
--------------------------------
### Define a Basic Data Loading Job in Python
Source: https://docs.takk.cloud/llms/jobs
This Python snippet demonstrates how to define a basic Takk Job for loading data. It specifies the function to execute (`load_all_data`), its arguments (`LoadData()`), and accessible secrets (`S3StorageConfig`).
```python
from takk.models import Project, Job
from takk.secrets import S3StorageConfig
project = Project(
    name="mlops-example",
    shared_secrets=[S3StorageConfig],
    load_pokemon_data=Job(
        load_all_data,
        arguments=LoadData()
    ),
)
```
--------------------------------
### Referencing Specific PostgreSQL Resources
Source: https://docs.takk.cloud/llms/resources
Demonstrates how to use `ResourceRef` with `Annotated` types to explicitly link settings fields to specific PostgreSQL resources when multiple resources of the same type are defined. This ensures that the correct database connection is used for each setting.
```python
from typing import Annotated
from pydantic import PostgresDsn
from pydantic_settings import BaseSettings
from takk.secrets import ResourceRef, ResourceTags
class AppSettings(BaseSettings):
    # Resolves to the "default" PostgreSQL resource
    psql_uri: PostgresDsn
    # Resolves to the "other_psql" PostgreSQL resource
    analytics_uri: Annotated[str, ResourceRef(ResourceTags.psql_dsn, name="other_psql")]
```
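Markers like `ResourceRef` are ordinary `typing.Annotated` metadata, so they can be read back at runtime. A minimal stdlib sketch of that lookup, using a hypothetical stand-in class (not Takk's actual `ResourceRef` implementation):

```python
from dataclasses import dataclass
from typing import Annotated, get_type_hints

@dataclass(frozen=True)
class ResourceRef:
    """Illustrative stand-in; not the real takk.secrets.ResourceRef."""
    tag: str
    name: str = "default"

class AppSettings:
    psql_uri: str  # no marker: would resolve to the "default" resource
    analytics_uri: Annotated[str, ResourceRef("psql_dsn", name="other_psql")]

# include_extras=True preserves the Annotated metadata on each field
hints = get_type_hints(AppSettings, include_extras=True)
refs = {
    field: next(
        (m for m in getattr(hint, "__metadata__", ()) if isinstance(m, ResourceRef)),
        None,
    )
    for field, hint in hints.items()
}
print(refs["analytics_uri"].name)  # → other_psql
```

Fields without metadata yield `None` here, which mirrors the documented behaviour of falling back to the "default" resource.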
--------------------------------
### Define FastAPI App with Secrets and Domains
Source: https://docs.takk.cloud/llms/project_definition
This snippet demonstrates how to define a minimal Takk Project for a FastAPI application. It configures the server with a FastAPIApp, includes Slack notifications via secrets, and sets up domain names with automatic TLS.
```python
from takk.models import Project, FastAPIApp
from takk.secrets import SlackWebhook
project = Project(
    name="fastapi-project",
    server=FastAPIApp(
        app,  # Your FastAPI app instance
        secrets=[SlackWebhook],
        domain_names=[
            "cloud.aligned.codes",
            "www.cloud.aligned.codes",
        ],
    ),
)
```
--------------------------------
### Provision Redis Instance with Takk
Source: https://docs.takk.cloud/llms/resources
Configures a Redis instance for caching and message queuing. The instance is provisioned automatically when `RedisDsn` is detected in the settings. Supports specifying Redis version, number of nodes, and resource allocation.
```python
from takk.resources import RedisInstance
cache=RedisInstance(
    version="7.2.11",   # Redis version
    number_of_nodes=1,  # Number of nodes
    min_vcpus=0,
    min_gb_ram=1,
)
```
--------------------------------
### Configure Multiple NATS Resources with ResourceRef
Source: https://docs.takk.cloud/llms/secrets
Demonstrates how to configure multiple instances of the same resource type, such as NATS clusters, by using ResourceRef with distinct names so that Takk can manage each instance separately. It requires importing ResourceTags from takk.models, ResourceRef and NatsCredsFile from takk.secrets, and NatsDsn from pydantic.
```python
from typing import Annotated
from takk.models import ResourceTags
from takk.secrets import ResourceRef, NatsCredsFile
from pydantic import NatsDsn
from pydantic_settings import BaseSettings
class MultiResourceConfig(BaseSettings):
    # Default NATS cluster (equivalent to ResourceRef(ResourceTags.nats_dsn, "default"))
    nats_url: NatsDsn
    nats_creds: NatsCredsFile
    # A second NATS cluster named "other_nats"
    second_nats_url: Annotated[str, ResourceRef(ResourceTags.nats_dsn, "other_nats")]
    second_nats_creds: Annotated[str, ResourceRef(ResourceTags.nats_creds_file, "other_nats")]
```
--------------------------------
### Configure Compute Resources for a Job in Python
Source: https://docs.takk.cloud/llms/jobs
This Python snippet illustrates how to define the compute resources (CPU and memory) for a Takk Job execution environment. It's useful for managing resource allocation for demanding tasks like model training.
```python
compute=Compute(
    mvcpu_limit=1000,
    mb_memory_limit=2048,
)
```
--------------------------------
### Define InfraConfig with Takk Shorthand Types in Python
Source: https://docs.takk.cloud/llms/secrets
This Python code snippet demonstrates how to define a `BaseSettings` class named `InfraConfig` using Takk's shorthand types for various cloud resources like PostgreSQL, NATS, S3, and Loki. It leverages Pydantic for data validation and settings management.
```python
from takk.secrets import S3SecretKey, LokiToken, NatsCredsFile
from pydantic import PostgresDsn, NatsDsn
from pydantic_settings import BaseSettings
class InfraConfig(BaseSettings):
    psql_url: PostgresDsn
    nats_url: NatsDsn
    nats_creds: NatsCredsFile
    s3_key: S3SecretKey
    loki_token: LokiToken
```
--------------------------------
### Use Secrets in Takk Jobs with Python
Source: https://docs.takk.cloud/llms/jobs
This Python snippet shows how to securely provide credentials or configuration values to a Takk Job by listing secret settings classes. These secrets are injected into the job's runtime environment.
```python
secrets=[MlflowConfig]
```
--------------------------------
### Publish Event to PubSub Topic
Source: https://docs.takk.cloud/llms/pubsub
Publishes an event instance to a defined PubSub topic. This action triggers all registered subscribers for that topic to execute asynchronously. The publishing function does not wait for subscribers to complete.
```python
await on_user_created.publish(
    CreatedUser(
        id=1,
        name="Ash Ketchum",
        email="ash@example.com",
    )
)
```
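The fire-and-forget semantics described above can be sketched with plain `asyncio`. This is an illustrative in-process stand-in, not Takk's NATS-backed PubSub: `publish` schedules each subscriber as a task and returns without awaiting them.

```python
import asyncio

class InProcessPubSub:
    """Illustrative stand-in for takk.pubsub.PubSub (which is NATS-backed)."""
    def __init__(self) -> None:
        self._subscribers = []

    def subscriber(self, fn):
        self._subscribers.append(fn)
        return fn

    async def publish(self, event) -> None:
        # Schedule every subscriber as a task; return without awaiting them.
        for fn in self._subscribers:
            asyncio.create_task(fn(event))

received: list[str] = []

async def send_verify_email(user: dict) -> None:
    received.append(user["email"])

on_user_created = InProcessPubSub()
on_user_created.subscriber(send_verify_email)

async def main() -> None:
    await on_user_created.publish({"email": "ash@example.com"})
    assert received == []   # publish returned before the subscriber ran
    await asyncio.sleep(0)  # yield to the event loop so the task executes
    assert received == ["ash@example.com"]

asyncio.run(main())
```

The key property matches the documentation: the publisher observes an empty `received` list immediately after `publish` returns, and the subscriber runs only once control yields back to the event loop.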
--------------------------------
### Register Subscribers with Project Configuration
Source: https://docs.takk.cloud/llms/pubsub
Registers subscriber functions with a Takk Project, optionally configuring secrets or compute resources required by the subscribers. This step links the event handling logic to the project's overall structure.
```python
from takk.models import Project, Compute
from pydantic import SecretStr
from pydantic_settings import BaseSettings

class SendGridConfig(BaseSettings):
    api_token: SecretStr

project = Project(
    name="my-project",
    verify_email=on_user_created.subscriber(
        send_verify_email,
        secrets=[SendGridConfig]
    ),
    predict_cluster=on_user_created.subscriber(
        predict_cluster,
        compute=Compute(
            mvcpu_limit=1000,
            mb_memory_limit=1024
        )
    )
)
```
--------------------------------
### NetworkApp Class Schema (Python)
Source: https://docs.takk.cloud/llms/network_apps
This snippet presents the Python dataclass schema for the NetworkApp, outlining all available configuration parameters. These include command, port, description, environments, secrets, contacts, health check, compute requirements, scaling, HTTPS settings, domain names, and Docker image.
```python
@dataclass
class NetworkApp:
    command: list[str]
    port: int
    description: str | None = None
    environments: dict[str, str] | None = None
    secrets: list[type[BaseSettings]] | None = None
    contacts: list[Contact] | Contact | None = None
    health_check: str | None = None
    compute: Compute = field(default_factory=Compute)
    min_scale: int = 0
    max_scale: int = 1
    https_only: bool = True
    domain_names: list[str] | str | None = None
    tags: list[str] | None = None
    docker_image: str | None = None
```
--------------------------------
### Configure AI/LLM Settings with Takk (Single Provider)
Source: https://docs.takk.cloud/llms/resources
Defines settings for connecting to a single AI/LLM provider using `AiBaseAPI` for OpenAI-compatible endpoints and `AiToken` for authentication. This configuration supports Chat, Vision, Embedding, and AudioTranscriber models.
```python
from pydantic_settings import BaseSettings
from takk.secrets import AiToken, AiBaseAPI
class AISettings(BaseSettings):
    # One connection serves all model types: Chat, Vision, Embedding, AudioTranscriber
    ai_api: AiBaseAPI  # OpenAI-compatible base URL (includes /v1)
    ai_token: AiToken
```
--------------------------------
### Provision Dedicated PostgreSQL Instance with Takk
Source: https://docs.takk.cloud/llms/resources
Defines a dedicated PostgreSQL instance with specified version, compute, memory, and availability settings. Supports automated backups and high availability configurations.
```python
from takk.resources import PostgresInstance
primary_db=PostgresInstance(
    version=17,                # PostgreSQL version: 14, 15, 16, or 17
    min_vcpus=2,               # Minimum vCPUs
    min_gb_ram=4,              # Minimum RAM in GB
    number_of_nodes=2,         # 1 (standalone) or 2 (high availability)
    k_iops=15,                 # IOPS tier: 5 or 15
    is_backup_disabled=False,  # False keeps automated backups enabled
)
```
--------------------------------
### Reference Service URLs with ServiceUrl
Source: https://docs.takk.cloud/llms/secrets
Shows how to reference the URLs of other services within your project using the ServiceUrl annotation. This is useful for inter-service communication or exposing public URLs. It requires importing Annotated, AnyUrl from pydantic, BaseSettings from pydantic_settings, and ServiceUrl from takk.secrets.
```python
from typing import Annotated
from pydantic import AnyUrl
from pydantic_settings import BaseSettings
from takk.secrets import ServiceUrl
class ServiceUrls(BaseSettings):
    # The public URL for "my_app"
    app_url: Annotated[AnyUrl, ServiceUrl("my_app", "external")]
    # The internal (cluster-local) URL for "my_app"
    internal_app_url: Annotated[AnyUrl, ServiceUrl("my_app", "internal")]
```
--------------------------------
### Define PubSub Topic with Subject
Source: https://docs.takk.cloud/llms/pubsub
Creates a PubSub topic instance, associating it with a specific event model and a subject name. The subject acts as the channel for event distribution, ensuring all subscribers listening to this subject receive published events.
```python
from takk.pubsub import PubSub
on_user_created = PubSub(CreatedUser, subject="on_user_created")
```
--------------------------------
### Define Application Secrets with Pydantic BaseSettings
Source: https://docs.takk.cloud/llms/secrets
This snippet demonstrates how to define application secrets using Pydantic's BaseSettings class. It specifies required secrets like API keys and database connection URLs, ensuring type safety and clear configuration. Takk uses these definitions to manage secrets securely across environments.
```python
from pydantic import SecretStr
from pydantic_settings import BaseSettings
class SharedSecrets(BaseSettings):
    openai_api_key: SecretStr
    stripe_secret_key: SecretStr
```
```python
from pydantic import PostgresDsn, SecretStr
from pydantic_settings import BaseSettings

class SharedSecrets(BaseSettings):
    openai_api_key: SecretStr
    psql_url: PostgresDsn
```
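By default, pydantic-settings resolves each field from an environment variable matching the field name, case-insensitively. A stdlib-only sketch of that lookup (a hypothetical stand-in, not pydantic's implementation):

```python
import os

class EnvSettings:
    """Illustrative stand-in for pydantic_settings.BaseSettings field resolution."""
    def __init__(self, *field_names: str) -> None:
        # Case-insensitive view of the current environment
        env = {key.lower(): value for key, value in os.environ.items()}
        for name in field_names:
            if name.lower() not in env:
                raise ValueError(f"missing required secret: {name}")
            setattr(self, name, env[name.lower()])

os.environ["OPENAI_API_KEY"] = "sk-test"
os.environ["PSQL_URL"] = "postgresql://localhost/app"

secrets = EnvSettings("openai_api_key", "psql_url")
print(secrets.psql_url)  # → postgresql://localhost/app
```

A missing variable raises immediately, which is the same fail-fast behaviour a required `BaseSettings` field gives you at startup.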
--------------------------------
### Schedule a Python Job Using Cron Expressions
Source: https://docs.takk.cloud/llms/jobs
This Python snippet shows how to configure a Takk Job to run on a recurring schedule using a cron expression. It includes setting the cron schedule, job arguments, specific secrets, and compute resources.
```python
from takk.models import Compute, Project, Job
from takk.secrets import MlflowConfig, S3StorageConfig
from src.pokemon import TrainConfig, train_model  # defined elsewhere in the project

project = Project(
    name="mlops-example",
    shared_secrets=[S3StorageConfig],
    train_pokemon_model=Job(
        train_model,
        cron_schedule="0 3 * * *",  # Runs daily at 3 AM
        arguments=TrainConfig(),
        secrets=[MlflowConfig],
        compute=Compute(
            mvcpu_limit=1000,
            mb_memory_limit=1024
        )
    ),
)
```
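A cron expression has five fields: minute, hour, day of month, month, day of week. A rough stdlib sketch (supporting only `*` and single numbers, not ranges, steps, or lists) shows why `"0 3 * * *"` fires daily at 03:00:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Minimal cron matcher: only '*' and single numbers per field."""
    minute, hour, dom, month, dow = expr.split()
    # Cron uses 0 = Sunday; isoweekday() uses 7 = Sunday, so map 7 -> 0
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    return all(
        field == "*" or int(field) == value
        for field, value in zip([minute, hour, dom, month, dow], values)
    )

print(cron_matches("0 3 * * *", datetime(2024, 5, 1, 3, 0)))    # → True
print(cron_matches("0 3 * * *", datetime(2024, 5, 1, 12, 30)))  # → False
```

Real schedulers handle ranges (`1-5`), steps (`*/15`), and lists (`1,15`); this sketch only illustrates the field-by-field matching idea.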
--------------------------------
### Deploy Streamlit Server in Takk Project (Python)
Source: https://docs.takk.cloud/llms/network_apps
This snippet shows how to deploy a Streamlit application using Takk's StreamlitApp model, passing the app's main function and the secret settings classes it needs.
```python
from takk.models import Project, StreamlitApp
from takk.secrets import MlflowConfig
from src.pokemon_app import main

project = Project(
    name="streamlit-apps",
    is_legendary_app=StreamlitApp(
        main,
        secrets=[MlflowConfig],
    )
)
```
--------------------------------
### Define Custom NetworkApp in Takk Project (Python)
Source: https://docs.takk.cloud/llms/network_apps
This snippet shows how to define a custom NetworkApp within a Takk Project configuration, specifying the command to run and the port to expose.
```python
from takk.models import Project, NetworkApp

project = Project(
    name="my-custom-server",
    custom_network_app=NetworkApp(
        command=["/bin/bash", "-c", "uv run main.py"],
        port=8000,
    ),
)
```
--------------------------------
### Queue Work for a Takk Worker (Python)
Source: https://docs.takk.cloud/llms/queues
Shows how to enqueue a task for execution by a defined worker. The task consists of the function to run and its corresponding request payload instance. The caller does not wait for completion.
```python
await priority_queue.queue(
    send_email,
    SendEmail(email="ola@nordmann.no", content="...")
)
```
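The enqueue-without-waiting behaviour can be sketched with `asyncio.Queue`, an in-process stand-in for Takk's durable worker queue (illustrative only): the caller puts a `(function, request)` pair on the queue and moves on, while a separate consumer task drains it.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class SendEmail:
    email: str
    content: str

sent: list[str] = []

async def send_email(request: SendEmail) -> None:
    sent.append(request.email)

async def worker(queue: asyncio.Queue) -> None:
    # Consume (function, request) pairs, mirroring priority_queue.queue(fn, req)
    while True:
        fn, request = await queue.get()
        await fn(request)
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    consumer = asyncio.create_task(worker(queue))
    # The caller only enqueues; it does not wait for send_email to run.
    await queue.put((send_email, SendEmail(email="ola@nordmann.no", content="...")))
    assert sent == []    # enqueued, but not yet processed
    await queue.join()   # wait here only to observe the result
    consumer.cancel()
    try:
        await consumer
    except asyncio.CancelledError:
        pass

asyncio.run(main())
print(sent)  # → ['ola@nordmann.no']
```

Unlike this sketch, Takk's queue persists tasks and runs workers as separate deployed services, so the producer and consumer need not share a process.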
--------------------------------
### Define a Takk Worker and Add to Project (Python)
Source: https://docs.takk.cloud/llms/queues
This snippet demonstrates how to instantiate a Worker with a specific queue name and include it in the project's worker list for deployment and task processing.
```python
from takk.models import Project, Worker

priority_queue = Worker("priority_queue")

project = Project(
    name="mlops-example",
    workers=[priority_queue],
)
```
--------------------------------
### Define Subscriber Functions
Source: https://docs.takk.cloud/llms/pubsub
Defines functions that will act as subscribers to PubSub events. These functions accept the event model as an argument and contain the logic to be executed when a message is published to the corresponding subject.
```python
def send_verify_email(user: CreatedUser) -> None:
    ...

def predict_cluster(user: CreatedUser) -> None:
    ...
```
--------------------------------
### Register Project Secrets with Takk
Source: https://docs.takk.cloud/llms/secrets
This code illustrates how to register your Pydantic settings classes with Takk to manage secrets. By including the secrets class in the `shared_secrets` list of the `Project` definition, Takk automatically handles the provisioning and assignment of these secrets for your application.
```python
project = Project(
    name="my-project",
    shared_secrets=[ObjectStorageConfig, SharedSecrets],
    server=FastAPIApp(app),
)
```
--------------------------------
### Define Typed Event Model with Pydantic
Source: https://docs.takk.cloud/llms/pubsub
Defines a structured and validated event model using Pydantic's BaseModel. This ensures that all messages published to a PubSub topic adhere to a specific schema, facilitating robust event handling.
```python
from pydantic import BaseModel
class CreatedUser(BaseModel):
    id: int
    name: str
    email: str
```
--------------------------------
### Define Typed Request Model for Worker Tasks (Python)
Source: https://docs.takk.cloud/llms/queues
Illustrates defining a request model using Pydantic's BaseModel for structured data passed to worker functions. The worker function can be synchronous or asynchronous.
```python
from pydantic import BaseModel
class SendEmail(BaseModel):
    email: str
    content: str

async def send_email(request: SendEmail) -> None:
    # Implementation
    ...
```