Anthropic Courses (https://github.com/anthropics/courses)
# Anthropic Courses

Anthropic Courses is an official educational repository containing comprehensive tutorials for working with Claude AI models and the Anthropic API. The repository covers five progressive courses: API fundamentals, prompt engineering, real-world prompting, prompt evaluations, and tool use. These courses teach developers how to integrate Claude into their applications through hands-on Jupyter notebook tutorials with practical examples.

The courses are designed to be completed in sequence, starting with basic API interactions and progressing through advanced topics such as multi-turn conversations, streaming responses, vision capabilities, and function calling (tool use). Each course builds on concepts from previous lessons, providing a structured learning path from beginner to advanced Claude integration patterns.

## Getting Started with the Anthropic SDK

Install the Anthropic Python SDK and make your first API request to Claude. The SDK requires Python 3.7.1+ and automatically reads the API key from the `ANTHROPIC_API_KEY` environment variable.

```python
# Install the SDK:
#   pip install anthropic python-dotenv
from dotenv import load_dotenv
from anthropic import Anthropic

# Load the API key from a .env file (ANTHROPIC_API_KEY=your-key-here)
load_dotenv()

# The client automatically uses the ANTHROPIC_API_KEY environment variable
client = Anthropic()

# Make a basic request
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1000,
    messages=[
        {"role": "user", "content": "Hi there! Please write me a haiku about a pet chicken"}
    ]
)
print(response.content[0].text)
# Output:
# Feathered friend clucking,
# Scratching in the dirt all day,
# Loyal pet chicken.
```

## Messages API Format

The Messages API uses a conversation format with alternating user and assistant messages. Each message requires a `role` ("user" or "assistant") and a `content` field. Messages must start with a user message and alternate between roles.
```python
from anthropic import Anthropic

client = Anthropic()

# Single-message request
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1000,
    messages=[
        {"role": "user", "content": "What flavors are used in Dr. Pepper?"}
    ]
)

# Access the response text
print(response.content[0].text)

# The response object contains:
# - id: unique identifier
# - content: list of content blocks with generated text
# - model: the model used
# - stop_reason: why generation stopped ("end_turn", "max_tokens", "tool_use")
# - usage: token counts (input_tokens, output_tokens)

# Multi-turn conversation with history
messages = [
    {"role": "user", "content": "Hello Claude! How are you today?"},
    {"role": "assistant", "content": "Hello! I'm doing well, thank you. How can I assist you?"},
    {"role": "user", "content": "Can you tell me a fun fact about ferrets?"},
]
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=500,
    messages=messages
)
print(response.content[0].text)
```

## Few-Shot Prompting with Messages

Use conversation history to provide examples that guide Claude's output format. This technique is especially useful for standardizing response formats in tasks like sentiment analysis or classification.

```python
from anthropic import Anthropic

client = Anthropic()

# Provide examples to standardize the output format
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=500,
    messages=[
        {"role": "user", "content": "Unpopular opinion: Pickles are disgusting. Don't @ me"},
        {"role": "assistant", "content": "NEGATIVE"},
        {"role": "user", "content": "I think my love for pickles might be getting out of hand. I just bought a pickle-shaped pool float"},
        {"role": "assistant", "content": "POSITIVE"},
        {"role": "user", "content": "Seriously why would anyone ever eat a pickle? Those things are nasty!"},
        {"role": "assistant", "content": "NEGATIVE"},
        # New tweet to analyze
        {"role": "user", "content": "Just tried the new spicy pickles from @PickleCo, and my taste buds are doing a happy dance!"},
    ]
)
print(response.content[0].text)
# Output: POSITIVE
```

## Model Parameters: max_tokens, temperature, stop_sequences

Control Claude's output with parameters for token limits, randomness, and stop conditions. Use a lower temperature (0) for deterministic analytical tasks and a higher temperature (1) for creative tasks.

```python
from anthropic import Anthropic

client = Anthropic()

# max_tokens: maximum tokens Claude can generate (required)
# A low max_tokens value truncates the output
truncated = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=10,
    messages=[{"role": "user", "content": "Write me a poem"}]
)
print(truncated.stop_reason)  # "max_tokens" - output was cut off

# temperature: controls randomness (0-1, default 1)
# temperature=0 produces more deterministic outputs
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=100,
    temperature=0,
    messages=[{"role": "user", "content": "Come up with a name for an alien planet. Respond with a single word."}]
)
# With temperature=0, repeated runs give consistent results

# stop_sequences: stop generation when one of these strings is encountered
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=500,
    stop_sequences=["}"],
    messages=[{"role": "user", "content": "Generate a JSON object representing a person with name, email, phone."}]
)
print(response.stop_reason)    # "stop_sequence"
print(response.stop_sequence)  # "}"
# Note: the stop sequence itself is NOT included in the output
```

## System Prompts

Use system prompts to set context, define Claude's role, and establish behavior guidelines for the entire conversation.
```python
from anthropic import Anthropic

client = Anthropic()

# A system prompt defines Claude's role and behavior
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1000,
    system="You are a helpful foreign language tutor that always responds in French.",
    messages=[
        {"role": "user", "content": "Hey there, how are you?!"}
    ]
)
print(response.content[0].text)
# Output: Bonjour ! Je suis ravi de vous rencontrer. Comment allez-vous aujourd'hui ?

# Combining a system prompt with stop_sequences for controlled output
def generate_questions(topic, num_questions=3):
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=500,
        system=f"You are an expert on {topic}. Generate thought-provoking questions about this topic.",
        messages=[
            {"role": "user", "content": f"Generate {num_questions} questions about {topic} as a numbered list."}
        ],
        stop_sequences=[f"{num_questions+1}."]  # Stop before generating extra questions
    )
    print(response.content[0].text)

generate_questions(topic="free will", num_questions=3)
```

## Streaming Responses

Use streaming to receive content as it is generated, dramatically reducing time-to-first-token for a better user experience in interactive applications.
```python
from anthropic import Anthropic

client = Anthropic()

# Basic streaming with stream=True
stream = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1000,
    stream=True,
    messages=[{"role": "user", "content": "How do large language models work?"}]
)

# Iterate over stream events
for event in stream:
    if event.type == "content_block_delta":
        print(event.delta.text, end="", flush=True)
    elif event.type == "message_start":
        print(f"Input tokens: {event.message.usage.input_tokens}")
    elif event.type == "message_delta":
        print(f"\nOutput tokens: {event.usage.output_tokens}")

# Async streaming with helpers
from anthropic import AsyncAnthropic

async_client = AsyncAnthropic()

async def streaming_with_helpers():
    async with async_client.messages.stream(
        max_tokens=1024,
        model="claude-3-haiku-20240307",
        messages=[{"role": "user", "content": "Write me a sonnet about orchids"}]
    ) as stream:
        async for text in stream.text_stream:
            print(text, end="", flush=True)
        # Get the complete message after streaming
        final_message = await stream.get_final_message()
        print(f"\nTotal output tokens: {final_message.usage.output_tokens}")

# Run with: await streaming_with_helpers()
```

## Vision: Image Analysis

Claude 3 models can analyze images provided as base64-encoded data. Supported formats include JPEG, PNG, GIF, and WebP.
```python
import base64
import mimetypes
from anthropic import Anthropic

client = Anthropic()

# Helper function to create image message blocks
def create_image_message(image_path):
    with open(image_path, "rb") as image_file:
        binary_data = image_file.read()
    base64_string = base64.b64encode(binary_data).decode('utf-8')
    mime_type, _ = mimetypes.guess_type(image_path)
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": mime_type,
            "data": base64_string
        }
    }

# Single image with a text prompt
messages = [
    {
        "role": "user",
        "content": [
            create_image_message("./photo.png"),
            {"type": "text", "text": "What is shown in this image?"}
        ]
    }
]
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2048,
    messages=messages
)
print(response.content[0].text)

# Multiple images with labels (improves accuracy)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Image 1:"},
            create_image_message('./animal1.png'),
            {"type": "text", "text": "Image 2:"},
            create_image_message('./animal2.png'),
            {"type": "text", "text": "What are these animals?"}
        ]
    }
]
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2048,
    messages=messages
)
print(response.content[0].text)
```

## Vision: Images from URLs

Fetch and encode images from URLs for analysis with Claude.
```python
import base64
import httpx
from anthropic import Anthropic

client = Anthropic()

def get_image_from_url(image_url):
    response = httpx.get(image_url)
    image_content = response.content
    # Determine the media type from the URL extension
    extension = image_url.split(".")[-1].lower()
    media_types = {"jpg": "image/jpeg", "jpeg": "image/jpeg", "png": "image/png", "gif": "image/gif"}
    image_media_type = media_types.get(extension, "image/jpeg")
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": image_media_type,
            "data": base64.b64encode(image_content).decode("utf-8"),
        },
    }

# Analyze an image from a URL
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Church_of_light.jpg/1599px-Church_of_light.jpg"
messages = [
    {
        "role": "user",
        "content": [
            get_image_from_url(url),
            {"type": "text", "text": "Describe this image."}
        ],
    }
]
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2048,
    messages=messages
)
print(response.content[0].text)
```

## Tool Use: Defining Tools

Define tools with JSON Schema to extend Claude's capabilities. Tools enable Claude to request external function calls for tasks like calculations, API calls, or database queries.

```python
from anthropic import Anthropic

client = Anthropic()

# Define a calculator tool
calculator_tool = {
    "name": "calculator",
    "description": "A simple calculator that performs basic arithmetic operations.",
    "input_schema": {
        "type": "object",
        "properties": {
            "operation": {
                "type": "string",
                "enum": ["add", "subtract", "multiply", "divide"],
                "description": "The arithmetic operation to perform."
            },
            "operand1": {"type": "number", "description": "The first operand."},
            "operand2": {"type": "number", "description": "The second operand."}
        },
        "required": ["operation", "operand1", "operand2"]
    }
}

# The actual calculator function
def calculator(operation, operand1, operand2):
    operations = {
        "add": lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
        "multiply": lambda a, b: a * b,
        "divide": lambda a, b: a / b if b != 0 else "Error: Division by zero"
    }
    return operations[operation](operand1, operand2)

# Tell Claude about the tool
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=300,
    system="You have access to tools, but only use them when necessary.",
    tools=[calculator_tool],
    messages=[{"role": "user", "content": "Multiply 1984135 by 9343116"}]
)

# Check whether Claude wants to use the tool
if response.stop_reason == "tool_use":
    tool_use = response.content[-1]
    print(f"Tool: {tool_use.name}")
    print(f"Input: {tool_use.input}")
    # Execute the tool
    result = calculator(**tool_use.input)
    print(f"Result: {result}")  # 18538003464660
```

## Tool Use: Complete Workflow

Implement the full tool use workflow: send the request, handle the tool call, return the results, and get the final response.

```python
import wikipedia
from anthropic import Anthropic

client = Anthropic()

# Wikipedia search tool
def get_article(search_term):
    results = wikipedia.search(search_term)
    page = wikipedia.page(results[0], auto_suggest=False)
    return page.content

article_search_tool = {
    "name": "get_article",
    "description": "Retrieve an up-to-date Wikipedia article.",
    "input_schema": {
        "type": "object",
        "properties": {
            "search_term": {
                "type": "string",
                "description": "The search term to find a Wikipedia article"
            }
        },
        "required": ["search_term"]
    }
}

def answer_question(question):
    system_prompt = """Answer questions using the get_article tool only when you need current information you weren't trained on. Otherwise, answer directly."""
    messages = [{"role": "user", "content": question}]
    response = client.messages.create(
        model="claude-3-sonnet-20240229",
        system=system_prompt,
        messages=messages,
        max_tokens=1000,
        tools=[article_search_tool]
    )

    # Handle tool use
    if response.stop_reason == "tool_use":
        tool_use = response.content[-1]
        messages.append({"role": "assistant", "content": response.content})
        if tool_use.name == "get_article":
            # Execute the tool
            wiki_result = get_article(tool_use.input["search_term"])
            # Return the tool result to Claude
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": tool_use.id,
                    "content": wiki_result
                }]
            })
            # Get the final response
            response = client.messages.create(
                model="claude-3-sonnet-20240229",
                system=system_prompt,
                messages=messages,
                max_tokens=1000,
                tools=[article_search_tool]
            )
    return response.content[0].text

# Example usage
print(answer_question("Who won the 2024 Masters Tournament?"))
# Output: Scottie Scheffler won the 2024 Masters Tournament...
```

## Building a Multi-Turn Chatbot

Create an interactive chatbot that maintains conversation history across turns.

```python
from anthropic import Anthropic

client = Anthropic()

def chat_with_claude():
    print("Welcome to the Claude Chatbot! Type 'quit' to exit.")
    conversation_history = []
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'quit':
            print("Goodbye!")
            break
        conversation_history.append({"role": "user", "content": user_input})
        response = client.messages.create(
            model="claude-3-haiku-20240307",
            messages=conversation_history,
            max_tokens=500
        )
        assistant_response = response.content[0].text
        print(f"Claude: {assistant_response}")
        conversation_history.append({"role": "assistant", "content": assistant_response})

# Streaming chatbot version
def streaming_chat():
    conversation = []
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'quit':
            break
        conversation.append({"role": "user", "content": user_input})
        print("Claude: ", end="", flush=True)
        stream = client.messages.create(
            model="claude-3-haiku-20240307",
            max_tokens=1000,
            messages=conversation,
            stream=True
        )
        assistant_response = ""
        for chunk in stream:
            if chunk.type == "content_block_delta":
                print(chunk.delta.text, end="", flush=True)
                assistant_response += chunk.delta.text
        print()  # New line
        conversation.append({"role": "assistant", "content": assistant_response})

# Run: chat_with_claude() or streaming_chat()
```

## Summary

Anthropic Courses provides a comprehensive learning path for integrating Claude AI into applications. The primary use cases include building conversational AI assistants with multi-turn context, implementing streaming responses for real-time user interfaces, analyzing images and documents with vision capabilities, and extending Claude's functionality through tool use for calculations, API calls, and database queries.

The courses demonstrate practical patterns for prompt engineering, including few-shot learning, system prompts for role definition, and structured output formatting. Integration patterns focus on the Messages API as the core interface, with conversation history management for stateful interactions.
For production applications, key patterns include: using streaming for responsive UIs, implementing tool use workflows for external data access, leveraging system prompts for consistent behavior, and applying appropriate model selection (Haiku for speed/cost, Sonnet for balance, Opus for complex tasks). The repository serves as both a learning resource and a reference for common Claude integration patterns used in real-world applications.
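The model-selection guidance above can be sketched as a small helper. This is an illustrative pattern, not part of the courses: the `pick_model` function and the profile names are assumptions, while the model IDs are the Claude 3 identifiers used elsewhere on this page.

```python
# Hypothetical mapping from task profile to model, following the guidance:
# Haiku for speed/cost, Sonnet for balance, Opus for complex tasks.
MODEL_BY_PROFILE = {
    "fast": "claude-3-haiku-20240307",         # high-volume, latency-sensitive tasks
    "balanced": "claude-3-5-sonnet-20240620",  # general-purpose workloads
    "complex": "claude-3-opus-20240229",       # deep reasoning, hardest tasks
}

def pick_model(profile: str) -> str:
    """Return a model ID for the given task profile (illustrative only)."""
    try:
        return MODEL_BY_PROFILE[profile]
    except KeyError:
        raise ValueError(f"unknown profile: {profile!r}") from None

print(pick_model("fast"))  # claude-3-haiku-20240307
```

Centralizing the model choice this way makes it easy to swap models per call site, for example `client.messages.create(model=pick_model("fast"), ...)`.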