Claude Codex Settings
https://github.com/fcakyon/claude-codex-settings

A personal setup for Claude Code/Desktop and OpenAI Codex, featuring battle-tested skills, commands,
...

Tokens: 798,432 | Snippets: 8,541 | Trust Score: 9.4 | Updated: 3 weeks ago
Context Summary (auto-generated)
# Claude Codex Settings

Claude Codex Settings is a battle-tested plugin collection for Claude Code, Claude Desktop, and OpenAI Codex that provides skills, slash commands, autonomous subagents, hooks, and MCP server integrations. The repository delivers a complete development workflow toolkit featuring Git/GitHub automation, code formatting, web search, database exploration, cloud service integrations (Azure, GCloud, Supabase), browser testing with Playwright, and academic research tools.

The plugin system uses the Claude Code plugin marketplace for easy installation and configuration. Each plugin can include skills (contextual best practices), commands (slash commands for common workflows), agents (autonomous subagents for complex tasks), hooks (pre/post tool execution handlers), and MCP server configurations. The modular architecture allows developers to install only the plugins they need while maintaining cross-tool compatibility through shared CLAUDE.md/AGENTS.md configuration files.

## Plugin Installation

Install plugins from the marketplace using Claude Code's native plugin system.
```bash
# Add the marketplace
/plugin marketplace add fcakyon/claude-codex-settings

# Install individual plugins as needed
/plugin install github-dev@claude-settings            # Git workflow + GitHub MCP
/plugin install statusline-tools@claude-settings      # Session + 5H usage statusline
/plugin install azure-tools@claude-settings           # Azure MCP (40+ services)
/plugin install playwright-tools@claude-settings      # Playwright MCP + E2E testing
/plugin install supabase-tools@claude-settings        # Supabase MCP + database patterns
/plugin install tavily-tools@claude-settings          # Web search + content extraction
/plugin install anthropic-essentials@claude-settings  # Feature dev, frontend, skills
/plugin install phd-skills@claude-settings            # Academic research toolkit

# After installing MCP plugins, run setup
/plugin-name:setup  # e.g., /slack-tools:setup

# Create symlink for cross-tool compatibility
ln -sfn CLAUDE.md AGENTS.md
```

## Git Commit Workflow

Create commits following project standards with automatic staged file analysis. The commit workflow analyzes staged files only, generates conventional commit messages with proper formatting, and optionally updates documentation. The commit-creator agent handles the complete process from analyzing diffs to executing the commit.

```bash
# Use slash command to commit staged changes
/commit-staged [optional context about the changes]

# Or run manually with conventional commit format
git add src/feature.ts tests/feature.test.ts

# Commit with HEREDOC for proper formatting
git commit -m "$(cat <<'EOF'
feat: implement user authentication system

Added JWT-based auth with refresh token rotation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"

# Analyze staged files before committing
git diff --cached --name-only  # List staged files
git diff --cached              # Show staged changes
git diff --cached --stat       # Get diff statistics

# Commit message format:
# Line 1: {type}: brief description (max 50 chars)
# Line 2: blank
# Line 3+: Optional details with motivation/findings
# Types: feat, fix, refactor, docs, style, test, build
```

## Pull Request Creation

Create pull requests with automatic branch analysis and PR body generation. The PR workflow verifies staged changes, creates feature branches if needed, analyzes all commits from the branch divergence point, and generates PR descriptions with inline code snippets. Use the pr-creator agent for the complete workflow.

```bash
# Use slash command to create PR
/create-pr [optional context]

# Or use gh CLI directly
# First, analyze all commits on your branch
git diff main...HEAD          # Full diff from branch point
git log main..HEAD --oneline  # All commits to include

# Create PR with proper format
gh pr create \
  --title "Add user authentication system" \
  --body "$(cat <<'EOF'
Implement JWT-based authentication with secure token handling.

- Add login/logout endpoints with rate limiting
- Implement refresh token rotation for security
- Add middleware for protected routes

`POST /api/auth/login -d '{"email": "user@example.com", "password": "***"}'`
EOF
)" \
  -a @me \
  -r previous-reviewer

# Find reviewers from your past PRs
gh pr list --repo owner/repo --author @me --limit 5
```

## Tavily Web Search

Use Tavily MCP tools for web search and content extraction. Tavily provides two main tools: search for discovery queries and extract for detailed content from specific URLs. The integration pattern is to search first, then extract relevant URLs for deeper analysis.
```bash
# Search for information (discovery phase)
mcp__tavily__tavily_search(
  query="Claude Code plugin development best practices",
  max_results=5
)

# Extract detailed content from specific URL
mcp__tavily__tavily_extract(
  urls=["https://docs.anthropic.com/claude-code/plugins"]
)

# Integration pattern:
# 1. Use tavily_search to find relevant sources
# 2. Analyze results to identify best URLs
# 3. Use tavily_extract for detailed content on those URLs
# 4. Process extracted content for user needs

# Environment setup required
export TAVILY_API_KEY="tvly-your-api-key"
```

## Supabase Database Operations

Query and explore Supabase databases with MCP tools and best-practice patterns. Supabase MCP provides read-only database exploration with tools for listing tables, getting schemas, and executing SQL queries. The skill includes patterns for authentication, Row Level Security (RLS), relationships, and efficient query design.

```typescript
// List all tables in the database
mcp__supabase__list_tables()

// Get schema for specific table
mcp__supabase__get_table_schema({ table_name: "users" })

// Execute read-only SQL query
mcp__supabase__execute_sql({
  query: "SELECT id, email, created_at FROM users WHERE role = 'admin'"
})

// JavaScript SDK query patterns
const { data, error } = await supabase
  .from('posts')
  .select('id, title, author:users(name)')    // Join with users table
  .eq('status', 'published')
  .order('created_at', { ascending: false })
  .range(0, 9)                                // Pagination: first 10 items

// RLS policy template (SQL) -- FOR ALL so that USING covers reads
// and WITH CHECK covers writes (WITH CHECK is invalid on FOR SELECT)
create policy "users_own_data" on posts
  to authenticated
  for all
  using ( (select auth.uid()) = user_id )
  with check ( (select auth.uid()) = user_id );
```

## Playwright E2E Testing

Browser automation and E2E testing with Playwright MCP and best practices. The Playwright integration includes an MCP server for browser automation, a comprehensive testing skill with Page Object Model patterns, and a responsive-tester agent for cross-viewport testing.
```typescript
import { test, expect, type Page } from '@playwright/test';

// Page Object Model pattern
export class LoginPage {
  constructor(private page: Page) {}

  async goto() {
    await this.page.goto('/login');
  }

  async login(email: string, password: string) {
    await this.page.getByLabel('Email').fill(email);
    await this.page.getByLabel('Password').fill(password);
    await this.page.getByRole('button', { name: 'Sign in' }).click();
  }
}

// Test with proper locator strategies
test('successful login redirects to dashboard', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  await loginPage.login('user@example.com', 'password');
  await expect(page).toHaveURL('/dashboard');

  // Mock API responses
  await page.route('**/api/users', async (route) => {
    await route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify([{ id: 1, name: 'Test User' }])
    });
  });

  // File upload handling
  await page.getByLabel('Upload file').setInputFiles('path/to/file.pdf');
});
```

```bash
# Run tests from CLI
npx playwright test --ui      # UI mode for debugging
npx playwright test --headed  # Show browser
npx playwright test --debug   # Inspector mode
```

## Azure MCP Operations

Interact with 40+ Azure services using Azure MCP Server with CLI authentication. Azure MCP provides tools for storage, Key Vault, Cosmos DB, AKS, and monitoring. Authentication uses the Azure Identity SDK via `az login`.
```bash
# Authenticate with Azure CLI
az login

# Storage operations
mcp__azure__storage_accounts_list()
mcp__azure__storage_blobs_list({ account: "myaccount", container: "data" })
mcp__azure__storage_blobs_upload({
  account: "myaccount",
  container: "data",
  blob_name: "file.json",
  content: '{"key": "value"}'
})

# Key Vault secrets
mcp__azure__keyvault_secrets_list({ vault_name: "my-vault" })
mcp__azure__keyvault_secrets_get({ vault_name: "my-vault", secret_name: "api-key" })
mcp__azure__keyvault_secrets_set({
  vault_name: "my-vault",
  secret_name: "new-secret",
  value: "secret-value"
})

# Cosmos DB queries
mcp__azure__cosmosdb_databases_list({ account: "mycosmosdb" })
mcp__azure__cosmosdb_query({
  account: "mycosmosdb",
  database: "mydb",
  container: "items",
  query: "SELECT * FROM c WHERE c.status = 'active'"
})

# Log Analytics
mcp__azure__monitor_logs_query({
  workspace_id: "workspace-guid",
  query: "AzureActivity | take 10"
})
```

## Statusline Configuration

Display session context, cost, and account-wide usage in the Claude Code statusline. The statusline shows real-time usage metrics with color coding: green (<50%), yellow (50-80%), red (>80%). It supports both the native Claude subscription/API and ccusage for third-party endpoints.

```bash
# Install and configure
/plugin install statusline-tools@claude-settings
/statusline-tools:setup

# Native statusline config (for Claude subscription/API)
# Add to ~/.claude/settings.json or .claude/settings.local.json
{
  "statusLine": {
    "type": "command",
    "command": "~/.claude/statusline.sh",
    "padding": 0
  }
}

# ccusage statusline (for z.ai and other third-party endpoints)
{
  "statusLine": {
    "type": "command",
    "command": "npx -y ccusage@latest statusline --cost-source cc",
    "padding": 0
  }
}

# Displays: [Session] context% $cost | [5H] usage% time-until-reset
# Requires jq: brew install jq (macOS) or apt install jq (Linux)
```

## Hook Development

Create pre/post tool execution hooks for command interception and modification.
Hooks receive JSON input via stdin and return JSON output. Use PreToolUse hooks for confirmation dialogs or command modification, and PostToolUse hooks for notifications or logging.

```python
#!/usr/bin/env python3
"""PreToolUse hook: show confirmation before git commit."""
import json
import sys

# Read hook input from Claude Code
input_data = json.load(sys.stdin)
tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})
command = tool_input.get("command", "")

# Only handle git commit commands
if tool_name != "Bash" or not command.strip().startswith("git commit"):
    sys.exit(0)  # Pass through other commands

# Return confirmation dialog
output = {
    "hookSpecificOutput": {
        "hookEventName": "PreToolUse",
        "permissionDecision": "ask",  # "allow", "deny", or "ask"
        "permissionDecisionReason": f"Create commit?\n\nCommand: {command}"
    }
}
print(json.dumps(output))
```

```bash
#!/usr/bin/env bash
# PostToolUse hook: OS notification on task completion
input=$(cat)
message=$(echo "$input" | grep -o '"message":"[^"]*"' | cut -d'"' -f4)

# Terminal bell for VSCode
printf '\a'

# OS notification
if [[ "$OSTYPE" == "darwin"* ]]; then
  osascript -e "display notification \"${message}\" with title \"Claude Code\""
elif command -v notify-send &> /dev/null; then
  notify-send "Claude Code" "${message}"
fi
```

## GitHub CLI Operations

Use the gh CLI for all GitHub interactions without cloning repositories.
```bash
# Read file content from repository
gh api repos/owner/repo/contents/path/to/file.py -q .content | base64 -d

# Search code across repositories
gh search code "function authenticate" --repo owner/repo
gh search code "async function" --language typescript

# Search repositories
gh search repos "claude plugin" --language python --sort stars

# View and compare
gh pr view 123 --repo owner/repo
gh pr diff 123 --repo owner/repo
gh api repos/owner/repo/compare/main...feature-branch

# List and view issues
gh issue view 456 --repo owner/repo
gh api repos/owner/repo/commits --jq '.[].sha'

# PR review comments
gh api repos/owner/repo/pulls/123/comments
```

## Alternative Model Configuration

Configure Claude Code to use alternative LLM providers via API-compatible endpoints.

```bash
# Kimi K2.5 via Moonshot API (Anthropic-compatible)
export ANTHROPIC_BASE_URL=https://api.moonshot.ai/anthropic
export ANTHROPIC_AUTH_TOKEN="your-moonshot-api-key"
export ANTHROPIC_MODEL=kimi-k2.5
export ANTHROPIC_DEFAULT_OPUS_MODEL=kimi-k2.5
export ANTHROPIC_DEFAULT_SONNET_MODEL=kimi-k2.5
export ANTHROPIC_DEFAULT_HAIKU_MODEL=kimi-k2.5
export CLAUDE_CODE_SUBAGENT_MODEL=kimi-k2.5
export ENABLE_TOOL_SEARCH=false

# Z.ai GLM models (85% cheaper than Claude)
# Configure in ~/.claude/settings-zai.json
# Main: GLM-5-Turbo, Fast: GLM-4.7-Flash

# ccproxy for any LLM (Claude subscription, GitHub Copilot, local models)
/plugin install ccproxy-tools@claude-settings
/ccproxy-tools:setup
```

The claude-codex-settings repository serves as a comprehensive toolkit for developers working with Claude Code and similar AI coding assistants. The primary use cases include automated Git workflows (commits, PRs, code review), database exploration and query patterns (Supabase, MongoDB), cloud service management (Azure, GCloud), web research and content extraction (Tavily), browser testing (Playwright), and code formatting/quality hooks.
Each plugin follows a consistent pattern: skills for contextual guidance, commands for common workflows, agents for complex autonomous tasks, and hooks for command interception. Integration with existing projects is straightforward through the plugin marketplace system.

Developers can cherry-pick specific plugins based on their stack, configure MCP servers for external service access, and extend the system with custom hooks and skills. The shared CLAUDE.md/AGENTS.md configuration ensures consistent AI behavior across Claude Code, OpenAI Codex, Gemini CLI, Cursor, and other compatible tools. The modular architecture means plugins can be added incrementally as project needs evolve, from basic Git automation to full-stack development workflows with cloud services and testing infrastructure.
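To make the custom-hook extension point concrete: the hook scripts in the Hook Development section still need to be registered before Claude Code will run them. A minimal settings sketch, assuming the Python PreToolUse hook was saved to the hypothetical path `~/.claude/hooks/confirm-commit.py`; verify the exact schema against the Claude Code hooks documentation:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 ~/.claude/hooks/confirm-commit.py"
          }
        ]
      }
    ]
  }
}
```

The `matcher` restricts the hook to Bash tool calls, so the script itself only needs to filter for `git commit`; a PostToolUse script such as the notification example would be registered the same way under a `"PostToolUse"` key.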