Run Claude Code with Your ChatGPT Subscription
If you’re already paying for ChatGPT Plus, Pro, or Max, you can route Claude Code through your existing subscription. No API key, no separate billing. Your normal claude command stays untouched - you get a second command, claude-codex, that runs through your ChatGPT quota instead.
The trick is LiteLLM - an open-source proxy that translates between AI provider formats. You point Claude Code at a local LiteLLM proxy, and the proxy forwards everything to OpenAI’s ChatGPT backend using your subscription auth.
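To make the translation concrete, here is a toy sketch of the kind of mapping the proxy performs: an Anthropic-style `/v1/messages` body becomes an OpenAI-style chat body. This is illustrative only; the function name and simplifications are mine, and the real LiteLLM code also handles streaming, tools, and images.

```python
# Illustrative sketch only -- not LiteLLM's actual code. Field names follow
# the public Anthropic and OpenAI request shapes; everything else is simplified.

def anthropic_to_openai(body: dict) -> dict:
    """Map an Anthropic /v1/messages request to an OpenAI chat request."""
    messages = []
    if body.get("system"):
        # Anthropic keeps the system prompt top-level; OpenAI puts it in messages
        messages.append({"role": "system", "content": body["system"]})
    for m in body.get("messages", []):
        content = m["content"]
        if isinstance(content, list):
            # Flatten Anthropic content blocks down to plain text
            content = " ".join(b["text"] for b in content if b.get("type") == "text")
        messages.append({"role": m["role"], "content": content})
    return {
        "model": body["model"],
        "max_tokens": body.get("max_tokens", 1024),
        "messages": messages,
    }

req = {
    "model": "chatgpt-sonnet",
    "system": "You are terse.",
    "messages": [{"role": "user", "content": [{"type": "text", "text": "hi"}]}],
}
print(anthropic_to_openai(req)["messages"])
# [{'role': 'system', 'content': 'You are terse.'}, {'role': 'user', 'content': 'hi'}]
```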
I’ve been using this setup daily. It works surprisingly well for most coding tasks, though there are real limitations worth knowing about upfront.
This guide is for macOS with conda already installed. If you don’t have it, install miniconda.
Step 1 - Install LiteLLM
```bash
conda create -n litellm python=3.12 -y
conda activate litellm
pip install "litellm[proxy]==1.83.0"
conda deactivate
```

Verify:
```bash
conda run -n litellm pip show litellm | grep Version
# Must show: Version: 1.83.0
```

Step 2 - Create the config
This tells LiteLLM how to map Claude Code’s model requests to ChatGPT models.
```bash
mkdir -p ~/.litellm
cat > ~/.litellm/chatgpt-config.yaml << 'EOF'
model_list:
  - model_name: chatgpt-sonnet
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.4
      supports_system_message: false

  - model_name: chatgpt-haiku
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.3-codex-spark
      supports_system_message: false

  - model_name: chatgpt-opus
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.4-pro
      supports_system_message: false

  - model_name: chatgpt-codex
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.3-codex
      supports_system_message: false

litellm_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
EOF
```

The `model_name` values are what Claude Code asks for. The `model` values under `litellm_params` are what actually get called via your ChatGPT subscription. `supports_system_message: false` tells LiteLLM to convert system messages into user messages, since the ChatGPT subscription backend rejects system messages outright.
| Claude Code asks for | LiteLLM routes to | Best for |
|---|---|---|
| `chatgpt-sonnet` | `chatgpt/gpt-5.4` | Default coding model |
| `chatgpt-haiku` | `chatgpt/gpt-5.3-codex-spark` | Fast tasks |
| `chatgpt-opus` | `chatgpt/gpt-5.4-pro` | Complex reasoning |
| `chatgpt-codex` | `chatgpt/gpt-5.3-codex` | Code generation |
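What `supports_system_message: false` means in practice: before the request reaches the ChatGPT backend, system messages get rewritten as user messages. Here is a minimal sketch of the idea; the function name and logic are simplified stand-ins of mine, not LiteLLM internals (the real converter, patched in Step 3, also merges adjacent messages):

```python
# Toy illustration of system-to-user conversion -- not LiteLLM's actual code.

def fold_system_into_user(messages: list) -> list:
    """Rewrite system messages as user messages, leaving the rest alone."""
    return [
        {"role": "user" if m["role"] == "system" else m["role"], "content": m["content"]}
        for m in messages
    ]

converted = fold_system_into_user([
    {"role": "system", "content": "Be terse."},
    {"role": "user", "content": "Explain monads."},
])
print(converted)
# [{'role': 'user', 'content': 'Be terse.'}, {'role': 'user', 'content': 'Explain monads.'}]
```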
Step 3 - Patch a bug in LiteLLM 1.83.0
Claude Code sends message content in the Anthropic format - arrays of content blocks like [{"type": "text", "text": "..."}]. LiteLLM’s system message converter assumes plain strings and crashes when it tries to concatenate them.
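You can reproduce the failure mode in isolation. The snippet below is not LiteLLM's code, but it shows why naive concatenation breaks on Anthropic-style content blocks, and what flattening them first looks like:

```python
# Anthropic-format content: a list of blocks, not a plain string
anthropic_content = [{"type": "text", "text": "You are a coding assistant."}]

# What the unpatched converter effectively attempts:
try:
    anthropic_content + " " + "next user message"
except TypeError as e:
    print(e)  # can only concatenate list (not "str") to list

# The fix: flatten text blocks to a string before concatenating
text = " ".join(b["text"] for b in anthropic_content if b.get("type") == "text")
merged = text + " " + "next user message"
print(merged)  # You are a coding assistant. next user message
```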
Find the file to patch:
```bash
conda activate litellm
FACTORY=$(python -c "import litellm; print(litellm.__path__[0])")/litellm_core_utils/prompt_templates/factory.py
echo "$FACTORY"
conda deactivate
```

Back it up, then open it:
```bash
cp "$FACTORY" "$FACTORY.bak"
nano "$FACTORY"
```

Search for the `map_system_message_pt` function and replace it entirely with:
```python
def map_system_message_pt(messages: list) -> list:
    """
    Convert system messages to user messages.
    Handles both string content and Anthropic-style list content blocks.
    """
    def _to_str(content):
        if isinstance(content, str):
            return content
        if isinstance(content, list):
            return " ".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
        return str(content)

    new_messages = []
    for m in messages:
        if m["role"] == "system":
            sys_text = _to_str(m["content"])
            if new_messages and new_messages[-1]["role"] == "user":
                prev_text = _to_str(new_messages[-1]["content"])
                new_messages[-1]["content"] = sys_text + " " + prev_text
            else:
                new_messages.append({"role": "user", "content": sys_text})
        else:
            new_messages.append(m)
    return new_messages
```

Save and close. No reinstall needed - Python picks it up on next import.
Step 4 - Generate a local auth key
This key secures the connection between Claude Code and the proxy. It never leaves your machine.
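The key format is nothing magic: a fixed `sk-local-` prefix plus 16 random bytes as hex. If you ever want to mint one from Python instead of the shell, the stdlib equivalent would be:

```python
import secrets

# Equivalent of: echo "sk-local-$(openssl rand -hex 16)"
key = "sk-local-" + secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
print(key)  # e.g. sk-local-6f1c... (different on every run)
```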
```bash
echo "sk-local-$(openssl rand -hex 16)" > ~/.litellm/master-key.txt
chmod 600 ~/.litellm/master-key.txt
cat ~/.litellm/master-key.txt
```

Step 5 - Create the launcher scripts
Two scripts: one to start the proxy, one to launch Claude Code through it.
The proxy launcher
```bash
cat > ~/.litellm/start-proxy.sh << 'SCRIPT'
#!/bin/bash
# Find conda regardless of install location (miniforge, miniconda, anaconda)
for p in "$HOME/miniforge3" "$HOME/miniconda3" "$HOME/anaconda3" "$HOME/.conda"; do
  [ -x "$p/bin/conda" ] && eval "$($p/bin/conda shell.bash hook)" && break
done
conda activate litellm
export LITELLM_MASTER_KEY="$(cat ~/.litellm/master-key.txt)"
echo "═══════════════════════════════════════════════"
echo " LiteLLM proxy → ChatGPT subscription"
echo " Running on http://127.0.0.1:4000"
echo " Press Ctrl+C to stop"
echo "═══════════════════════════════════════════════"
litellm --config ~/.litellm/chatgpt-config.yaml --port 4000
SCRIPT
```
```bash
chmod +x ~/.litellm/start-proxy.sh
```

The claude-codex command
This script checks that the proxy is running, sets the right environment variables, and launches Claude Code.
```bash
mkdir -p ~/bin
cat > ~/bin/claude-codex << 'SCRIPT'
#!/bin/bash
#
# Launch Claude Code powered by ChatGPT subscription via LiteLLM.
# Your normal `claude` command is completely unaffected.
#

MASTER_KEY="$(cat ~/.litellm/master-key.txt 2>/dev/null)"

if [ -z "$MASTER_KEY" ]; then
  echo "Error: No master key found at ~/.litellm/master-key.txt"
  echo "Run the setup steps first."
  exit 1
fi

if ! curl -s http://127.0.0.1:4000/health > /dev/null 2>&1; then
  echo ""
  echo " LiteLLM proxy is not running."
  echo ""
  echo "Start it first in another terminal tab:"
  echo "  ~/.litellm/start-proxy.sh"
  echo ""
  exit 1
fi

export ANTHROPIC_BASE_URL="http://127.0.0.1:4000"
export ANTHROPIC_AUTH_TOKEN="$MASTER_KEY"
export ANTHROPIC_MODEL="chatgpt-sonnet"
export ANTHROPIC_DEFAULT_SONNET_MODEL="chatgpt-sonnet"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="chatgpt-haiku"
export ANTHROPIC_DEFAULT_OPUS_MODEL="chatgpt-opus"

exec claude "$@"
SCRIPT
```
```bash
chmod +x ~/bin/claude-codex
```

Add `~/bin` to your PATH:

```bash
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
```

Step 6 - First run and OAuth login
Terminal tab 1 - start the proxy:
```bash
~/.litellm/start-proxy.sh
```

Terminal tab 2 - launch Claude Code:
```bash
claude-codex
```

On the first request, the proxy terminal will display:
```text
Please visit: https://auth0.openai.com/activate
And enter code: XXXX-XXXX
```

ChatGPT may prompt you to enable device authentication in your account settings before this works; follow the instructions it shows before proceeding.
Open the URL, log in with your OpenAI account, enter the code. Tokens are saved locally - you won’t need to repeat this unless they expire.
Daily usage
```text
Tab 1:  ~/.litellm/start-proxy.sh   <- leave running
Tab 2:  claude-codex                <- ChatGPT-powered Claude Code
        claude                      <- normal Claude Code (unchanged)
```

Switch models on the fly:
```bash
claude-codex --model chatgpt-codex   # GPT-5.3 Codex for code-heavy tasks
claude-codex --model chatgpt-opus    # GPT-5.4 Pro for complex reasoning
```

Optional: Auto-start LiteLLM on login (macOS)
Tired of keeping a terminal tab open for the proxy? On macOS, a launchd agent starts it on login, restarts it if it crashes, and logs output. No second terminal needed.
Create the plist:
```bash
cat > ~/Library/LaunchAgents/com.litellm.proxy.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.litellm.proxy</string>

  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>-c</string>
    <string>source ~/.litellm/start-proxy.sh</string>
  </array>

  <key>RunAtLoad</key>
  <true/>

  <key>KeepAlive</key>
  <true/>

  <key>StandardOutPath</key>
  <string>/tmp/litellm-proxy.log</string>

  <key>StandardErrorPath</key>
  <string>/tmp/litellm-proxy.err</string>
</dict>
</plist>
EOF
```

Load it:
```bash
launchctl load ~/Library/LaunchAgents/com.litellm.proxy.plist
```

That’s it. It starts now, and on every future login. Manage it with:
```bash
launchctl list | grep litellm        # check if running
launchctl stop com.litellm.proxy     # stop
launchctl start com.litellm.proxy    # start
launchctl unload ~/Library/LaunchAgents/com.litellm.proxy.plist  # remove permanently
tail -f /tmp/litellm-proxy.log       # check logs
```

File locations
| File | Purpose |
|---|---|
| `~/.litellm/chatgpt-config.yaml` | Model routing config |
| `~/.litellm/master-key.txt` | Local auth key |
| `~/.litellm/start-proxy.sh` | Proxy launcher |
| `~/bin/claude-codex` | The claude-codex command |
| conda env: `litellm` | Python env with LiteLLM |
| `~/.claude/settings.json` | Not modified |
Available ChatGPT subscription models
| Model string | Best for |
|---|---|
| `chatgpt/gpt-5.4` | Latest GPT, strong all-rounder |
| `chatgpt/gpt-5.4-pro` | Hardest problems, deep reasoning (Pro/Max only) |
| `chatgpt/gpt-5.3-codex` | Code generation |
| `chatgpt/gpt-5.3-codex-spark` | Fast code tasks |
| `chatgpt/gpt-5.3-instant` | Fastest responses |
| `chatgpt/gpt-5.3-chat-latest` | General conversation |
Troubleshooting
“Model not found” - Claude Code is requesting a model name the proxy doesn’t recognize. Check the proxy terminal for the exact string, then add it to chatgpt-config.yaml or adjust the env vars in ~/bin/claude-codex.
OAuth token expired - The proxy terminal will re-prompt with a new device code. Follow the browser auth flow again.
Rate limits - Your subscription tier determines limits. Plus ($20/mo) has lower ceilings; Pro and Max are better for heavy agentic use.
“System messages are not allowed” - You’re missing supports_system_message: false under litellm_params in your config. See Step 2.
“can only concatenate list (not str) to list” - You haven’t applied the patch from Step 3. Claude Code sends content as Anthropic-format arrays and LiteLLM’s merger assumes plain strings.
Verify LiteLLM is safe:

```bash
conda run -n litellm pip show litellm | grep Version   # must show exactly 1.83.0
find ~ -name "litellm_init.pth" 2>/dev/null            # should find nothing
ls ~/.config/sysmon/ 2>/dev/null                       # should not exist
```