
Run Claude Code with Your ChatGPT Subscription

If you’re already paying for ChatGPT Plus, Pro, or Max, you can route Claude Code through your existing subscription. No API key, no separate billing. Your normal claude command stays untouched - you get a second command, claude-codex, that runs through your ChatGPT quota instead.

The trick is LiteLLM - an open-source proxy that translates between AI provider formats. You point Claude Code at a local LiteLLM proxy, and the proxy forwards everything to OpenAI’s ChatGPT backend using your subscription auth.

I’ve been using this setup daily. It works surprisingly well for most coding tasks, though there are real limitations worth knowing about upfront.

This guide is for macOS with conda already installed. If you don’t have it, install miniconda.

Step 1 - Install LiteLLM

conda create -n litellm python=3.12 -y
conda activate litellm
pip install "litellm[proxy]==1.83.0"
conda deactivate

Verify:

conda run -n litellm pip show litellm | grep Version
# Must show: Version: 1.83.0

Step 2 - Create the config

This tells LiteLLM how to map Claude Code’s model requests to ChatGPT models.

mkdir -p ~/.litellm
cat > ~/.litellm/chatgpt-config.yaml << 'EOF'
model_list:
  - model_name: chatgpt-sonnet
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.4
      supports_system_message: false
  - model_name: chatgpt-haiku
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.3-codex-spark
      supports_system_message: false
  - model_name: chatgpt-opus
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.4-pro
      supports_system_message: false
  - model_name: chatgpt-codex
    model_info:
      mode: responses
    litellm_params:
      model: chatgpt/gpt-5.3-codex
      supports_system_message: false
general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
EOF

The model_name values are what Claude Code asks for. The model values under litellm_params are what actually gets called via your ChatGPT subscription. supports_system_message: false tells LiteLLM to convert system messages into user messages, since the ChatGPT subscription backend rejects system messages outright.

Claude Code asks for   LiteLLM routes to             Best for
chatgpt-sonnet         chatgpt/gpt-5.4               Default coding model
chatgpt-haiku          chatgpt/gpt-5.3-codex-spark   Fast tasks
chatgpt-opus           chatgpt/gpt-5.4-pro           Complex reasoning
chatgpt-codex          chatgpt/gpt-5.3-codex         Code generation
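At this layer the proxy's job is essentially an alias lookup. A minimal Python sketch of the mapping above (the dict mirrors the config file; LiteLLM's real router also handles auth, retries, and format translation):

```python
# Sketch of the alias -> backend mapping defined in chatgpt-config.yaml.
ROUTES = {
    "chatgpt-sonnet": "chatgpt/gpt-5.4",
    "chatgpt-haiku":  "chatgpt/gpt-5.3-codex-spark",
    "chatgpt-opus":   "chatgpt/gpt-5.4-pro",
    "chatgpt-codex":  "chatgpt/gpt-5.3-codex",
}

def resolve(alias: str) -> str:
    # An unknown alias is what surfaces as "Model not found" in the proxy log.
    if alias not in ROUTES:
        raise ValueError(f"Model not found: {alias}")
    return ROUTES[alias]

print(resolve("chatgpt-sonnet"))  # chatgpt/gpt-5.4
```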

Step 3 - Patch a bug in LiteLLM 1.83.0

Claude Code sends message content in the Anthropic format - arrays of content blocks like [{"type": "text", "text": "..."}]. LiteLLM’s system message converter assumes plain strings and crashes when it tries to concatenate them.
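You can reproduce the failure mode in isolation. This sketch shows what happens when list-style content hits code that assumes strings:

```python
# Anthropic-style content is a list of blocks, not a plain string:
msg = {
    "role": "system",
    "content": [{"type": "text", "text": "You are a coding assistant."}],
}

# Code that assumes string content blows up on concatenation:
try:
    merged = msg["content"] + " (merged into the next user message)"
except TypeError as e:
    print(e)  # can only concatenate list (not "str") to list
```

That TypeError is exactly the error string you'll see in the troubleshooting section below.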

Find the file to patch:

conda activate litellm
FACTORY=$(python -c "import litellm; print(litellm.__path__[0])")/litellm_core_utils/prompt_templates/factory.py
echo "$FACTORY"
conda deactivate

Back it up, then open it:

cp "$FACTORY" "$FACTORY.bak"
nano "$FACTORY"

Search for the map_system_message_pt function and replace it entirely with:

def map_system_message_pt(messages: list) -> list:
    """
    Convert system messages to user messages.
    Handles both string content and Anthropic-style list content blocks.
    """
    def _to_str(content):
        if isinstance(content, str):
            return content
        if isinstance(content, list):
            return " ".join(
                block.get("text", "") for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
        return str(content)

    new_messages = []
    for m in messages:
        if m["role"] == "system":
            sys_text = _to_str(m["content"])
            if new_messages and new_messages[-1]["role"] == "user":
                prev_text = _to_str(new_messages[-1]["content"])
                new_messages[-1]["content"] = sys_text + " " + prev_text
            else:
                new_messages.append({"role": "user", "content": sys_text})
        else:
            new_messages.append(m)
    return new_messages

Save and close. No reinstall needed - Python picks it up on next import.
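To sanity-check the merge logic before wiring it into the proxy, you can exercise the same function standalone. This sketch inlines the patched body rather than importing LiteLLM:

```python
def map_system_message_pt(messages: list) -> list:
    """Patched version: system -> user, handling string or block-list content."""
    def _to_str(content):
        if isinstance(content, str):
            return content
        if isinstance(content, list):
            return " ".join(
                block.get("text", "") for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
        return str(content)

    new_messages = []
    for m in messages:
        if m["role"] == "system":
            sys_text = _to_str(m["content"])
            if new_messages and new_messages[-1]["role"] == "user":
                prev_text = _to_str(new_messages[-1]["content"])
                new_messages[-1]["content"] = sys_text + " " + prev_text
            else:
                new_messages.append({"role": "user", "content": sys_text})
        else:
            new_messages.append(m)
    return new_messages

# An Anthropic-style system message followed by a user turn:
out = map_system_message_pt([
    {"role": "system", "content": [{"type": "text", "text": "Be terse."}]},
    {"role": "user", "content": "hello"},
])
print(out)
# [{'role': 'user', 'content': 'Be terse.'}, {'role': 'user', 'content': 'hello'}]
```

Note that a leading system message becomes its own user message, while a system message that follows a user turn gets merged into that turn.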

Step 4 - Generate a local auth key

This key secures the connection between Claude Code and the proxy. It never leaves your machine.

echo "sk-local-$(openssl rand -hex 16)" > ~/.litellm/master-key.txt
chmod 600 ~/.litellm/master-key.txt
cat ~/.litellm/master-key.txt
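If you want to double-check the key before relying on it, the format is just "sk-local-" followed by 32 hex characters (openssl emits two hex digits per random byte). A quick check, assuming openssl is on your PATH:

```shell
key="sk-local-$(openssl rand -hex 16)"
# 9 prefix chars + 32 hex chars = 41 total
if printf '%s\n' "$key" | grep -Eq '^sk-local-[0-9a-f]{32}$'; then
  echo "key format OK"
fi
```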

Step 5 - Create the launcher scripts

Two scripts: one to start the proxy, one to launch Claude Code through it.

The proxy launcher

cat > ~/.litellm/start-proxy.sh << 'SCRIPT'
#!/bin/bash
# Find conda regardless of install location (miniforge, miniconda, anaconda)
for p in "$HOME/miniforge3" "$HOME/miniconda3" "$HOME/anaconda3" "$HOME/.conda"; do
  [ -x "$p/bin/conda" ] && eval "$($p/bin/conda shell.bash hook)" && break
done
conda activate litellm
export LITELLM_MASTER_KEY="$(cat ~/.litellm/master-key.txt)"
echo "═══════════════════════════════════════════════"
echo " LiteLLM proxy → ChatGPT subscription"
echo " Running on http://127.0.0.1:4000"
echo " Press Ctrl+C to stop"
echo "═══════════════════════════════════════════════"
litellm --config ~/.litellm/chatgpt-config.yaml --port 4000
SCRIPT
chmod +x ~/.litellm/start-proxy.sh

The claude-codex command

This script checks that the proxy is running, sets the right environment variables, and launches Claude Code.

mkdir -p ~/bin
cat > ~/bin/claude-codex << 'SCRIPT'
#!/bin/bash
#
# Launch Claude Code powered by ChatGPT subscription via LiteLLM.
# Your normal `claude` command is completely unaffected.
#
MASTER_KEY="$(cat ~/.litellm/master-key.txt 2>/dev/null)"
if [ -z "$MASTER_KEY" ]; then
  echo "Error: No master key found at ~/.litellm/master-key.txt"
  echo "Run the setup steps first."
  exit 1
fi
if ! curl -s http://127.0.0.1:4000/health > /dev/null 2>&1; then
  echo ""
  echo " LiteLLM proxy is not running."
  echo ""
  echo "Start it first in another terminal tab:"
  echo "  ~/.litellm/start-proxy.sh"
  echo ""
  exit 1
fi
export ANTHROPIC_BASE_URL="http://127.0.0.1:4000"
export ANTHROPIC_AUTH_TOKEN="$MASTER_KEY"
export ANTHROPIC_MODEL="chatgpt-sonnet"
export ANTHROPIC_DEFAULT_SONNET_MODEL="chatgpt-sonnet"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="chatgpt-haiku"
export ANTHROPIC_DEFAULT_OPUS_MODEL="chatgpt-opus"
exec claude "$@"
SCRIPT
chmod +x ~/bin/claude-codex

Add ~/bin to your PATH:

echo 'export PATH="$HOME/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

Step 6 - First run and OAuth login

Terminal tab 1 - start the proxy:

~/.litellm/start-proxy.sh

Terminal tab 2 - launch Claude Code:

claude-codex

On the first request, the proxy terminal will display:

Please visit: https://auth0.openai.com/activate
And enter code: XXXX-XXXX

ChatGPT may prompt you to enable device authentication in your account settings before this works. Follow the on-screen instructions before proceeding.

Open the URL, log in with your OpenAI account, enter the code. Tokens are saved locally - you won’t need to repeat this unless they expire.

Daily usage

Tab 1: ~/.litellm/start-proxy.sh   <- leave running
Tab 2: claude-codex                <- ChatGPT-powered Claude Code
       claude                      <- normal Claude Code (unchanged)

Switch models on the fly:

claude-codex --model chatgpt-codex # GPT-5.3 Codex for code-heavy tasks
claude-codex --model chatgpt-opus # GPT-5.4 Pro for complex reasoning

Optional: Auto-start LiteLLM on login (macOS)

Tired of keeping a terminal tab open for the proxy? On macOS, a launchd agent starts it on login, restarts it if it crashes, and logs output. No second terminal needed.

Create the plist:

cat > ~/Library/LaunchAgents/com.litellm.proxy.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.litellm.proxy</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>-c</string>
    <string>source ~/.litellm/start-proxy.sh</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/tmp/litellm-proxy.log</string>
  <key>StandardErrorPath</key>
  <string>/tmp/litellm-proxy.err</string>
</dict>
</plist>
EOF

Load it:

launchctl load ~/Library/LaunchAgents/com.litellm.proxy.plist

That’s it. It starts now, and on every future login. Manage it with:

launchctl list | grep litellm # check if running
launchctl stop com.litellm.proxy # stop
launchctl start com.litellm.proxy # start
launchctl unload ~/Library/LaunchAgents/com.litellm.proxy.plist # remove permanently
tail -f /tmp/litellm-proxy.log # check logs

File locations

File                             Purpose
~/.litellm/chatgpt-config.yaml   Model routing config
~/.litellm/master-key.txt        Local auth key
~/.litellm/start-proxy.sh        Proxy launcher
~/bin/claude-codex               The claude-codex command
conda env: litellm               Python env with LiteLLM
~/.claude/settings.json          Not modified

Available ChatGPT subscription models

Model string                  Best for
chatgpt/gpt-5.4               Latest GPT, strong all-rounder
chatgpt/gpt-5.4-pro           Hardest problems, deep reasoning (Pro/Max only)
chatgpt/gpt-5.3-codex         Code generation
chatgpt/gpt-5.3-codex-spark   Fast code tasks
chatgpt/gpt-5.3-instant       Fastest responses
chatgpt/gpt-5.3-chat-latest   General conversation

Troubleshooting

“Model not found” - Claude Code is requesting a model name the proxy doesn’t recognize. Check the proxy terminal for the exact string, then add it to chatgpt-config.yaml or adjust the env vars in ~/bin/claude-codex.

OAuth token expired - The proxy terminal will re-prompt with a new device code. Follow the browser auth flow again.

Rate limits - Your subscription tier determines limits. Plus ($20/mo) has lower ceilings; Pro and Max are better for heavy agentic use.

“System messages are not allowed” - You’re missing supports_system_message: false under litellm_params in your config. See Step 2.

“can only concatenate list (not str) to list” - You haven’t applied the patch from Step 3. Claude Code sends content as Anthropic-format arrays and LiteLLM’s merger assumes plain strings.

Finally, verify that your LiteLLM install is the exact pinned version and carries no unexpected artifacts:

conda run -n litellm pip show litellm | grep Version # must show exactly 1.83.0
find ~ -name "litellm_init.pth" 2>/dev/null # should find nothing
ls ~/.config/sysmon/ 2>/dev/null # should not exist