
Models

AI models available in Foxl

Foxl supports models from multiple providers. Use Foxl relay credits, your own API keys (BYOK), or your existing subscriptions - ChatGPT Plus/Pro, Claude Pro/Max, or Gemini - via OAuth.

Foxl Relay Models

Included with your Foxl plan. No setup needed - just sign in and chat.

| Model | Provider | Context | Best For |
| --- | --- | --- | --- |
| Claude Opus 4.7 | Bedrock | 1M tokens | Most capable for complex reasoning + agentic coding |
| Claude Opus 4.6 | Bedrock | 1M tokens | Previous flagship; still 1M context |
| Claude Sonnet 4.6 | Bedrock | 1M tokens | Best balance of speed and quality |
| Claude Haiku 4.5 | Bedrock | 200K tokens | Fastest model, near-frontier intelligence |
| GLM 5 | Bedrock (Z.AI) | 200K tokens | Agentic coding, long-horizon tasks |

Subscription (OAuth) Models

Use your existing subscriptions. Desktop only - see Providers for setup. Foxl calls the vendor API directly with your OAuth token; tool use, adaptive thinking, and streaming all flow through Foxl's normal agent loop.

Claude Code (Anthropic Pro/Max)

| Model | Context | Best For |
| --- | --- | --- |
| Opus 4.7 (Claude Code mode) | 1M tokens | Complex reasoning, coding, analysis |
| Sonnet 4.6 (Claude Code mode) | 1M tokens | Balanced speed and quality |

Claude Code (OAuth) runs in compatibility mode so requests route through your Claude Pro/Max subscription instead of pay-as-you-go "Extra usage". In this mode, Foxl-specific tools (memory, subagents, schedules, channel send, browser extension, view image) are disabled - only Bash, Read, Grep, and WebFetch are available. Haiku 4.5 is not exposed in this mode. For the full Foxl tool surface, use an Anthropic API key (BYOK) or the foxl.ai relay.
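The tool-surface restriction above can be sketched as a small gating helper. This is purely illustrative - the tool names come from the docs, but the function, mode strings, and internal tool identifiers are assumptions, not Foxl's actual implementation:

```python
# Illustrative sketch of gating the tool surface by provider mode.
# Tool names are from the docs; mode strings and identifiers are assumptions.

FULL_TOOLS = {
    "Bash", "Read", "Grep", "WebFetch",
    "memory", "subagents", "schedules",
    "channel_send", "browser_extension", "view_image",
}

# In Claude Code (OAuth) compatibility mode, only the basic tools remain.
CLAUDE_CODE_COMPAT_TOOLS = {"Bash", "Read", "Grep", "WebFetch"}

def allowed_tools(provider_mode: str) -> set[str]:
    """Return the tool set available for a given provider mode."""
    if provider_mode == "claude-code-oauth":
        return CLAUDE_CODE_COMPAT_TOOLS
    return FULL_TOOLS

print(sorted(allowed_tools("claude-code-oauth")))
```

With an API key (BYOK) or the relay, the full set is available; only the OAuth compatibility mode is restricted.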

OpenAI (ChatGPT Plus/Pro)

| Model | Context | Best For |
| --- | --- | --- |
| GPT-5.5 | 1M tokens | Latest frontier, reasoning + streaming |
| GPT-5.4 | 1M tokens | General-purpose reasoning |
| GPT-5.4 Mini | 400K tokens | Fast, cost-aware tasks |
| GPT-5.3 Codex | 400K tokens | Code-heavy tasks |

gpt-image-2 is also exposed through the built-in generate_image tool. No API key and no per-image billing - it is powered by your ChatGPT Plus/Pro OAuth session and accepts up to 10 input images for edit/compose mode.
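The 10-image input limit can be sketched as a simple validation step. The helper name, error type, and request shape below are assumptions for illustration, not the actual generate_image interface:

```python
# Illustrative sketch of the 10-image input limit for edit/compose mode.
# The helper name and request shape are assumptions, not the Foxl API.

MAX_INPUT_IMAGES = 10

def prepare_image_request(prompt: str, input_images: list[str]) -> dict:
    """Build a generate_image-style request, enforcing the input limit."""
    if len(input_images) > MAX_INPUT_IMAGES:
        raise ValueError(
            f"generate_image accepts at most {MAX_INPUT_IMAGES} input images, "
            f"got {len(input_images)}"
        )
    return {"prompt": prompt, "input_images": input_images}
```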

Gemini CLI (Google)

| Model | Context | Best For |
| --- | --- | --- |
| Gemini 2.5 Pro | 1M tokens | Long documents, deep analysis |
| Gemini 2.5 Flash | 1M tokens | Fast, cost-effective |

BYOK Models

Bring your own API key to access models from any provider. See AI Providers for setup.

| Model | Provider | Context | API Key From |
| --- | --- | --- | --- |
| Claude Opus 4.7 | Anthropic | 1M tokens | console.anthropic.com |
| Claude Opus 4.6 | Anthropic | 1M tokens | console.anthropic.com |
| Claude Sonnet 4.6 | Anthropic | 1M tokens | console.anthropic.com |
| Claude Haiku 4.5 | Anthropic | 200K tokens | console.anthropic.com |
| GPT-4.1 | OpenAI | 1M tokens | platform.openai.com |
| GPT-4.1 Mini | OpenAI | 1M tokens | platform.openai.com |
| o3 | OpenAI | 200K tokens | platform.openai.com |
| Gemini 2.5 Pro | Google | 1M tokens | aistudio.google.com |
| Gemini 2.5 Flash | Google | 1M tokens | aistudio.google.com |
| Llama 3, Mistral, etc. | Ollama | Varies | Free - ollama.com |

Adaptive Thinking

Claude Opus 4.7, Opus 4.6, and Sonnet 4.6 support adaptive thinking - Claude dynamically decides when and how much to think based on the complexity of your request. No manual budget setting needed.

  • Simple questions: Claude responds directly without thinking overhead
  • Complex problems: Claude automatically engages deep reasoning
  • Agentic workflows: Claude can think between tool calls (interleaved thinking)

Adaptive thinking (type: "adaptive") is the recommended mode for Opus 4.7, Opus 4.6, and Sonnet 4.6. Haiku 4.5 instead uses extended thinking (type: "enabled") with an explicit budget_tokens parameter.
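The two configurations can be sketched as request-payload fragments in the Anthropic Messages API style. This is a hedged sketch: the thinking shapes follow the docs above, but the model identifiers and helper function are illustrative assumptions:

```python
# Sketch of the two thinking configurations described above, expressed as
# request-payload fragments. Model identifiers are illustrative assumptions.

def thinking_config(model: str) -> dict:
    """Pick a thinking block for the given model family."""
    if model.startswith(("claude-opus-4", "claude-sonnet-4")):
        # Opus 4.7 / 4.6 and Sonnet 4.6: the model decides when to think.
        return {"type": "adaptive"}
    # Haiku 4.5: extended thinking with an explicit token budget.
    return {"type": "enabled", "budget_tokens": 4096}

request = {
    "model": "claude-haiku-4-5",
    "max_tokens": 8192,
    "thinking": thinking_config("claude-haiku-4-5"),
    "messages": [{"role": "user", "content": "Plan a refactor."}],
}
print(request["thinking"])
```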

You can toggle thinking on/off in the model selector.

Thinking and Cost

Thinking consumes output tokens. When thinking is enabled, the model may use 2-10x more output tokens depending on task complexity. This directly increases credit cost:

  • Simple question without thinking: ~0.01 credits (Sonnet)
  • Same question with thinking: ~0.05-0.15 credits (Sonnet)
  • Complex reasoning with thinking: ~0.30-1.0 credits (Opus)

For cost-sensitive usage, disable thinking for simple tasks. For complex coding, analysis, or multi-step reasoning, thinking significantly improves quality and is worth the extra cost. See Credits for detailed per-token pricing.
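A back-of-the-envelope estimate of the thinking overhead can be computed from the 2-10x output-token multiplier above. The per-token rate below is a made-up placeholder, not real Foxl pricing - see the Credits page for actual rates:

```python
# Back-of-the-envelope credit estimate for thinking overhead, using the
# 2-10x output-token multiplier stated above. The per-token rate is a
# placeholder assumption, not real pricing.

def estimate_credits(output_tokens: int, credits_per_token: float,
                     thinking: bool, multiplier: float = 5.0) -> float:
    """Estimate credit cost; thinking scales output tokens by `multiplier`."""
    if thinking:
        output_tokens = int(output_tokens * multiplier)
    return output_tokens * credits_per_token

base = estimate_credits(200, 0.00005, thinking=False)          # ~0.01 credits
with_thinking = estimate_credits(200, 0.00005, thinking=True)  # ~0.05 credits
print(base, with_thinking)
```

With these placeholder numbers the output matches the Sonnet example above: roughly 0.01 credits without thinking versus 0.05 with it.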

Model Selection

Desktop App

Click the model name in the chat input area to switch models. Your selection persists across conversations.

Web App

Click the model selector dropdown to choose a different model. The selection is saved in your browser.

Bring Your Own Key (Desktop Only)

In the desktop app, you can use your own API keys instead of Foxl credits:

  1. Go to Settings
  2. Select a provider (Anthropic, OpenAI, Google, etc.)
  3. Enter your API key
  4. Select a model from that provider

When using your own keys, no Foxl credits are consumed.

Subscription OAuth

You can use your ChatGPT Plus/Pro subscription directly - no API key needed.

OpenAI OAuth

  1. Run npx @openai/codex login to create ~/.codex/auth.json
  2. Select OpenAI (OAuth) in Settings > Providers - Foxl calls OpenAI's Codex Responses API directly with your subscription token

Foxl auto-detects your credentials. See AI Providers for details on all supported providers.
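The auto-detection step can be sketched as a check for the file that npx @openai/codex login writes. Only the ~/.codex/auth.json path comes from the docs; the function names are illustrative:

```python
# Sketch of the credential auto-detection described above: look for the
# auth file written by `npx @openai/codex login`. Function names are
# illustrative; only the ~/.codex/auth.json path comes from the docs.
from pathlib import Path

def codex_auth_path() -> Path:
    """Path where the Codex CLI stores OAuth credentials."""
    return Path.home() / ".codex" / "auth.json"

def has_codex_credentials() -> bool:
    """True if OpenAI (OAuth) can be auto-detected."""
    return codex_auth_path().is_file()

print(codex_auth_path())
```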
