feat(providers,auth,oauth): add provider profiles, welcome auth, and OAuth flows #2977
bloodf wants to merge 15 commits into ultraworkers:main from
Conversation
Claw could only target one generic OPENAI_BASE_URL at a time, which made switching between Z.AI, MiniMax, Moonshot, and OpenAI depend on shell mutation. Add modelProviders config profiles, provider/model resolution for /model, and runtime construction from the selected profile while preserving existing built-in provider routing.

Constraint: Keep existing env-based Anthropic, xAI, OpenAI, and DashScope routing intact.
Rejected: Reuse aliases only | aliases cannot carry base URL or credential source to the runtime client.
Confidence: medium
Scope-risk: moderate
Directive: Keep model provider credentials env-first; avoid requiring source-controlled apiKey values.
Tested: cargo fmt; cargo test -p runtime parses_model_provider_profiles_from_settings -- --nocapture; cargo test -p rusty-claude-cli configured_model_provider -- --nocapture
Not-tested: Live calls to every configured third-party provider.
The provider selector now supports both OpenAI-compatible and Anthropic-compatible configured providers, so OpenCode-style MiniMax and Kimi coding endpoints can be selected through /model without pretending they use OpenAI chat completions. The /login wizard writes provider profiles with protocol, base URL, env var, models, and default model, using current OpenCode provider ids and model lists.

Constraint: OpenCode reports minimax-coding-plan and kimi-for-coding as Anthropic-compatible endpoints
Rejected: Put MiniMax and Kimi coding models under openai-compatible profiles | the selector would resolve but runtime calls would use the wrong wire protocol
Confidence: high
Scope-risk: moderate
Directive: Keep login presets aligned with provider protocol, not only model names
Tested: cargo fmt --check
Tested: cargo check -p runtime -p api -p commands -p tools -p rusty-claude-cli
Tested: cargo test -p rusty-claude-cli configured_model_provider -- --nocapture
Tested: cargo test -p rusty-claude-cli login_subcommand_parses_and_logout_errors_helpfully -- --nocapture
Tested: cargo run -p rusty-claude-cli -- --model minimax-coding-plan status --output-format json
Tested: cargo run -p rusty-claude-cli -- --model kimi-for-coding status --output-format json
Not-tested: live API requests to each external provider model
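As a concrete illustration of the profiles described above, a `modelProviders` entry in `~/.claw/settings.json` might look like the sketch below. The field set (protocol, base URL, env var, models, default model) comes from the description; the exact key spellings and the `example-provider` entry are illustrative assumptions, not the verified schema.

```json
{
  "modelProviders": {
    "example-provider": {
      "protocol": "openai-compatible",
      "baseUrl": "https://api.example.com/v1",
      "apiKeyEnv": "EXAMPLE_API_KEY",
      "models": ["example-model-large", "example-model-small"],
      "defaultModel": "example-model-large"
    }
  }
}
```

Keeping credentials behind `apiKeyEnv` (rather than an inline `apiKey`) matches the env-first directive above.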
When opening claw without a configured provider API key, users now see an interactive welcome screen instead of a hard error. They can select a provider, enter their API key, and continue using claw immediately.

- Automatically shown at REPL startup or prompt mode when auth is missing
- Lists built-in providers (Anthropic, OpenAI, xAI)
- Sets the env var for the current process so claw works immediately
- Optionally saves the model choice to ~/.claw/settings.json
- Supports cancellation (press Enter without entering a key)
- Authenticate with a specific provider directly: `claw auth openai`
- Without a provider argument, shows the interactive picker
- Sets env vars for the current session
- Added `BuiltinProvider` struct and constants for built-in providers
- Added `check_model_auth_available()` to detect missing credentials
- Added `run_provider_welcome()` for the interactive onboarding flow
- Added `run_auth_command()` for the CLI subcommand
- Hooked welcome screen into `run_repl()` and `Prompt` mode dispatch
- Added `CliAction::Auth` variant and `parse_args` support
- Updated help text and typo-suggestion list
- Added unit tests for auth subcommand parsing
- `cargo check --workspace` passes
- New unit tests: `parse_args_auth_without_provider`, `parse_args_auth_with_provider`
…/Kimi

Adds browser-based OAuth flows on top of the auth/provider infrastructure:
- OpenAI: PKCE flow via auth.openai.com (ChatGPT/Codex accounts)
- Moonshot / Kimi: Device Authorization Flow (RFC 8628)

**New infrastructure:**
- Per-provider OAuth token storage in ~/.claw/credentials.json
- Local HTTP callback server for PKCE redirect handling
- Browser launcher (open/xdg-open/start)
- Device Authorization Flow polling

**API client integration:**
- OpenAiCompatClient falls back to saved OAuth tokens when env var unset
- Bearer token authentication for OAuth providers

**CLI integration:**
- Welcome screen shows [OAuth] tag for supported providers
- OAuth offered as recommended auth method when available
- claw auth <provider> prompts to choose OAuth or API key
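The Device Authorization Flow mentioned above boils down to polling the token endpoint until the user approves the device in their browser. Here is a minimal, self-contained sketch of that polling loop with the token request abstracted behind a closure; `PollOutcome`, `poll_for_token`, and the back-off constants are illustrative names, not the actual runtime API. Per RFC 8628, a `slow_down` response means the client must add 5 seconds to its polling interval.

```rust
use std::time::Duration;

/// Possible outcomes of one poll of the token endpoint (RFC 8628 §3.5).
enum PollOutcome {
    AuthorizationPending,
    SlowDown,
    Token(String),
    Denied,
}

/// Polls until the user approves the device, honouring the server's
/// interval and the `slow_down` back-off rule.
fn poll_for_token<F>(mut poll: F, interval: Duration, max_attempts: u32) -> Option<String>
where
    F: FnMut() -> PollOutcome,
{
    let mut wait = interval;
    for _ in 0..max_attempts {
        match poll() {
            PollOutcome::Token(t) => return Some(t),
            PollOutcome::AuthorizationPending => {}
            PollOutcome::SlowDown => wait += Duration::from_secs(5),
            PollOutcome::Denied => return None,
        }
        std::thread::sleep(wait);
    }
    None
}
```

In the real flow the closure would POST `grant_type=urn:ietf:params:oauth:grant-type:device_code` to the provider's token endpoint and map the JSON error codes onto these variants.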
REQUEST_CHANGES from dogfood/review on head. This supersedes #2975, and the original auth dogfood blocker is only partially addressed. Blockers:
Dogfood / verification:
Concrete delta needed: make built-in auth either persist securely or stop claiming durable auth, use hidden/no-echo input for all API-key/token prompts, and write OAuth credentials with restrictive permissions.
- Add runtime OAuth tests: per-provider storage round-trip, legacy key preservation, callback server code/state capture, callback server error handling, HTML escaping
- Add CLI auth tests: OAuth config detection, API key fallback order, provider metadata checks

Runtime: 15 tests pass; CLI: 7 tests pass.
OpenAI was listed twice in the welcome screen because it existed in both BUILTIN_PROVIDERS and LOGIN_PROVIDER_TEMPLATES. Remove the duplicate from templates since OpenAI is already a built-in provider with OAuth support.
- Add configurable redirect_path to ProviderOAuthConfig (default /callback; OpenAI uses /auth/callback to match Codex CLI)
- Update callback server to validate against the configurable path instead of hardcoded /callback
- Remove non-standard 'state' parameter from token exchange request body
- Add OpenAI-specific query params: id_token_add_organizations=true and codex_cli_simplified_flow=true for drop-in Codex CLI compatibility
- Export loopback_redirect_uri_with_path from runtime
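To make the configurable-path validation concrete, here is a minimal sketch of how the loopback callback server might parse the first line of the redirect request and capture `code` and `state` only when the path matches. `parse_callback` is an illustrative helper under assumed naming, not the actual runtime function.

```rust
/// Parses the request line of the loopback HTTP callback, e.g.
/// "GET /auth/callback?code=abc&state=xyz HTTP/1.1", validating the
/// configurable redirect path before extracting `code` and `state`.
fn parse_callback(request_line: &str, expected_path: &str) -> Option<(String, String)> {
    // The request target is the second whitespace-separated token.
    let target = request_line.split_whitespace().nth(1)?;
    let (path, query) = target.split_once('?')?;
    if path != expected_path {
        return None; // wrong path: ignore the request
    }
    let mut code = None;
    let mut state = None;
    for pair in query.split('&') {
        match pair.split_once('=') {
            Some(("code", v)) => code = Some(v.to_string()),
            Some(("state", v)) => state = Some(v.to_string()),
            _ => {}
        }
    }
    Some((code?, state?))
}
```

The real server would also percent-decode the values and compare `state` against the value it generated before opening the browser, rejecting mismatches.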
Add originator=codex_cli_rs to the OpenAI authorization URL to match the official Codex CLI OAuth flow exactly. This parameter is required by auth.openai.com for proper request handling. Refs: openai/codex#7184, 7shi/codex-oauth
…d Moonshot routing

- Fix configured_provider_for_model to check saved OAuth tokens when env var is unset. Previously, users with modelProviders.openai in settings.json who authenticated via OAuth would get 'requires env var OPENAI_API_KEY' because the function only checked apiKey and apiKeyEnv.
- Fix check_model_auth_available to use metadata_for_model for prefix-aware auth checking (openai/, moonshot/, gpt-, etc.), ensuring each provider gets its correct env var and OAuth store key.
- Add moonshot/ prefix to metadata_for_model so detect_provider_kind routes Moonshot models correctly instead of falling through to the Anthropic default.
- Add OpenAiCompatConfig::moonshot() with DEFAULT_MOONSHOT_BASE_URL for native Moonshot API endpoint support.
- Update ProviderClient construction to use metadata_for_model for prefix-aware config selection, enabling OAuth fallback for Moonshot too.
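The prefix-aware routing described above can be sketched as a simple match on model-id prefixes, with unknown names falling through to the Anthropic default. The `ProviderKind` variants and prefix set here are illustrative; the real `metadata_for_model` handles more providers and also carries env var and OAuth store keys.

```rust
#[derive(Debug, PartialEq)]
enum ProviderKind {
    OpenAi,
    Moonshot,
    Anthropic,
}

/// Illustrative prefix-aware routing: model ids carry either an explicit
/// "provider/" prefix or a well-known model-name prefix; anything else
/// falls through to the Anthropic default.
fn detect_provider_kind(model: &str) -> ProviderKind {
    if model.starts_with("openai/") || model.starts_with("gpt-") {
        ProviderKind::OpenAi
    } else if model.starts_with("moonshot/") {
        ProviderKind::Moonshot
    } else {
        ProviderKind::Anthropic
    }
}
```

Before the fix in this commit, a model like `moonshot/kimi-k2-thinking` had no matching arm and hit the Anthropic default, which is exactly the misrouting described above.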
- check_model_auth_available: for ANY provider/model prefix, check OAuth tokens under that provider name AND .claw.json config. Previously only hardcoded openai/moonshot OAuth checks existed.
- run_auth_command: now recognizes custom providers from .claw.json, allowing it to work for any configured provider.
- run_provider_welcome: shows custom providers from .claw.json in the interactive picker, so users can authenticate with them without memorizing env var names.
- Template/built-in OAuth flows were already generic; these changes make the surrounding auth gates and CLI commands match that generality.
OpenAI's OAuth (auth.openai.com, Codex CLI client_id) produces ChatGPT/WHAM-backend tokens, NOT Platform API tokens. These tokens authenticate your ChatGPT account, not your OpenAI Platform account. They work with chatgpt.com/backend-api but return 401 Unauthorized (Missing scopes: model.request) on api.openai.com/v1. The OpenAI Platform API accepts API keys (sk-...) only.

Changes:
- Remove OAuth config from BUILTIN_PROVIDERS[openai]
- Skip OAuth fallback in check_model_auth_available for the openai prefix and bare names
- Skip OAuth fallback in configured_provider_for_model for the openai provider
- Use from_env (not from_env_or_oauth) for OpenAI in ProviderClient
- Update tests: assert openai has no OAuth; test moonshot OAuth instead
Only show built-in and additional (template) providers in the interactive welcome/auth flow. Custom providers from .claw.json are no longer listed.
OpenAI OAuth tokens (from auth.openai.com) are ChatGPT/WHAM-backend tokens, NOT Platform API tokens. This implements full support for using them:
- Add id_token to OAuthTokenSet for chatgpt_account_id extraction
- Add extract_chatgpt_account_id() from the JWT payload (no signature verification)
- Add refresh_oauth_token() async function for token refresh
- Create WhamClient with Responses API streaming support
  - Endpoint: chatgpt.com/backend-api/wham/responses
  - SSE parser for response.output_text.delta events
  - ChatGPT-Account-Id header from JWT
- Automatic token refresh before requests if <60s until expiry
- Route OpenAI OAuth through WhamClient in ProviderClient and CLI
- Override base_url to the WHAM backend when OpenAI OAuth is used in config

Token refresh: POST to auth.openai.com/oauth/token with the refresh_token grant. New tokens are persisted back to ~/.claw/credentials.json automatically.
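The `chatgpt_account_id` extraction above amounts to base64url-decoding the JWT's middle segment and reading one claim, with no signature check. The sketch below hand-rolls base64url purely to stay self-contained (a real implementation would use a base64 crate and a JSON parser); the naive string scan for the claim is an illustrative shortcut, not the actual runtime code.

```rust
const B64URL: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

/// Unpadded base64url encoding (RFC 4648 §5), as used by JWT segments.
fn b64url_encode(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        let idx = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        for i in 0..chunk.len() + 1 {
            out.push(B64URL[idx[i] as usize] as char);
        }
    }
    out
}

/// Unpadded base64url decoding; returns None on any non-alphabet byte.
fn b64url_decode(s: &str) -> Option<Vec<u8>> {
    let (mut buf, mut bits, mut out) = (0u32, 0u32, Vec::new());
    for c in s.bytes() {
        let v = B64URL.iter().position(|&b| b == c)? as u32;
        buf = (buf << 6) | v;
        bits += 6;
        if bits >= 8 {
            bits -= 8;
            out.push((buf >> bits) as u8);
        }
    }
    Some(out)
}

/// Pulls `chatgpt_account_id` out of the JWT's payload segment with a
/// naive string scan — deliberately no signature verification.
fn extract_chatgpt_account_id(token: &str) -> Option<String> {
    let payload = token.split('.').nth(1)?;
    let json = String::from_utf8(b64url_decode(payload)?).ok()?;
    let key = "\"chatgpt_account_id\":\"";
    let start = json.find(key)? + key.len();
    let rest = &json[start..];
    Some(rest[..rest.find('"')?].to_string())
}
```

Skipping signature verification is acceptable here because the token is only being inspected locally to pick a request header; the backend still validates it.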
- Add auto token refresh for custom providers using OAuth
- Extend ConfiguredModelProvider with oauth_token_set, token_url, client_id
- Look up LOGIN_PROVIDER_TEMPLATES for refresh config when a custom provider falls back to saved OAuth tokens
- Add ProviderClient::from_openai_compatible_oauth() constructor
- OpenAiCompatClient now supports from_oauth_token_set with auto-refresh
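The auto-refresh gate behind these changes is a small clock comparison: refresh whenever fewer than 60 seconds remain before the stored expiry (the threshold described earlier for OpenAI OAuth). The function name and the `expires_at`-as-unix-seconds representation are assumptions for illustration.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Returns true when a token expiring at `expires_at` (unix seconds)
/// should be refreshed before use — i.e. fewer than 60s remain.
fn needs_refresh(expires_at: u64, now: SystemTime) -> bool {
    let now_secs = now
        .duration_since(UNIX_EPOCH)
        .unwrap_or(Duration::ZERO)
        .as_secs();
    expires_at <= now_secs + 60
}
```

Taking `now` as a parameter rather than calling `SystemTime::now()` inside keeps the check deterministic and easy to test.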
- Add `claw model [MODEL]` CLI command
- Lists all available models (built-in + templates + custom providers)
- Sets default model in ~/.claw/settings.json when given a model name
- Accepts both `model` and `models` aliases
- Fix test isolation for users with workspace-write permission config
- Set RUSTY_CLAUDE_PERMISSION_MODE=danger-full-access in 10 tests
- Prevents user ~/.claw/settings.json from leaking into test assertions
Kimi For Coding API (api.kimi.com/coding/v1) restricts access to whitelisted coding agents (Claude Code, Kilo Code, etc.). It validates the User-Agent header case-sensitively — only lowercase variants like `claude-code/1.0` are accepted; `Claude-Code/1.0` is rejected.

Changes:
- Add optional `user_agent` field to OpenAiCompatClient
- Add `with_user_agent()` builder method
- Add `ProviderClient::with_user_agent()` to propagate the header
- In AnthropicRuntimeClient::new, detect api.kimi.com base URLs and automatically set User-Agent to `claude-code/0.1.0`
- Restore Kimi model names (k2p5, k2p6, kimi-k2-thinking) in config; all are accepted by the API

Also fixed the minimax-coding-plan config: changed from anthropic-compatible (wrong) to openai-compatible with baseUrl https://api.minimax.io/v1.

Tested live:
- zai-coding-plan/glm-5.1 ✅
- minimax-coding-plan/MiniMax-M2.7 ✅
- kimi-for-coding/k2p6 ✅
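The base-URL detection above reduces to a small precedence rule: an explicitly configured User-Agent wins, otherwise Kimi endpoints get the compliant lowercase default, and everyone else sends no custom header. `user_agent_for` is a hypothetical helper illustrating that rule, not the actual method on AnthropicRuntimeClient.

```rust
/// Sketch of the User-Agent selection: Kimi's coding endpoint whitelists
/// lowercase agent strings, so force a compliant default there unless the
/// caller set one explicitly via `with_user_agent()`.
fn user_agent_for(base_url: &str, explicit: Option<&str>) -> Option<String> {
    if let Some(ua) = explicit {
        return Some(ua.to_string()); // explicit configuration always wins
    }
    if base_url.contains("api.kimi.com") {
        return Some("claude-code/0.1.0".to_string()); // lowercase, per the whitelist
    }
    None // other providers: no custom User-Agent
}
```

Because the whitelist check is case-sensitive, the default must stay lowercase; `Claude-Code/0.1.0` would be rejected even though it names the same agent.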
Review Checklist — Provider Profiles + Auth + OAuth
- User-facing behavior added
- Config keys / CLI flags changed
- Migration or compatibility notes
- Tests run locally
- Known risks / non-goals
This PR combines the provider profile system, interactive authentication, and OAuth support into a single cohesive feature.
Provider Profiles (replaces #2933)
Welcome & Auth (replaces #2975)
OAuth (replaces #2976)
Infrastructure:
Testing: