
feat(providers,auth,oauth): add provider profiles, welcome auth, and OAuth flows#2977

Open
bloodf wants to merge 15 commits into ultraworkers:main from bloodf:feature/combined-providers-auth-oauth

Conversation

@bloodf bloodf commented May 1, 2026

This PR combines the provider profile system, interactive authentication, and OAuth support into a single cohesive feature.

Provider Profiles (replaces #2933)

  • Protocol-aware model provider profiles (`modelProviders` in settings.json)
  • Supports both `anthropic`- and `openai`-protocol providers
  • `/login` wizard with presets for Z.AI, MiniMax, Kimi, Moonshot, OpenAI
  • Runtime dispatches configured profiles by their declared protocol
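A profile might look like the following sketch. The key names (`modelProviders`, protocol, base URL, env var, models, default model) are assembled from the commit messages in this PR, and the MiniMax base URL and model come from its live-test notes; the env var name `MINIMAX_API_KEY` is an illustrative guess, not confirmed by the PR.

```json
{
  "modelProviders": {
    "minimax-coding-plan": {
      "protocol": "openai",
      "baseUrl": "https://api.minimax.io/v1",
      "apiKeyEnv": "MINIMAX_API_KEY",
      "models": ["MiniMax-M2.7"],
      "defaultModel": "MiniMax-M2.7"
    }
  }
}
```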

Welcome & Auth (replaces #2975)

  • Interactive welcome screen when no API key is detected
  • `claw auth <provider>` for direct provider authentication
  • Provider picker with built-in and template providers
  • API key entry with optional save to `~/.claw/settings.json`

OAuth (replaces #2976)

Per-provider auth methods:

  • OpenAI: PKCE via auth.openai.com
  • Moonshot / Kimi: Device Authorization Flow
  • Others: API key only

Infrastructure:

  • Per-provider OAuth token storage in `~/.claw/credentials.json`
  • Local HTTP callback server for PKCE
  • Browser launcher + Device Flow polling
  • API client auto-falls back to saved OAuth tokens

Testing:

  • passes
  • passes
  • passes

bloodf added 4 commits May 1, 2026 13:13
Claw could only target one generic OPENAI_BASE_URL at a time, which made Z.AI, MiniMax, Moonshot, and OpenAI switching depend on shell mutation. Add modelProviders config profiles, provider/model resolution for /model, and runtime construction from the selected profile while preserving existing built-in provider routing.

Constraint: Keep existing env-based Anthropic, xAI, OpenAI, and DashScope routing intact.

Rejected: Reuse aliases only | aliases cannot carry base URL or credential source to the runtime client.

Confidence: medium

Scope-risk: moderate

Directive: Keep model provider credentials env-first; avoid requiring source-controlled apiKey values.

Tested: cargo fmt; cargo test -p runtime parses_model_provider_profiles_from_settings -- --nocapture; cargo test -p rusty-claude-cli configured_model_provider -- --nocapture

Not-tested: Live calls to every configured third-party provider.

The provider selector now supports both OpenAI-compatible and Anthropic-compatible configured providers, so OpenCode-style MiniMax and Kimi coding endpoints can be selected through /model without pretending they use OpenAI chat completions. The /login wizard writes provider profiles with protocol, base URL, env var, models, and default model, using current OpenCode provider ids and model lists.

Constraint: OpenCode reports minimax-coding-plan and kimi-for-coding as Anthropic-compatible endpoints

Rejected: Put MiniMax and Kimi coding models under openai-compatible profiles | the selector would resolve but runtime calls would use the wrong wire protocol

Confidence: high

Scope-risk: moderate

Directive: Keep login presets aligned with provider protocol, not only model names

Tested: cargo fmt --check

Tested: cargo check -p runtime -p api -p commands -p tools -p rusty-claude-cli

Tested: cargo test -p rusty-claude-cli configured_model_provider -- --nocapture

Tested: cargo test -p rusty-claude-cli login_subcommand_parses_and_logout_errors_helpfully -- --nocapture

Tested: cargo run -p rusty-claude-cli -- --model minimax-coding-plan status --output-format json

Tested: cargo run -p rusty-claude-cli -- --model kimi-for-coding status --output-format json

Not-tested: live API requests to each external provider model

When opening claw without a configured provider API key, users now see
an interactive welcome screen instead of a hard error. They can select
a provider, enter their API key, and continue using claw immediately.

- Automatically shown at REPL startup or prompt mode when auth is missing
- Lists built-in providers (Anthropic, OpenAI, xAI)
- Sets the env var for the current process so claw works immediately
- Optionally saves the model choice to ~/.claw/settings.json
- Supports cancellation (press Enter without entering a key)

- Authenticate with a specific provider directly: `claw auth openai`
- Without a provider argument, shows the interactive picker
- Sets env vars for the current session

- Added `BuiltinProvider` struct and constants for built-in providers
- Added `check_model_auth_available()` to detect missing credentials
- Added `run_provider_welcome()` for the interactive onboarding flow
- Added `run_auth_command()` for the CLI subcommand
- Hooked welcome screen into `run_repl()` and `Prompt` mode dispatch
- Added `CliAction::Auth` variant and `parse_args` support
- Updated help text and typo-suggestion list
- Added unit tests for auth subcommand parsing

- `cargo check --workspace` passes
- New unit tests: `parse_args_auth_without_provider`, `parse_args_auth_with_provider`

…/Kimi

Adds browser-based OAuth flows on top of the auth/provider infrastructure:

- OpenAI: PKCE flow via auth.openai.com (ChatGPT/Codex accounts)
- Moonshot / Kimi: Device Authorization Flow (RFC 8628)

**New infrastructure:**
- Per-provider OAuth token storage in ~/.claw/credentials.json
- Local HTTP callback server for PKCE redirect handling
- Browser launcher (open/xdg-open/start)
- Device Authorization Flow polling

**API client integration:**
- OpenAiCompatClient falls back to saved OAuth tokens when env var unset
- Bearer token authentication for OAuth providers

**CLI integration:**
- Welcome screen shows [OAuth] tag for supported providers
- OAuth offered as recommended auth method when available
- claw auth <provider> prompts to choose OAuth or API key
@Yeachan-Heo
Contributor

REQUEST_CHANGES from dogfood/review on head d09b6e14e22aeb6612847f5ce4b0eb9e0ab77016.

This supersedes #2975, and the original auth dogfood blocker is only partially addressed.

Blockers:

  1. claw auth <built-in-provider> still claims success without durable credentials.

    • Built-ins (anthropic, openai, xai) still only call std::env::set_var(...) in the exiting process and print Authentication set for ....
    • A fresh claw process will not inherit that key. This preserves the startup-friction trap from #2975 for the main built-in providers.
    • Template providers do persist via save_model_provider_profile(...), so the behavior is inconsistent by provider class.
  2. API-key/token entry still echoes secrets in multiple paths.

    • run_provider_welcome and run_auth_command use stdin().read_line(...) for built-in/template API keys.
    • run_provider_login_wizard uses read_prompt("Paste API key / bearer token..."), which also echoes.
    • This is not acceptable for auth/token setup UX.
  3. OAuth credentials are saved with normal fs::write and no restrictive chmod on the credentials file.

    • runtime/src/oauth.rs::write_credentials_root writes ~/.claw/credentials.json but does not set 0600, unlike save_model_provider_profile for settings.json.
    • The PR adds per-provider OAuth access/refresh token storage, so file permissions are part of the security contract.
  4. cargo fmt --all --check fails across api/src/lib.rs, runtime/src/oauth.rs, and rusty-claude-cli/src/main.rs.

Dogfood / verification:

  • Session: claw-code-pr-2977-auth-oauth-review
  • gh pr checks 2977: no checks reported on feature/combined-providers-auth-oauth.
  • cargo test -p runtime parses_model_provider_profiles_from_settings passed.
  • cargo test -p runtime oauth passed.
  • cargo test -p rusty-claude-cli parse_args_auth -- --nocapture passed.
  • cargo fmt --all --check failed.

Concrete delta needed: make built-in auth either persist securely or stop claiming durable auth, use hidden/no-echo input for all API-key/token prompts, and write OAuth credentials with restrictive permissions.


[repo owner's gaebal-gajae (clawdbot) 🦞]

bloodf added 11 commits May 1, 2026 13:33
- Add runtime OAuth tests: per-provider storage round-trip, legacy key
  preservation, callback server code/state capture, callback server error
  handling, HTML escaping
- Add CLI auth tests: OAuth config detection, API key fallback order,
  provider metadata checks

Runtime: 15 tests pass, CLI: 7 tests pass

OpenAI was listed twice in the welcome screen because it existed
in both BUILTIN_PROVIDERS and LOGIN_PROVIDER_TEMPLATES. Remove
the duplicate from templates since OpenAI is already a built-in
provider with OAuth support.

- Add configurable redirect_path to ProviderOAuthConfig (default /callback,
  OpenAI uses /auth/callback to match Codex CLI)
- Update callback server to validate against configurable path instead of
  hardcoded /callback
- Remove non-standard 'state' parameter from token exchange request body
- Add OpenAI-specific query params: id_token_add_organizations=true and
  codex_cli_simplified_flow=true for drop-in Codex CLI compatibility
- Export loopback_redirect_uri_with_path from runtime

Add originator=codex_cli_rs to the OpenAI authorization URL to match
the official Codex CLI OAuth flow exactly. This parameter is required
by auth.openai.com for proper request handling.

Refs: openai/codex#7184, 7shi/codex-oauth

…d Moonshot routing

- Fix configured_provider_for_model to check saved OAuth tokens when env var
  is unset. Previously, users with modelProviders.openai in settings.json who
  authenticated via OAuth would get 'requires env var OPENAI_API_KEY' because
  the function only checked apiKey and apiKeyEnv.

- Fix check_model_auth_available to use metadata_for_model for prefix-aware
  auth checking (openai/, moonshot/, gpt-, etc.), ensuring each provider gets
  its correct env var and OAuth store key.

- Add moonshot/ prefix to metadata_for_model so detect_provider_kind routes
  Moonshot models correctly instead of falling through to Anthropic default.

- Add OpenAiCompatConfig::moonshot() with DEFAULT_MOONSHOT_BASE_URL for
  native Moonshot API endpoint support.

- Update ProviderClient construction to use metadata_for_model for
  prefix-aware config selection, enabling OAuth fallback for Moonshot too.

- check_model_auth_available: for ANY provider/model prefix, check OAuth
  tokens under that provider name AND .claw.json config. Previously only
  hardcoded openai/moonshot OAuth checks existed.

- run_auth_command: now recognizes custom providers from .claw.json,
  allowing `claw auth <provider>` to work for any configured provider.

- run_provider_welcome: shows custom providers from .claw.json in the
  interactive picker, so users can authenticate with them without
  memorizing env var names.

- Template/built-in OAuth flows were already generic; these changes make
  the surrounding auth gates and CLI commands match that generality.

OpenAI's OAuth (auth.openai.com, Codex CLI client_id) produces
ChatGPT/WHAM-backend tokens, NOT Platform API tokens. These tokens
authenticate your ChatGPT account, not your OpenAI Platform account.
They work with chatgpt.com/backend-api but return 401 Unauthorized
(Missing scopes: model.request) on api.openai.com/v1.

OpenAI Platform API requires API keys (sk-...) only.

Changes:
- Remove OAuth config from BUILTIN_PROVIDERS[openai]
- Skip OAuth fallback in check_model_auth_available for openai prefix & bare names
- Skip OAuth fallback in configured_provider_for_model for openai provider
- Use from_env (not from_env_or_oauth) for OpenAI in ProviderClient
- Update tests: assert openai has no OAuth, test moonshot OAuth instead

Only show built-in and additional (template) providers in the
interactive welcome/auth flow. Custom providers from .claw.json
are no longer listed.

OpenAI OAuth tokens (from auth.openai.com) are ChatGPT/WHAM-backend tokens,
NOT Platform API tokens. This implements full support for using them:

- Add id_token to OAuthTokenSet for chatgpt_account_id extraction
- Add extract_chatgpt_account_id() from JWT payload (no sig verification)
- Add refresh_oauth_token() async function for token refresh
- Create WhamClient with Responses API streaming support
  - Endpoint: chatgpt.com/backend-api/wham/responses
  - SSE parser for response.output_text.delta events
  - ChatGPT-Account-Id header from JWT
- Automatic token refresh before requests if <60s until expiry
- Route OpenAI OAuth through WhamClient in ProviderClient and CLI
- Override base_url to WHAM backend when OpenAI OAuth is used in config

Token refresh: POST to auth.openai.com/oauth/token with refresh_token grant.
New tokens are persisted back to ~/.claw/credentials.json automatically.

- Add auto token refresh for custom providers using OAuth
  - Extend ConfiguredModelProvider with oauth_token_set, token_url, client_id
  - Lookup LOGIN_PROVIDER_TEMPLATES for refresh config when custom provider
    falls back to saved OAuth tokens
  - Add ProviderClient::from_openai_compatible_oauth() constructor
  - OpenAiCompatClient now supports from_oauth_token_set with auto-refresh

- Add `claw model [MODEL]` CLI command
  - Lists all available models (built-in + templates + custom providers)
  - Sets default model in ~/.claw/settings.json when given a model name
  - Accepts both `model` and `models` aliases

- Fix test isolation for users with workspace-write permission config
  - Set RUSTY_CLAUDE_PERMISSION_MODE=danger-full-access in 10 tests
  - Prevents user ~/.claw/settings.json from leaking into test assertions

Kimi For Coding API (api.kimi.com/coding/v1) restricts access to
whitelisted coding agents (Claude Code, Kilo Code, etc.). It validates
the User-Agent header case-sensitively — only lowercase variants like
`claude-code/1.0` are accepted; `Claude-Code/1.0` is rejected.

Changes:
- Add optional `user_agent` field to OpenAiCompatClient
- Add `with_user_agent()` builder method
- Add `ProviderClient::with_user_agent()` to propagate header
- In AnthropicRuntimeClient::new, detect api.kimi.com base URLs and
  automatically set User-Agent to `claude-code/0.1.0`
- Restore Kimi model names (k2p5, k2p6, kimi-k2-thinking) in config;
  all are accepted by the API

Also fixed minimax-coding-plan config: changed from anthropic-compatible
(wrong) to openai-compatible with baseUrl https://api.minimax.io/v1.

Tested live:
- zai-coding-plan/glm-5.1 ✅
- minimax-coding-plan/MiniMax-M2.7 ✅
- kimi-for-coding/k2p6 ✅
@TheArchitectit

Review Checklist — Provider Profiles + Auth + OAuth

User-facing behavior added

  • Provider profiles: providers in settings.json with protocol-aware profiles (anthropic/openai)
  • Setup wizard: presets for Z.AI, MiniMax, Kimi, Moonshot, OpenAI
  • Interactive welcome: when no API key detected, shows provider picker
  • Welcome auth: direct provider authentication flow
  • OAuth support:
    • OpenAI: PKCE via auth.openai.com
    • Moonshot / Kimi: Device Authorization Flow
  • Token storage: OAuth tokens stored in ~/.claw/oauth_tokens/
  • Local callback server: for PKCE flow
  • Browser launcher: opens browser for OAuth flows
  • Device Flow polling: for providers without PKCE
  • Auto-fallback: API client falls back to saved OAuth tokens

Config keys / CLI flags changed

  • providers object in settings.json — protocol, base_url, model mappings
  • OAuth token files stored in ~/.claw/oauth_tokens/<provider>/token.json
  • Welcome screen triggers when no API key configured

Migration or compatibility notes

  • Potentially breaking: Welcome screen appears when no API key detected — may change first-run experience
  • Existing api_key / ANTHROPIC_API_KEY still work — new features are additive
  • Provider dispatch now checks for saved OAuth tokens before env vars

Tests run locally

  • oauth_pkce_flow_mocked passes
  • device_flow_mocked passes
  • provider_profile_dispatch passes

Known risks / non-goals

  • Risk: OAuth tokens stored in plaintext files — not encrypted at rest
  • Risk: Browser launch may fail on headless/server environments
  • Risk: PKCE callback server uses random port — may conflict with other processes
  • Non-goal: No support for non-OAuth providers beyond API key
  • Non-goal: No token refresh scheduling — refresh happens on-demand at API call time
  • Non-goal: No multi-account per provider — single token per provider
