
fix(auto-monitor-setup): update new config #166

Open
galzilber wants to merge 1 commit into main from gz/new-docs-for-auto-monitor

Conversation

@galzilber
Contributor

@galzilber galzilber commented Apr 16, 2026

Summary by CodeRabbit

  • New Features

    • Added configurable execution scope for each evaluator (session, trace, or span).
    • Introduced new evaluator configuration option for greater flexibility.
  • API Changes

    • Evaluators field is now optional (previously required).
    • New evaluator configuration method takes precedence when both options are provided.
    • Enhanced validation with more specific error messages for configuration issues.
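The precedence rule above (the new `evaluator_configs` field wins over the legacy `evaluators` list when both are provided) can be sketched as a small normalization helper. This is illustrative only: the helper name and the default scope of `span` are assumptions, while the `slug` and `scope` field names come from the schema examples later in this review.

```python
# Sketch of the documented precedence rule: when both fields are present,
# `evaluator_configs` wins and `evaluators` is ignored. The default scope
# ("span") is an assumption for illustration, not a documented default.

def resolve_evaluators(payload: dict) -> list[dict]:
    """Normalize a setup payload to a list of {slug, scope} configs."""
    configs = payload.get("evaluator_configs")
    if configs:  # takes precedence over the legacy `evaluators` list
        return [{"slug": c["slug"], "scope": c.get("scope", "span")} for c in configs]
    # Fall back to plain slugs from the legacy field
    return [{"slug": s, "scope": "span"} for s in payload.get("evaluators", [])]

payload = {
    "external_id": "my-agent-monitor-1",
    "evaluators": ["hallucination"],  # ignored: evaluator_configs is present
    "evaluator_configs": [
        {"slug": "char-count", "scope": "session"},
        {"slug": "pii"},  # scope omitted, falls back to the assumed default
    ],
}
print(resolve_evaluators(payload))
```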

@mintlify

mintlify Bot commented Apr 16, 2026

Preview deployment for your docs. Learn more about Mintlify Previews.

| Project | Status | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| enrolla | 🟢 Ready | View Preview | Apr 16, 2026, 11:23 AM |


@coderabbitai
Contributor

coderabbitai Bot commented Apr 16, 2026

📝 Walkthrough


These changes introduce a new evaluator_configs field to auto-monitor setup APIs, allowing optional per-evaluator configuration with execution scope control. The evaluators field becomes optional, with evaluator_configs taking precedence when both are provided. Documentation examples and validation error handling have been updated accordingly.

Changes

Cohort: Evaluator Configuration Enhancement
Files: `api-reference/auto-monitor-setups/create-an-auto-monitor-setup.mdx`, `openapi.json`
Summary: Added an `evaluator_configs` field supporting per-evaluator scope control (`session` | `trace` | `span`). Made `evaluators` optional in the create and update endpoints; `evaluator_configs` takes precedence when both are provided. Updated request/response schemas, documentation examples, and validation error cases to reflect the new precedence rules and configuration options.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

Poem

🐰 Hop along, dear evaluators bright,
With scopes now fine-tuned, just right!
Configs take the stage, optional ways,
More flexible monitoring in all our days.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Title check (❓ Inconclusive): The title 'fix(auto-monitor-setup): update new config' is vague and does not clearly describe the main changes, which involve making evaluators optional and introducing a new evaluator_configs field with scope support. Resolution: revise the title to be more descriptive of the key changes, such as 'feat(auto-monitor-setup): make evaluators optional and add evaluator_configs' or similar.
✅ Passed checks (2 passed)
  • Description check (✅ Passed): Check skipped because CodeRabbit’s high-level summary is enabled.
  • Docstring coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage; check skipped.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
api-reference/auto-monitor-setups/create-an-auto-monitor-setup.mdx (1)

147-163: ⚠️ Potential issue | 🟡 Minor

Response example is internally inconsistent with the shown request examples.

Line 150 adds "scope": "trace" to answer-relevancy, but neither request example produces that exact evaluator set + scope combination. Please align the response sample with one concrete request example.

✏️ Minimal fix (align with first request example)
     {
       "evaluator_type": "answer-relevancy",
-      "scope": "trace",
       "input_schema": [
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api-reference/auto-monitor-setups/create-an-auto-monitor-setup.mdx` around
lines 147 - 163, The response example is inconsistent: the evaluator object with
"evaluator_type": "answer-relevancy" includes "scope": "trace" but the first
request example does not; update the response sample to match the first request
by removing the "scope": "trace" property (or setting it to the same scope used
in that first request) for the "answer-relevancy" evaluator so the returned
evaluators array matches the request example; locate the evaluator object with
"evaluator_type": "answer-relevancy" in the response sample and adjust its
"scope" accordingly.
openapi.json (1)

3038-3077: ⚠️ Potential issue | 🟠 Major

Enforce the create-time evaluator requirement in the schema (not only in description).

The description says create requires at least one of evaluators or evaluator_configs, but the schema only requires external_id. Also, evaluator_configs is missing minItems, so an empty array can pass schema validation.

🛠️ Proposed schema fix
       "request.CreateAutoMonitorSetupInput": {
         "description": "At least one of `evaluators` or `evaluator_configs` must be provided. If both are provided, `evaluator_configs` wins and `evaluators` is ignored.",
         "properties": {
           "evaluators": {
             "description": "List of evaluator slugs to run on matched spans. Use `evaluator_configs` instead when you need to set a per-evaluator scope. If both fields are provided, `evaluator_configs` takes precedence and this field is ignored.",
             "example": [
               "hallucination",
               "toxicity"
             ],
             "items": {
               "type": "string"
             },
             "minItems": 1,
             "type": "array"
           },
           "evaluator_configs": {
             "description": "List of per-evaluator configurations. Each entry declares an evaluator slug and an optional scope (`session`, `trace`, or `span`) controlling the granularity at which the evaluator runs. Takes precedence over `evaluators` when both are provided.",
             "example": [
               { "slug": "char-count", "scope": "session" },
               { "slug": "toxicity", "scope": "trace" },
               { "slug": "pii" }
             ],
             "items": {
               "$ref": "#/components/schemas/request.AutoMonitorEvaluatorConfig"
             },
+            "minItems": 1,
             "type": "array"
           },
           "external_id": {
             "description": "Unique identifier for the auto monitor setup, used to reference it in future requests",
             "example": "my-agent-monitor-1",
             "type": "string"
           },
           "selector": {
             "description": "Map of span attributes to filter which spans this monitor applies to.\nKeys are span attribute names (e.g. gen_ai.system, gen_ai.request.model) and\nvalues can be strings, numbers, or booleans.\nExample: {\"gen_ai.system\": \"openai\", \"gen_ai.request.model\": \"gpt-4o\", \"gen_ai.request.max_tokens\": 1000}",
             "type": "object"
           }
         },
+        "anyOf": [
+          { "required": ["evaluators"] },
+          { "required": ["evaluator_configs"] }
+        ],
         "required": [
           "external_id"
         ],
         "type": "object"
       },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@openapi.json` around lines 3038 - 3077, The schema currently only requires
external_id and allows empty evaluator arrays; add validation to require at
least one non-empty evaluator field by (1) adding "minItems": 1 to the
"evaluator_configs" property (matching the existing "evaluators" minItems) and
(2) add an "anyOf" at the object level alongside the existing "required":
["external_id"] that enforces either "evaluators" or "evaluator_configs" is
present (e.g. anyOf: [{required: ["evaluators"]}, {required:
["evaluator_configs"]}); because both arrays now have minItems: 1 this ensures
at create-time at least one non-empty list is provided while preserving the
description that evaluator_configs wins when both are present.
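The effect of the proposed `anyOf` + `minItems` additions can be checked with a tiny hand-rolled validator mimicking those two rules (a real JSON Schema validator such as the `jsonschema` package would apply the same semantics). The helper name below is ours, not part of any API; it covers only the keywords discussed in this comment.

```python
# Mimics what the proposed schema fix enforces: `external_id` is required,
# at least one of `evaluators` / `evaluator_configs` must be present (anyOf),
# and any present array must be non-empty (minItems: 1).

def passes_create_schema(doc: dict) -> bool:
    if not isinstance(doc.get("external_id"), str):
        return False  # "required": ["external_id"]
    # anyOf: at least one of the two evaluator fields must be present
    if "evaluators" not in doc and "evaluator_configs" not in doc:
        return False
    # minItems: 1 on whichever evaluator arrays are present
    for field in ("evaluators", "evaluator_configs"):
        if field in doc and len(doc[field]) < 1:
            return False
    return True

assert not passes_create_schema({"external_id": "m1"})                           # neither field
assert not passes_create_schema({"external_id": "m1", "evaluator_configs": []})  # empty array
assert passes_create_schema({"external_id": "m1", "evaluators": ["toxicity"]})
```

Without the `anyOf` and `minItems` additions, the first two payloads would pass schema validation, which is exactly the gap this comment flags.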

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cad6a20b-8107-4523-956e-56c7a4a8fe0d

📥 Commits

Reviewing files that changed from the base of the PR and between cdd8418 and 50dda39.

📒 Files selected for processing (2)
  • api-reference/auto-monitor-setups/create-an-auto-monitor-setup.mdx
  • openapi.json

