Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
api-reference/auto-monitor-setups/create-an-auto-monitor-setup.mdx (1)
147-163: ⚠️ Potential issue | 🟡 Minor

Response example is internally inconsistent with the shown request examples.

Line 150 adds `"scope": "trace"` to `answer-relevancy`, but neither request example produces that exact evaluator set + scope combination. Please align the response sample with one concrete request example.

✏️ Minimal fix (align with first request example):

```diff
 {
   "evaluator_type": "answer-relevancy",
-  "scope": "trace",
   "input_schema": [
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api-reference/auto-monitor-setups/create-an-auto-monitor-setup.mdx` around lines 147-163: the response example is inconsistent. The evaluator object with "evaluator_type": "answer-relevancy" includes "scope": "trace", but the first request example does not. Update the response sample to match the first request by removing the "scope": "trace" property (or setting it to the same scope used in that first request) for the "answer-relevancy" evaluator, so that the returned evaluators array matches the request example.

openapi.json (1)
3038-3077: ⚠️ Potential issue | 🟠 Major

Enforce the create-time evaluator requirement in the schema (not only in the description).

The description says create requires at least one of `evaluators` or `evaluator_configs`, but the schema only requires `external_id`. Also, `evaluator_configs` is missing `minItems`, so an empty array can pass schema validation.

🛠️ Proposed schema fix

```diff
 "request.CreateAutoMonitorSetupInput": {
   "description": "At least one of `evaluators` or `evaluator_configs` must be provided. If both are provided, `evaluator_configs` wins and `evaluators` is ignored.",
   "properties": {
     "evaluators": {
       "description": "List of evaluator slugs to run on matched spans. Use `evaluator_configs` instead when you need to set a per-evaluator scope. If both fields are provided, `evaluator_configs` takes precedence and this field is ignored.",
       "example": ["hallucination", "toxicity"],
       "items": { "type": "string" },
       "minItems": 1,
       "type": "array"
     },
     "evaluator_configs": {
       "description": "List of per-evaluator configurations. Each entry declares an evaluator slug and an optional scope (`session`, `trace`, or `span`) controlling the granularity at which the evaluator runs. Takes precedence over `evaluators` when both are provided.",
       "example": [
         { "slug": "char-count", "scope": "session" },
         { "slug": "toxicity", "scope": "trace" },
         { "slug": "pii" }
       ],
       "items": { "$ref": "#/components/schemas/request.AutoMonitorEvaluatorConfig" },
+      "minItems": 1,
       "type": "array"
     },
     "external_id": {
       "description": "Unique identifier for the auto monitor setup, used to reference it in future requests",
       "example": "my-agent-monitor-1",
       "type": "string"
     },
     "selector": {
       "description": "Map of span attributes to filter which spans this monitor applies to.\nKeys are span attribute names (e.g. gen_ai.system, gen_ai.request.model) and\nvalues can be strings, numbers, or booleans.\nExample: {\"gen_ai.system\": \"openai\", \"gen_ai.request.model\": \"gpt-4o\", \"gen_ai.request.max_tokens\": 1000}",
       "type": "object"
     }
   },
+  "anyOf": [
+    { "required": ["evaluators"] },
+    { "required": ["evaluator_configs"] }
+  ],
   "required": ["external_id"],
   "type": "object"
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openapi.json` around lines 3038 - 3077, The schema currently only requires external_id and allows empty evaluator arrays; add validation to require at least one non-empty evaluator field by (1) adding "minItems": 1 to the "evaluator_configs" property (matching the existing "evaluators" minItems) and (2) add an "anyOf" at the object level alongside the existing "required": ["external_id"] that enforces either "evaluators" or "evaluator_configs" is present (e.g. anyOf: [{required: ["evaluators"]}, {required: ["evaluator_configs"]}); because both arrays now have minItems: 1 this ensures at create-time at least one non-empty list is provided while preserving the description that evaluator_configs wins when both are present.
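The proposed `anyOf` + `minItems: 1` rule, and the stated precedence of `evaluator_configs` over `evaluators`, can be mirrored in a small Python sketch. This is a hypothetical illustration, not code from the PR: the function names `validate_create_input` and `effective_evaluators` are invented for the example, and the checks simply restate the schema semantics described above.

```python
def validate_create_input(payload: dict) -> list:
    """Return a list of validation errors; an empty list means the payload is valid."""
    errors = []
    # "required": ["external_id"] stays as before
    if "external_id" not in payload:
        errors.append("external_id is required")
    evaluators = payload.get("evaluators")
    configs = payload.get("evaluator_configs")
    # anyOf: at least one of the two fields must be present at create time
    if evaluators is None and configs is None:
        errors.append("provide at least one of evaluators or evaluator_configs")
    # minItems: 1 on each array, so an empty list no longer passes
    if evaluators is not None and len(evaluators) == 0:
        errors.append("evaluators must contain at least 1 item")
    if configs is not None and len(configs) == 0:
        errors.append("evaluator_configs must contain at least 1 item")
    return errors


def effective_evaluators(payload: dict) -> list:
    """evaluator_configs takes precedence; evaluators is ignored when both are set."""
    if payload.get("evaluator_configs"):
        return [c["slug"] for c in payload["evaluator_configs"]]
    return list(payload.get("evaluators", []))


ok = {"external_id": "m1", "evaluator_configs": [{"slug": "toxicity", "scope": "trace"}]}
print(validate_create_input(ok))                                      # []
print(effective_evaluators(ok))                                       # ['toxicity']
print(validate_create_input({"external_id": "m1"}))                   # anyOf violated
print(validate_create_input({"external_id": "m1", "evaluator_configs": []}))  # minItems violated
```

Without the `anyOf` and `minItems` additions, the last two payloads would pass JSON Schema validation despite contradicting the description, which is exactly the gap the review comment points out.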
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cad6a20b-8107-4523-956e-56c7a4a8fe0d
📒 Files selected for processing (2)
api-reference/auto-monitor-setups/create-an-auto-monitor-setup.mdx
openapi.json
Summary by CodeRabbit
New Features
API Changes