
MON-4029: Add KubeStateMetricsConfig to ClusterMonitoring API#2778

Open
danielmellado wants to merge 1 commit into openshift:master from danielmellado:mon_4029_kube_state_metrics

Conversation

@danielmellado
Contributor

Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado dmellado@redhat.com

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 24, 2026
@openshift-ci-robot
Copy link
Copy Markdown

openshift-ci-robot commented Mar 24, 2026

@danielmellado: This pull request references MON-4029 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado dmellado@redhat.com

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci Bot commented Mar 24, 2026

Hello @danielmellado! Some important instructions when contributing to openshift/api:
API design plays an important part in the user experience of OpenShift and as such API PRs are subject to a high level of scrutiny to ensure they follow our best practices. If you haven't already done so, please review the OpenShift API Conventions and ensure that your proposed changes are compliant. Following these conventions will help expedite the api review process for your PR.

@openshift-ci openshift-ci Bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Mar 24, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Mar 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign deads2k for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@danielmellado
Contributor Author

@everettraven this PR continues #2461, rebased and with your comments addressed. Thanks!

@coderabbitai

coderabbitai Bot commented Mar 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Added an optional kubeStateMetricsConfig field to ClusterMonitoringSpec and new exported types: KubeStateMetricsConfig, KubeStateMetricsResourceLabels, KubeStateMetricsResourceName, and KubeStateMetricsLabelName plus related constants. Generated deepcopy and Swagger doc methods for the new types. Extended the ClusterMonitoring CRD schema with validation for nodeSelector, resources, tolerations, topologySpreadConstraints, and additionalResourceLabels. Added extensive YAML tests covering valid and invalid spec.kubeStateMetricsConfig cases.

🚥 Pre-merge checks | ✅ 11 | ❌ 1

❌ Failed checks (1 warning)

  • Test Structure And Quality: ⚠️ Warning. The Ginkgo test framework in tests/generator.go lacks assertion messages on cluster operations: lines 137, 178, 235, 270, 298, 332 have Expect calls without descriptive failure messages. Resolution: add meaningful failure messages to Expect assertions on k8sClient operations in generator.go, e.g. change Expect(k8sClient.Get(...)) to Expect(k8sClient.Get(...), "description").

✅ Passed checks (11 passed)

  • Title check: ✅ Passed. The title accurately and specifically summarizes the main change, adding KubeStateMetricsConfig to the ClusterMonitoring API, which is the core focus of all file changes.
  • Description check: ✅ Passed. The description is directly related to the changeset, accurately describing the addition of the KubeStateMetricsConfig struct and mentioning key features (node selectors, resources, tolerations, topology spread constraints) plus comprehensive tests.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.
  • Linked Issues check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Stable And Deterministic Test Names: ✅ Passed. All 165 test names in the YAML test file are stable and deterministic. No dynamic content (pod names, timestamps, UUIDs, IPs) found. All names use clear descriptive verbs and are not test-run specific.
  • Microshift Test Compatibility: ✅ Passed. No Ginkgo e2e tests are added in this PR. Changes are limited to API type definitions, generated code, YAML validation tests, and CRD manifests. MicroShift compatibility check not applicable.
  • Single Node Openshift (SNO) Test Compatibility: ✅ Passed. No multi-node SNO incompatibilities found. The PR adds YAML-based CRD schema validation tests that run against a single-node envtest environment. Tests validate field constraints only.
  • Topology-Aware Scheduling Compatibility: ✅ Passed. The PR adds an optional KubeStateMetricsConfig to the openshift/api CRD. All scheduling fields are optional with platform defaults. No hardcoded topology-unaware constraints introduced.
  • OTE Binary Stdout Contract: ✅ Passed. No OTE Stdout Contract violations detected. Changes limited to type definitions, auto-generated code, CRD manifests, and YAML test cases. No stdout writes, klog, or init() functions found.
  • IPv6 And Disconnected Network Test Compatibility: ✅ Passed. The PR adds declarative CRD validation tests (YAML), not Ginkgo e2e tests. No IPv4 assumptions, hardcoded IPs, or external connectivity requirements detected.


Tip

💬 Introducing Slack Agent: The best way for teams to turn conversations into code.

Slack Agent is built on CodeRabbit's deep understanding of your code, so your team can collaborate across the entire SDLC without losing context.

  • Generate code and open pull requests
  • Plan features and break down work
  • Investigate incidents and troubleshoot customer tickets together
  • Automate recurring tasks and respond to alerts with triggers
  • Summarize progress and report instantly

Built for teams:

  • Shared memory across your entire org—no repeating context
  • Per-thread sandboxes to safely plan and execute work
  • Governance built-in—scoped access, auditability, and budget controls

One agent for your entire SDLC. Right inside Slack.

👉 Get started


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 1117-1234: The CRD currently documents invariants but doesn't
enforce them; update the schema for the PodTopologySpread-like object to:
restrict whenUnsatisfiable to an enum of allowed values (e.g.,
"DoNotSchedule","ScheduleAnyway") on the whenUnsatisfiable property, add
minimum: 1 (and disallow 0) on maxSkew (format int32) and minimum: 1 on
minDomains (or use minimum: 1 when present), and add a validation rule that
requires labelSelector when matchLabelKeys is set (make matchLabelKeys mutually
exclusive with labelSelector and/or add a x-kubernetes-requirements or OpenAPI
dependency that enforces labelSelector presence when matchLabelKeys exists).
Target the properties named whenUnsatisfiable, maxSkew, minDomains,
matchLabelKeys and labelSelector in the CRD fragment to implement these
constraints.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: a91c322b-92fd-43a0-a96f-b2b4da86aa91

📥 Commits

Reviewing files that changed from the base of the PR and between 324a1bc and 02919b5.

⛔ Files ignored due to path filters (3)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**
📒 Files selected for processing (5)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • config/v1alpha1/zz_generated.deepcopy.go
  • config/v1alpha1/zz_generated.swagger_doc_generated.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

Comment on lines +1117 to +1234
  matchLabelKeys:
    description: |-
      MatchLabelKeys is a set of pod label keys to select the pods over which
      spreading will be calculated. The keys are used to lookup values from the
      incoming pod labels, those key-value labels are ANDed with labelSelector
      to select the group of existing pods over which spreading will be calculated
      for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.
      MatchLabelKeys cannot be set when LabelSelector isn't set.
      Keys that don't exist in the incoming pod labels will
      be ignored. A null or empty list means only match against labelSelector.

      This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).
    items:
      type: string
    type: array
    x-kubernetes-list-type: atomic
  maxSkew:
    description: |-
      MaxSkew describes the degree to which pods may be unevenly distributed.
      When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
      between the number of matching pods in the target topology and the global minimum.
      The global minimum is the minimum number of matching pods in an eligible domain
      or zero if the number of eligible domains is less than MinDomains.
      For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
      labelSelector spread as 2/2/1:
      In this case, the global minimum is 1.
      | zone1 | zone2 | zone3 |
      |  P P  |  P P  |   P   |
      - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
      scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
      violate MaxSkew(1).
      - if MaxSkew is 2, incoming pod can be scheduled onto any zone.
      When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
      to topologies that satisfy it.
      It's a required field. Default value is 1 and 0 is not allowed.
    format: int32
    type: integer
  minDomains:
    description: |-
      MinDomains indicates a minimum number of eligible domains.
      When the number of eligible domains with matching topology keys is less than minDomains,
      Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
      And when the number of eligible domains with matching topology keys equals or greater than minDomains,
      this value has no effect on scheduling.
      As a result, when the number of eligible domains is less than minDomains,
      scheduler won't schedule more than maxSkew Pods to those domains.
      If value is nil, the constraint behaves as if MinDomains is equal to 1.
      Valid values are integers greater than 0.
      When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

      For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
      labelSelector spread as 2/2/2:
      | zone1 | zone2 | zone3 |
      |  P P  |  P P  |  P P  |
      The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
      In this situation, new pod with the same labelSelector cannot be scheduled,
      because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
      it will violate MaxSkew.
    format: int32
    type: integer
  nodeAffinityPolicy:
    description: |-
      NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
      when calculating pod topology spread skew. Options are:
      - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
      - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

      If this value is nil, the behavior is equivalent to the Honor policy.
    type: string
  nodeTaintsPolicy:
    description: |-
      NodeTaintsPolicy indicates how we will treat node taints when calculating
      pod topology spread skew. Options are:
      - Honor: nodes without taints, along with tainted nodes for which the incoming pod
      has a toleration, are included.
      - Ignore: node taints are ignored. All nodes are included.

      If this value is nil, the behavior is equivalent to the Ignore policy.
    type: string
  topologyKey:
    description: |-
      TopologyKey is the key of node labels. Nodes that have a label with this key
      and identical values are considered to be in the same topology.
      We consider each <key, value> as a "bucket", and try to put balanced number
      of pods into each bucket.
      We define a domain as a particular instance of a topology.
      Also, we define an eligible domain as a domain whose nodes meet the requirements of
      nodeAffinityPolicy and nodeTaintsPolicy.
      e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
      And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
      It's a required field.
    type: string
  whenUnsatisfiable:
    description: |-
      WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
      the spread constraint.
      - DoNotSchedule (default) tells the scheduler not to schedule it.
      - ScheduleAnyway tells the scheduler to schedule the pod in any location,
      but giving higher precedence to topologies that would help reduce the
      skew.
      A constraint is considered "Unsatisfiable" for an incoming pod
      if and only if every possible node assignment for that pod would violate
      "MaxSkew" on some topology.
      For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
      labelSelector spread as 3/1/1:
      | zone1 | zone2 | zone3 |
      | P P P |   P   |   P   |
      If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
      to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
      MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
      won't make it *more* imbalanced.
      It's a required field.
    type: string
required:
- maxSkew
- topologyKey
- whenUnsatisfiable
type: object


⚠️ Potential issue | 🟠 Major

Enforce documented topology constraint invariants in the schema

The descriptions document hard constraints, but the schema does not enforce several of them (whenUnsatisfiable allowed values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and then fail later during reconciliation/pod creation.

🔧 Proposed schema validation additions
                         maxSkew:
                           description: |-
                             MaxSkew describes the degree to which pods may be unevenly distributed.
@@
                           format: int32
+                          minimum: 1
                           type: integer
                         minDomains:
                           description: |-
                             MinDomains indicates a minimum number of eligible domains.
@@
                           format: int32
+                          minimum: 1
                           type: integer
@@
                         whenUnsatisfiable:
                           description: |-
                             WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
@@
+                          enum:
+                          - DoNotSchedule
+                          - ScheduleAnyway
                           type: string
+                      x-kubernetes-validations:
+                      - message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+                        rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+                      - message: matchLabelKeys cannot be set when labelSelector is not set
+                        rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'
                       required:
                       - maxSkew
                       - topologyKey
                       - whenUnsatisfiable

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 1117 - 1234, The CRD currently documents invariants but doesn't
enforce them; update the schema for the PodTopologySpread-like object to:
restrict whenUnsatisfiable to an enum of allowed values (e.g.,
"DoNotSchedule","ScheduleAnyway") on the whenUnsatisfiable property, add
minimum: 1 (and disallow 0) on maxSkew (format int32) and minimum: 1 on
minDomains (or use minimum: 1 when present), and add a validation rule that
requires labelSelector when matchLabelKeys is set (make matchLabelKeys mutually
exclusive with labelSelector and/or add a x-kubernetes-requirements or OpenAPI
dependency that enforces labelSelector presence when matchLabelKeys exists).
Target the properties named whenUnsatisfiable, maxSkew, minDomains,
matchLabelKeys and labelSelector in the CRD fragment to implement these
constraints.

@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 28, 2026
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from 02919b5 to 361dc49 Compare April 7, 2026 13:37
@openshift-ci openshift-ci Bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 7, 2026
@danielmellado
Contributor Author

/hold until openshift/cluster-monitoring-operator#2553 merges

@openshift-ci openshift-ci Bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 7, 2026
@danielmellado
Contributor Author

/cc @rexagod

@openshift-ci openshift-ci Bot requested a review from rexagod April 7, 2026 13:38
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from 361dc49 to d134762 Compare April 17, 2026 12:56
@danielmellado
Contributor Author

/unhold

@openshift-ci openshift-ci Bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 17, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml (1)

1175-1292: ⚠️ Potential issue | 🟠 Major

Enforce documented topology spread invariants in schema (still missing).

This block documents constraints, but schema-level enforcement is still incomplete (whenUnsatisfiable values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and fail later.

🔧 Proposed schema validation patch
                         maxSkew:
@@
                           format: int32
+                          minimum: 1
                           type: integer
                         minDomains:
@@
                           format: int32
+                          minimum: 1
                           type: integer
@@
                         whenUnsatisfiable:
@@
+                          enum:
+                          - DoNotSchedule
+                          - ScheduleAnyway
                           type: string
                       required:
                       - maxSkew
                       - topologyKey
                       - whenUnsatisfiable
                       type: object
+                      x-kubernetes-validations:
+                      - message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+                        rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+                      - message: matchLabelKeys cannot be set when labelSelector is not set
+                        rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 1175 - 1292, The CRD currently documents invariants but doesn't
enforce them; update the CRD's openAPIV3Schema to (1) restrict whenUnsatisfiable
to the allowed enum values (e.g., "DoNotSchedule","ScheduleAnyway"), (2) add
minimum: 1 for maxSkew and minDomains (and type integer) so maxSkew cannot be
0/negative and minDomains > 0, (3) add a conditional validation that when
minDomains is set then whenUnsatisfiable must equal "DoNotSchedule", and (4) add
validations requiring that matchLabelKeys cannot be present unless labelSelector
is set and that keys in matchLabelKeys do not overlap with labelSelector keys;
implement the conditional/complex checks using CRD CEL
(x-kubernetes-validations) or equivalent CRD validation mechanisms referencing
the fields matchLabelKeys, labelSelector, maxSkew, minDomains, and
whenUnsatisfiable so invalid CRs are rejected at admission time.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 2145-2158: The test "Should reject KubeStateMetricsConfig with
duplicate resource in additionalResourceLabels" uses an invalid resource value
"jobs" which triggers enum validation before duplicate detection; update the two
entries under spec.kubeStateMetricsConfig.additionalResourceLabels to use a
valid resource (for example "pods" or another accepted enum value) so the YAML
passes resource validation and the duplicate-resource check is exercised by
ClusterMonitoring validation.

---

Duplicate comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 1175-1292: The CRD currently documents invariants but doesn't
enforce them; update the CRD's openAPIV3Schema to (1) restrict whenUnsatisfiable
to the allowed enum values (e.g., "DoNotSchedule","ScheduleAnyway"), (2) add
minimum: 1 for maxSkew and minDomains (and type integer) so maxSkew cannot be
0/negative and minDomains > 0, (3) add a conditional validation that when
minDomains is set then whenUnsatisfiable must equal "DoNotSchedule", and (4) add
validations requiring that matchLabelKeys cannot be present unless labelSelector
is set and that keys in matchLabelKeys do not overlap with labelSelector keys;
implement the conditional/complex checks using CRD CEL
(x-kubernetes-validations) or equivalent CRD validation mechanisms referencing
the fields matchLabelKeys, labelSelector, maxSkew, minDomains, and
whenUnsatisfiable so invalid CRs are rejected at admission time.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 5847c428-e700-46b3-84f5-f167574373e9

📥 Commits

Reviewing files that changed from the base of the PR and between 361dc49 and d134762.

⛔ Files ignored due to path filters (6)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
  • openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • config/v1alpha1/types_cluster_monitoring.go

@rexagod
Member

rexagod commented Apr 22, 2026

@danielmellado could you please rebase this? Thanks again for the patch! :)

@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 22, 2026
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from d134762 to d1719c1 Compare April 23, 2026 10:21

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml (1)

2231-2506: Add explicit negative tests for nodeSelector and tolerations in kubeStateMetricsConfig.

nodeSelector and tolerations are only exercised in the all-fields happy path (Line 2237 and Line 2243). Unlike sibling configs in this file, there are no reject cases for empty/oversized values, so regressions in those validations could slip through.

🧪 Suggested test additions
+    - name: Should reject KubeStateMetricsConfig with empty nodeSelector
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            nodeSelector: {}
+      expectedError: 'spec.kubeStateMetricsConfig.nodeSelector: Invalid value: 0: spec.kubeStateMetricsConfig.nodeSelector in body should have at least 1 properties'
+
+    - name: Should reject KubeStateMetricsConfig with more than 10 nodeSelector entries
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            nodeSelector:
+              key1: val1
+              key2: val2
+              key3: val3
+              key4: val4
+              key5: val5
+              key6: val6
+              key7: val7
+              key8: val8
+              key9: val9
+              key10: val10
+              key11: val11
+      expectedError: 'spec.kubeStateMetricsConfig.nodeSelector: Too many: 11: must have at most 10 items'
+
+    - name: Should reject KubeStateMetricsConfig with empty tolerations array
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            tolerations: []
+      expectedError: 'spec.kubeStateMetricsConfig.tolerations: Invalid value: 0: spec.kubeStateMetricsConfig.tolerations in body should have at least 1 items'
+
+    - name: Should reject KubeStateMetricsConfig with more than 10 tolerations
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          kubeStateMetricsConfig:
+            tolerations:
+              - key: "key1"
+                operator: "Exists"
+              - key: "key2"
+                operator: "Exists"
+              - key: "key3"
+                operator: "Exists"
+              - key: "key4"
+                operator: "Exists"
+              - key: "key5"
+                operator: "Exists"
+              - key: "key6"
+                operator: "Exists"
+              - key: "key7"
+                operator: "Exists"
+              - key: "key8"
+                operator: "Exists"
+              - key: "key9"
+                operator: "Exists"
+              - key: "key10"
+                operator: "Exists"
+              - key: "key11"
+                operator: "Exists"
+      expectedError: 'spec.kubeStateMetricsConfig.tolerations: Too many: 11: must have at most 10 items'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 2231 - 2506, The test suite lacks negative cases for
kubeStateMetricsConfig.nodeSelector and kubeStateMetricsConfig.tolerations; add
reject tests that mirror existing patterns: (1) a case where
kubeStateMetricsConfig.nodeSelector is {} and assert an expectedError about
requiring at least one property; (2) a case where
kubeStateMetricsConfig.nodeSelector has an empty key or invalid value (if schema
enforces non-empty strings); (3) a case where kubeStateMetricsConfig.tolerations
is [] and assert an expectedError about requiring at least one item; and (4) a
case with too many tolerations (or duplicate tolerations) to assert the "Too
many" or "Duplicate value" errors; reference the kubeStateMetricsConfig,
nodeSelector, and tolerations keys when adding these new test entries consistent
with the surrounding test format.
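Following the conventions of the surrounding test entries, a rejection case for an empty nodeSelector might look like the sketch below. The expectedError string here is a guess at the schema-generated message (assuming a MinProperties=1 constraint) and would need to be checked against the actual apiserver output:

```yaml
- name: Should reject KubeStateMetricsConfig with empty nodeSelector
  initial: |
    apiVersion: config.openshift.io/v1alpha1
    kind: ClusterMonitoring
    spec:
      kubeStateMetricsConfig:
        nodeSelector: {}
  expectedError: 'spec.kubeStateMetricsConfig.nodeSelector in body should have at least 1 properties'
```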
config/v1alpha1/types_cluster_monitoring.go (1)

2502-2568: Consider extracting the shared scheduling/resources shape.

This block re-declares the same nodeSelector / resources / tolerations / topologySpreadConstraints shape already present in OpenShiftStateMetricsConfig. Pulling the common part into a shared struct would reduce drift in docs and validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/v1alpha1/types_cluster_monitoring.go` around lines 2502 - 2568,
KubeStateMetricsConfig duplicates the scheduling/resources fields from
OpenShiftStateMetricsConfig; extract a shared struct (e.g.,
PodSchedulingAndResources) containing NodeSelector, Resources, Tolerations, and
TopologySpreadConstraints and replace those fields in both
KubeStateMetricsConfig and OpenShiftStateMetricsConfig with a single
embedded/shared field of that new struct; update kubebuilder tags and JSON names
on the new struct to preserve validation (Min/MaxItems, listType/listMapKey,
optional, etc.) and adjust any references to the original fields in
code/comments to use the new struct.
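A minimal sketch of the suggested extraction, with the struct name `PodSchedulingAndResources` and the local `Toleration` stand-in purely illustrative — real API code would reuse the corev1 types and carry over the existing kubebuilder markers (MinItems/MaxItems, listType, optional) onto the shared struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Toleration is a local stand-in for corev1.Toleration, used only to keep
// this sketch self-contained.
type Toleration struct {
	Key      string `json:"key,omitempty"`
	Operator string `json:"operator,omitempty"`
}

// PodSchedulingAndResources is a hypothetical shared struct holding the
// scheduling/resources shape duplicated today between KubeStateMetricsConfig
// and OpenShiftStateMetricsConfig. The JSON names are unchanged, so inlining
// it does not alter the serialized CRD shape.
type PodSchedulingAndResources struct {
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`
	Tolerations  []Toleration      `json:"tolerations,omitempty"`
}

// KubeStateMetricsConfig embeds the shared struct; the `json:",inline"` tag
// keeps the fields promoted to the top level when marshaled.
type KubeStateMetricsConfig struct {
	PodSchedulingAndResources `json:",inline"`
}

// marshalConfig serializes a sample config to show the inlined field layout.
func marshalConfig() string {
	cfg := KubeStateMetricsConfig{
		PodSchedulingAndResources: PodSchedulingAndResources{
			NodeSelector: map[string]string{"kubernetes.io/os": "linux"},
			Tolerations:  []Toleration{{Key: "key1", Operator: "Exists"}},
		},
	}
	out, err := json.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	fmt.Println(marshalConfig())
}
```

Because embedding preserves the serialized field names, existing ClusterMonitoring CRs would remain valid after the refactor.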
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@config/v1alpha1/types_cluster_monitoring.go`:
- Around line 2601-2627: KubeStateMetricsLabelName currently only enforces
length; add CEL validation and tighten the max length to enforce valid
Kubernetes label keys or the wildcard "*". Update the type
KubeStateMetricsLabelName to change the MaxLength from 256 to 253 and add a
+kubebuilder:validation:XValidation CEL rule that allows either self == "*" or a
DNS1123 subdomain prefix + "/" + qualified name (or no prefix) using split('/')
and format.dns1123Subdomain.validate for the prefix and a regex for the name
(^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$), with an appropriate message like
"must be a valid Kubernetes label key or *"; keep the existing MinLength and
ensure the new validation applies to KubeStateMetricsLabelName so Labels
[]KubeStateMetricsLabelName inherits it.

---

Nitpick comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 2231-2506: The test suite lacks negative cases for
kubeStateMetricsConfig.nodeSelector and kubeStateMetricsConfig.tolerations; add
reject tests that mirror existing patterns: (1) a case where
kubeStateMetricsConfig.nodeSelector is {} and assert an expectedError about
requiring at least one property; (2) a case where
kubeStateMetricsConfig.nodeSelector has an empty key or invalid value (if schema
enforces non-empty strings); (3) a case where kubeStateMetricsConfig.tolerations
is [] and assert an expectedError about requiring at least one item; and (4) a
case with too many tolerations (or duplicate tolerations) to assert the "Too
many" or "Duplicate value" errors; reference the kubeStateMetricsConfig,
nodeSelector, and tolerations keys when adding these new test entries consistent
with the surrounding test format.

In `@config/v1alpha1/types_cluster_monitoring.go`:
- Around line 2502-2568: KubeStateMetricsConfig duplicates the
scheduling/resources fields from OpenShiftStateMetricsConfig; extract a shared
struct (e.g., PodSchedulingAndResources) containing NodeSelector, Resources,
Tolerations, and TopologySpreadConstraints and replace those fields in both
KubeStateMetricsConfig and OpenShiftStateMetricsConfig with a single
embedded/shared field of that new struct; update kubebuilder tags and JSON names
on the new struct to preserve validation (Min/MaxItems, listType/listMapKey,
optional, etc.) and adjust any references to the original fields in
code/comments to use the new struct.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: d4a84268-dde5-4503-a8ce-f4b5c2c51857

📥 Commits

Reviewing files that changed from the base of the PR and between d134762 and d1719c1.

⛔ Files ignored due to path filters (6)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
  • openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

Comment thread config/v1alpha1/types_cluster_monitoring.go
Comment thread config/v1alpha1/types_cluster_monitoring.go Outdated
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from d1719c1 to 57a8c79 Compare May 6, 2026 11:40
@openshift-ci openshift-ci Bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 6, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
config/v1alpha1/types_cluster_monitoring.go (1)

2623-2634: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use Kubernetes label-key validation here instead of a broad regex.

The current rule still accepts invalid label keys like team/a/b or prefixes that are not valid DNS subdomains, so the CRD can persist additionalResourceLabels values that kube-state-metrics should reject downstream. Please validate the optional prefix and name segment separately (single /, DNS1123 prefix, qualified-name suffix) rather than allowing any mix of ._/-.

For Kubernetes CRD / kubebuilder validation, what CEL rule correctly validates a Kubernetes label key (optional DNS1123 prefix + "/" + qualified name) while also allowing the wildcard "*"?
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@config/v1alpha1/types_cluster_monitoring.go` around lines 2623 - 2634,
Replace the broad XValidation on type KubeStateMetricsLabelName with a CEL rule
that allows "*" or enforces an optional single "/" prefix validated as a DNS1123
subdomain and a name segment validated as a Kubernetes qualified name; for
example use a rule like: self == '*' || (self.split('/').size() == 1 &&
self.matches('^[A-Za-z0-9]([A-Za-z0-9_.-]*[A-Za-z0-9])?$')) ||
(self.split('/').size() == 2 &&
self.split('/')[0].matches('^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$')
&& self.split('/')[1].matches('^[A-Za-z0-9]([A-Za-z0-9_.-]*[A-Za-z0-9])?$'));
update the +kubebuilder:validation:XValidation line on the
KubeStateMetricsLabelName type accordingly.
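The proposed CEL rule can be mirrored in Go (both CEL and Go use RE2 syntax) to check which inputs it accepts; the function name `validLabelKey` is illustrative, and the regexes are taken directly from the suggested rule above. Length limits (253 overall, 63 for the name segment) are enforced separately by min/maxLength in the schema and omitted here:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	// Qualified-name segment: alphanumeric ends, with -, _, . allowed inside.
	nameRe = regexp.MustCompile(`^[A-Za-z0-9]([A-Za-z0-9_.-]*[A-Za-z0-9])?$`)
	// Lowercase DNS-1123 subdomain for the optional prefix.
	prefixRe = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)
)

// validLabelKey mirrors the proposed CEL validation: "*", a bare qualified
// name, or a single DNS-1123 subdomain prefix + "/" + qualified name.
func validLabelKey(s string) bool {
	if s == "*" {
		return true
	}
	parts := strings.Split(s, "/")
	switch len(parts) {
	case 1:
		return nameRe.MatchString(parts[0])
	case 2:
		return prefixRe.MatchString(parts[0]) && nameRe.MatchString(parts[1])
	default:
		// More than one slash is never a valid label key.
		return false
	}
}

func main() {
	for _, k := range []string{"*", "app", "example.com/key", "a/b/c", "Example.COM/key"} {
		fmt.Printf("%-16s %v\n", k, validLabelKey(k))
	}
}
```

Unlike the earlier broad regex, this rejects `a/b/c` (two slashes) and `Example.COM/key` (uppercase prefix) while still accepting `example.com/key` and the wildcard.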
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 933-949: The current x-kubernetes-validations regex for
KubeStateMetricsLabelName (used by additionalResourceLabels.labels) allows
multiple slashes and mixed-case DNS prefixes; replace it with a stricter
validation that enforces at most one optional DNS subdomain prefix + '/' and a
single name segment that matches Kubernetes label-key syntax (DNS subdomain must
be lower-case DNS-1123 subdomain, name segment max 63 chars, begins/ends with
alphanumeric, and only contains alphanumerics, -, _, . in between). Update the
validation rule on the x-kubernetes-validations entry for
KubeStateMetricsLabelName so it rejects strings like "a/b/c" or
"Example.COM/key" by requiring a single optional lowercase DNS prefix and a
single valid name segment.

---

Duplicate comments:
In `@config/v1alpha1/types_cluster_monitoring.go`:
- Around line 2623-2634: Replace the broad XValidation on type
KubeStateMetricsLabelName with a CEL rule that allows "*" or enforces an
optional single "/" prefix validated as a DNS1123 subdomain and a name segment
validated as a Kubernetes qualified name; for example use a rule like: self ==
'*' || (self.split('/').size() == 1 &&
self.matches('^[A-Za-z0-9]([A-Za-z0-9_.-]*[A-Za-z0-9])?$')) ||
(self.split('/').size() == 2 &&
self.split('/')[0].matches('^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$')
&& self.split('/')[1].matches('^[A-Za-z0-9]([A-Za-z0-9_.-]*[A-Za-z0-9])?$'));
update the +kubebuilder:validation:XValidation line on the
KubeStateMetricsLabelName type accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 498ad17f-d61b-43f6-aa2f-a3df6fb77a07

📥 Commits

Reviewing files that changed from the base of the PR and between d1719c1 and 57a8c79.

⛔ Files ignored due to path filters (6)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
  • openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

Comment on lines +933 to +949
description: |-
  KubeStateMetricsLabelName is the name of a Kubernetes label to expose as a metric
  via kube-state-metrics. Use "*" to expose all labels for a resource.
  Must be either the wildcard "*" or a valid Kubernetes label key.
  A valid label key has an optional DNS subdomain prefix followed by a "/" and a name segment,
  or just a name segment without a prefix. The name segment must be 63 characters or fewer,
  beginning and ending with an alphanumeric character, with dashes, underscores, dots, and
  alphanumerics in between.
  Must be at least 1 character and at most 253 characters in length.
maxLength: 253
minLength: 1
type: string
x-kubernetes-validations:
- message: must be a valid Kubernetes label key or the wildcard '*'
  rule: self == '*' || self.matches('^[a-zA-Z0-9][a-zA-Z0-9_./-]*[a-zA-Z0-9]$') || self.matches('^[a-zA-Z0-9]$')

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Tighten labels validation to actual label-key syntax.

The rule on Lines 946-949 accepts invalid values such as a/b/c and Example.COM/key. That means additionalResourceLabels.labels can pass admission even though the field is documented as * or a valid Kubernetes label key.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 933 - 949, The current x-kubernetes-validations regex for
KubeStateMetricsLabelName (used by additionalResourceLabels.labels) allows
multiple slashes and mixed-case DNS prefixes; replace it with a stricter
validation that enforces at most one optional DNS subdomain prefix + '/' and a
single name segment that matches Kubernetes label-key syntax (DNS subdomain must
be lower-case DNS-1123 subdomain, name segment max 63 chars, begins/ends with
alphanumeric, and only contains alphanumerics, -, _, . in between). Update the
validation rule on the x-kubernetes-validations entry for
KubeStateMetricsLabelName so it rejects strings like "a/b/c" or
"Example.COM/key" by requiring a single optional lowercase DNS prefix and a
single valid name segment.

@openshift-ci
Contributor

openshift-ci Bot commented May 6, 2026

@danielmellado: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Comment thread config/v1alpha1/types_cluster_monitoring.go Outdated
Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado <dmellado@fedoraproject.org>
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from 57a8c79 to c8cfcbd Compare May 6, 2026 14:12

Labels

jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.

5 participants