Pull request overview

Risk Rating: MEDIUM

This PR introduces deterministic package-level sharding for core Go test jobs in CI to reduce flakiness and improve overall workflow throughput by splitting large test suites across multiple parallel jobs.

Changes:
- Added a small Go helper (`tools/ci-testshard`) to deterministically assign packages to shards (FNV-1a hash modulo shard count) and to verify shard coverage.
- Updated the `go_core_tests`, `go_core_tests_integration`, and `go_core_deployment_tests` runners to compute their package universe and run only the shard-assigned subset.
- Refactored `ci-core.yml` to generate a shard matrix, add a shard-consistency check, aggregate shard results into suite-level "result" jobs, and update artifact naming / SonarQube path collection accordingly.

Areas for scrupulous human review:
- `tools/bin/go_core_tests` and `tools/bin/go_core_deployment_tests`: ensure the new "package universe" matches the previous CI intent (including whether compile-only/no-test packages must still be exercised).
- `.github/workflows/ci-core.yml`: validate the job dependency graph and result aggregation logic (matrix generation, `gh api` job selection by name prefix, and gating behavior in `scan`).
Reviewed changes
Copilot reviewed 3 out of 6 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| `tools/ci-testshard/main.go` | New CLI utility for deterministic package sharding and shard coverage verification. |
| `tools/ci-testshard/main_test.go` | Unit tests covering package input parsing and sharding/verification behavior. |
| `tools/bin/go_core_tests_integration` | Computes the integration package list, shards it, and runs only shard-assigned packages. |
| `tools/bin/go_core_tests` | Computes the test package list, shards it, and runs only shard-assigned packages. |
| `tools/bin/go_core_deployment_tests` | Computes the deployment test package list, shards it, and runs only shard-assigned packages (within the deployment module). |
| `.github/workflows/ci-core.yml` | Adds shard matrix generation + consistency check, fans out sharded jobs, aggregates results, and updates artifact/report handling. |
If it solves resource contention and decreases flakiness, let's merge.

```yaml
- display_name: "Test Go Core Common Shard #1 Tests"
  testcmd: "go test -v ./pkg_1 ./pkg_2"
  runner: "ubuntu-latest" # can be more beefy if it improves overall performance
- display_name: "Test Go Core Common Shard #2 Tests"
  testcmd: "go test -v ./pkg_3 ./pkg_4"
  runner: "ubuntu-latest"
- display_name: "Test Go Core Deployments Shard #1 Tests"
  testcmd: "go test -v ./pkg_5 ./pkg_6"
  runner: "ubuntu-latest"
- display_name: "Flaky Shard #1 Common Tests"
  testcmd: "go test -v ./pkg_7"
  runner: "ubuntu-latest"
```

This is an example of a matrix job that's easy to read, in my opinion. I do understand we added Python and bash changes because the current pipeline is already overcomplicated, and it's good that we improved stats, but the long-term goal should be reducing the "go core" pipeline to a simple and very explicit matrix, plus adding automation in PRs to notify users that a new package is not yet included in CI. To me, that's the simplest approach: easy to shard packages explicitly, easy to rebalance, easy to move between shards without reading any additional code. If that's not enough for some cases, we can also have a dynamic function which returns packages based on some criteria, as @Tofel already did.
@skudasov explicit sharding is deffo the simplest, but I didn't go that route because it requires maintenance: new/modified package names have to be explicitly added, which from my POV increases the risk that some won't be executed if they are not manually added. That's why I went the way I did, since no maintenance is required at all. The trade-off is higher complexity and a lack of control over the bucket a given test lands in.
```sh
gh api \
  --paginate \
  --jq '.jobs[] | {name, conclusion} | @json' \
  "/repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/jobs?per_page=100" \
  > jobs.jsonl
```
Is this the canonical way to get this information? It seems really odd to reach back outside and query yourself, and I would worry about whether it is actually atomically updated, or if it would be racing.
I went that way so that we could keep a single job definition to execute all matrix jobs. Otherwise I'd need to either copy it or move it into a composite action, and I wanted to keep changes minimal.

That meant I needed to find a way to aggregate sharded job results to satisfy branch protection rules. I couldn't use `needs.core-tests.result`, because it only gives the aggregate result of the whole matrix job, not per child. GitHub Actions does not expose matrix child conclusions individually through `needs`, expressions, or normal job outputs.

Without the GH API I'd need to come up with a custom way of sharing results (e.g. via artifacts), which would make the whole thing even more complex, because I'd need to handle Trunk's rewriting of test results. So I went the API way, which returns the final result of each job.
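For illustration, the aggregation that this enables can be sketched as follows. This is not the exact workflow logic; the job-name prefix and the exit behavior are assumptions I've made up for the sketch:

```go
// aggregate.go - a sketch of gating on sharded job results, reading the
// jobs.jsonl produced by the gh api call quoted above. The job-name prefix
// and the exit behavior are illustrative assumptions, not the real workflow.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type job struct {
	Name       string `json:"name"`
	Conclusion string `json:"conclusion"`
}

func main() {
	const prefix = "Core Tests (" // hypothetical shard job name prefix
	failed := 0
	sc := bufio.NewScanner(os.Stdin) // e.g. go run aggregate.go < jobs.jsonl
	for sc.Scan() {
		var j job
		if err := json.Unmarshal(sc.Bytes(), &j); err != nil {
			continue // skip malformed lines
		}
		if strings.HasPrefix(j.Name, prefix) && j.Conclusion != "success" {
			fmt.Printf("shard failed: %s (%s)\n", j.Name, j.Conclusion)
			failed++
		}
	}
	if failed > 0 {
		os.Exit(1) // fail the aggregate job so branch protection blocks the merge
	}
}
```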
What makes you think races are possible if this job runs only after the tests job has finished?
> What makes you think races are possible if this job runs only after the tests job has finished?
It would not naturally happen with many distributed systems, since global consistency and atomicity are costly and don't scale, so I think depending on it is speculative unless GitHub has documented that this is OK.
What is the overall before vs. after here?
| "run-timeout-minutes": "15", | ||
| "touch-web-assets": "true", | ||
| "restore-build-cache-only": "true", | ||
| "increase-timeouts-on-schedule": "true", |
Why is there an additional qualifier for timeout increases?
That's how it worked before: we increase the default timeouts for fuzz/race tests and pass them as env vars to the bash scripts that run them.
There was not an extra qualifier before. We simply checked if we had a scheduled run, then increased timeouts.
| "touch-web-assets": "true", | ||
| "restore-build-cache-only": "true", | ||
| "increase-timeouts-on-schedule": "true", | ||
| "print-races": "true", |
Why do we need an extra variable to ensure that races are printed?
That's how it already worked: https://github.com/smartcontractkit/chainlink/blob/develop/.github/workflows/ci-core.yml#L419C1-L435C47
But the code you linked does not include an extra parameter to indicate whether races should be printed. It checks whether we are running races via `matrix.type.cmd == 'go_core_race_tests'`, and if so it prints them. If we need a different means of checking whether we are in race mode now, then so be it, but I don't think we should add extra params for each action that is implied.
| "print-races": "true", | |
| "race": "true", |
| "logs-artifact-name": f"go_core_tests_integration_logs_shard_{i}", | ||
| }) | ||
|
|
||
| def add_core_deployment_rows(): |
Why don't we need touch-web-assets for this one? Isn't that necessary to compile the core binary?
Because these tests run in the deployments folder, which is a separate Go module that doesn't compile the core/web package.
What is the rationale for including tests from that module here? This is otherwise a consistent set of variations on the core tests.
@jmank88 Roughly speaking it is the same, around 17 minutes, because the 3 packages mentioned in the PR result in a long tail. If we do go the sharding way, then I will go after the teams that own them and request they speed them up so that we can reap the benefits of sharding. Until then the only benefit will be less flakiness.
Huh, I only expected a speedup. Why would there be less flakiness? I don't think it is true that:

> flakiness caused by running too many test packages in parallel in a single job

Too many packages is not inherently a problem, because concurrency is already limited. There has to be more to it than that. Is database contention the problem?
Hard to get a speedup sharding at the package level when we have packages that run for 11 minutes. If those are sped up, then we will see a big speedup.
You mean limited by
I think it is one of the problems. Since every job will now start its own DB, issues related to DB contention should also have a lower impact.
Indirectly, yes. Both package- and test-level concurrency.
IMHO it is important to be certain about this. If avoiding DB contention is the main difference, then there are simpler approaches that we can try, and some of those will benefit local development as well rather than addressing only CI.
| "pkg/c", | ||
| "pkg/d", | ||
| "pkg/e", | ||
| "pkg/f", |
Not seeing a duplicate case.





Approach
This PR shards go_core_tests, go_core_tests_integration, and go_core_deployment_tests in CI to reduce flakiness caused by running too many test packages in parallel in a single job.
The sharding logic is intentionally simple and deterministic. Each existing test runner first computes the exact package list it already intends to run today, then passes that package list to a small Go helper. The helper assigns each package to a shard by hashing the package path with FNV-1a and taking hash % shard_count. This gives a stable, zero-maintenance partitioning scheme: no manual package lists, no balancing tables, and new test packages are picked up automatically.
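A minimal sketch of that assignment scheme, for illustration only (identifiers and the 32-bit hash width here are my assumptions; the real logic lives in `tools/ci-testshard`):

```go
// Sketch of deterministic package sharding via FNV-1a, mirroring the
// approach described above. Not the actual tools/ci-testshard source.
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor deterministically maps a package path to a shard:
// FNV-1a hash of the path, modulo the shard count.
func shardFor(pkg string, shardCount uint32) uint32 {
	h := fnv.New32a() // 32-bit width is an assumption; the tool may use 64-bit
	h.Write([]byte(pkg))
	return h.Sum32() % shardCount
}

func main() {
	// Hypothetical package universe, as computed by a test runner.
	pkgs := []string{"core/services/job", "core/chains/evm", "deployment/keystone"}
	const shards = 3
	for _, p := range pkgs {
		fmt.Printf("shard %d <- %s\n", shardFor(p, shards), p)
	}
}
```

Because the assignment depends only on the package path and the shard count, every job recomputes the same partition independently, and the coverage check reduces to re-deriving the assignment over the full universe and confirming each package lands in exactly one shard.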
Testing
Correctness is guarded in two ways. First, a consistency check verifies that for a given package universe, the union of all shards covers every package exactly once. Second, I compared the executed test sets between unsharded and sharded runs by extracting package + test name from gotestsum --jsonfile output and diffing the sorted results. For go_core_tests, go_core_tests_integration, and go_core_deployment_tests, the executed test sets matched exactly between the non-sharded and sharded runs.
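Roughly, the extraction step looks like this (a sketch assuming standard `go test -json` events as written by `gotestsum --jsonfile`; using the `run` action to enumerate tests and the I/O wiring are my choices, not the exact script used):

```go
// testset.go - sketch of the comparison described above: extract
// "package test" pairs from go test -json events, sorted so that
// the outputs of two runs can be diffed directly.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"sort"
)

type event struct {
	Action  string `json:"Action"`
	Package string `json:"Package"`
	Test    string `json:"Test"`
}

func main() {
	seen := map[string]struct{}{}
	sc := bufio.NewScanner(os.Stdin) // e.g. go run testset.go < unsharded.json
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		if e.Action == "run" && e.Test != "" {
			seen[e.Package+" "+e.Test] = struct{}{}
		}
	}
	keys := make([]string, 0, len(seen))
	for k := range seen {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Println(k) // run once per jsonfile, then diff the two outputs
	}
}
```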
Results:
- go_core_tests ✅
- go_core_deployment_tests ✅
- go_core_integration_tests ✅

Rejected approaches
Sharding based on regex/test names
We considered naive sharding by test-name regex, but rejected it because Go executes tests at package granularity and the suite is not evenly distributed by test-name prefix. In practice, leading-letter buckets were highly imbalanced, and the dominant runtime/flakiness hotspots are concentrated in a small number of heavy packages. Package-path hashing gives a simpler, future-proof, zero-maintenance partitioning mechanism that aligns with go test execution semantics.
Follow-up work
Speeding up outlier packages
To fully unlock the workflow execution speedup, some of the deployment tests will need to be made faster by the respective teams. Once that is done, a plausible p50 for CI Core might land in the zone of 10-11 minutes. The current execution swimlane is shown below:

Slowest packages that need speeding up:
Sharding fuzz tests
As can be seen in the graph above, the next far outlier is the fuzz tests. Once we shard them (2 shards should be enough), we should be able to cut workflow execution time down even further.