Add queue-state count regression benchmark#1211
bgentry wants to merge 2 commits into `brandur-queue-state-count`
Conversation
Here, add a new hook called `HookQueueStateCount` which gets invoked to produce job queue count statistics. We do this by adding a new maintenance service which, like other maintenance services, runs only on the leader, so we have only one client performing counts at any given time. Furthermore, so as not to introduce a potential operational problem without opt-in from River users, the counts run only if a `HookQueueStateCount` hook/middleware has been added to the client.

The reason we do all this is to implement a feature requested by one of our users: for `otelriver` in contrib to be able to emit queue count metrics, which seems like a pretty reasonable ask for the package, and something that every River user would likely want access to in their ops charts.

A slight oddity, but one which I think is _probably_ okay, is that the new hook ideally stays a hook, while the existing `otelriver` middleware is a middleware. It'd be nice not to have to put `otelriver.Middleware` into both a client's `Hooks` and `Middleware` configuration, so we modify the client to allow for hooks that are middleware and middleware that are hooks. This lets `otelriver.Middleware` continue doing what it was already doing, but also start producing new counts as a hook.
(force-pushed 847afa7 → f1bd30e)
This adds a benchmark on top of #1203 to make the queue-state count query regression easy to reproduce and discuss. It compares the current `JobCountByQueueAndState` implementation against the legacy query shape on the same migrated `river_job` schema. The benchmark stays lightweight by default, but can be scaled locally with `RIVER_BENCH_QUEUE_STATE_COUNT_NUM_JOBS` to reproduce the planner regression with a couple hundred thousand rows and quantify the gap.
(force-pushed f1bd30e → cda1dbf)
@bgentry Thanks for putting these benches in, but I think you're cheating a bit — counting a couple of states along an index isn't doing the same work as counting all jobs by state in the system. Your LLM comment here [1] suggests that a query that adds a subquery for every single possible state, where you count along the index for N states, would be more performant than just iterating through the table once and counting all states. I find that really hard to believe. Could you modify the benchmark to show that this is true?

[1] #1203 (comment)
@brandur Sorry it took me a couple days to follow back up on this—obviously a busy week 😅 I'm a bit confused about what you're asking: the benchmark results above compare the actual performance of […]. This is the actual query the UI uses today for the queue list page.
(force-pushed 40388a2 → 0806d84)
Stacked on #1203.
This adds a small pgx benchmark around `JobCountByQueueAndState` so we have a cheap default signal in normal runs, but can still scale it up locally with `RIVER_BENCH_QUEUE_STATE_COUNT_NUM_JOBS` when we want enough rows to make the planner choice obvious. The benchmark seeds a migrated `river_job` table in Go using batched `JobInsertFullMany` calls, and keeps the states evenly distributed across queues so every queue exercises every state.

On my machine (PostgreSQL 16.13, 200k rows, 100 queues, all 8 states distributed across every queue), the current query in #1203 regressed sharply against the previous query shape when I temporarily restored the old implementation locally and reran the same benchmark harness.
**EXPLAIN ANALYZE, 2 queues**

**EXPLAIN ANALYZE, 10 queues**

`EXPLAIN (ANALYZE, BUFFERS)` shows the reason. The new query filters on `queue` and groups by `(queue, state)`, so PostgreSQL no longer has a left-edge match on the existing `(state, queue, priority, scheduled_at, id)` index and falls back to a parallel sequential scan. The old query shape keeps separate `state = 'available'` and `state = 'running'` branches, so PostgreSQL can still use `river_job_prioritized_fetching_index` via index-only scans.

The committed benchmark only measures the current query. The old-query numbers above came from temporarily restoring the previous implementation locally, collecting the benchmark and plan data, and then restoring the branch state. This PR is only here to make the regression easy to reproduce and discuss before changing the SQL or adding another index.
Full `EXPLAIN (ANALYZE, BUFFERS)` output for 2 queues

Current query:

Old query:

Full `EXPLAIN (ANALYZE, BUFFERS)` output for 10 queues

Current query:

Old query: