## Walkthrough

This PR introduces Oracle module v2 with timestamp-based price storage replacing block-height indexing, establishes a new v2.1.0 upgrade with deployment and oracle migrations, removes the deprecated v2.0.0 upgrade, refactors BME ledger sequencing to use transient storage, updates the Pyth contract to v1.0.1, and modernizes build/release infrastructure while aligning test utilities and configuration examples.
## Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Oracle Keeper
    participant Transient Store
    participant Price Storage
    participant Epoch Hooks
    participant Price Pruning
    Client->>Oracle Keeper: AddPriceEntry(price, timestamp)
    Oracle Keeper->>Transient Store: Increment sequence for DataID
    Oracle Keeper->>Price Storage: Store price with timestamp + sequence
    Oracle Keeper->>Oracle Keeper: Update latest price ID (timestamp, sequence)
    Note over Client,Price Pruning: Price Aggregation
    Client->>Oracle Keeper: Query aggregated prices
    Oracle Keeper->>Price Storage: getTWAPHistory(timestamp range)
    Oracle Keeper->>Oracle Keeper: calculateAggregatedPricesFromHistory
    Oracle Keeper-->>Client: Return TWAP + health
    Note over Client,Price Pruning: Epoch-Driven Pruning
    Epoch Hooks->>Oracle Keeper: AfterEpochEnd (when PruneEpoch matches)
    Oracle Keeper->>Oracle Keeper: Compute cutoff time = blockTime - PriceRetention
    Oracle Keeper->>Price Storage: prunePrices(cutoff, maxPerEpoch)
    Price Storage-->>Oracle Keeper: Delete old records, return count
    Oracle Keeper->>Epoch Hooks: Log pruning results
```
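For orientation, here is a minimal Go sketch of the epoch-driven pruning hook from the diagram above; `PruneEpoch`, `PriceRetention`, `MaxPrunePerEpoch`, and `prunePrices` are assumed names inferred from the diagram, not the module's confirmed API:

```go
// AfterEpochEnd sketches the pruning flow from the diagram above.
// All params fields and helper names here are assumptions.
func (k Keeper) AfterEpochEnd(ctx sdk.Context, identifier string, _ int64) error {
	params, err := k.GetParams(ctx)
	if err != nil {
		return err
	}
	if identifier != params.PruneEpoch {
		return nil // only act on the configured prune epoch
	}

	// Cutoff time = blockTime - PriceRetention, as in the diagram.
	cutoff := ctx.BlockTime().Add(-params.PriceRetention)
	deleted, err := k.prunePrices(ctx, cutoff, params.MaxPrunePerEpoch)
	if err != nil {
		return err
	}

	ctx.Logger().Info("pruned oracle prices", "deleted", deleted, "cutoff", cutoff)
	return nil
}
```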
```mermaid
sequenceDiagram
    participant Pyth Contract
    participant Contract Storage
    participant Oracle Message
    participant Keeper
    participant Cosmos Bank
    Pyth Contract->>Contract Storage: Receive update with new publish_time
    Contract Storage->>Contract Storage: Load stored PRICE_FEED
    alt publish_time < stored publish_time
        Contract Storage->>Contract Storage: Reject (stale)
    else publish_time >= stored publish_time
        Contract Storage->>Contract Storage: Update PRICE_FEED
        Contract Storage->>Oracle Message: Build MsgAddPriceEntry (v2)
        Oracle Message->>Keeper: AddPriceEntry(price, timestamp)
        Keeper->>Cosmos Bank: Execute price update
    end
```
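And a hedged Go sketch of the keeper-side entry point that the second diagram terminates at; the message shape and helper names (`nextTransientSequence`, `storePrice`) are assumptions inferred from the walkthrough, not the actual server code:

```go
// AddPriceEntry sketches the v2 flow: a transient, per-block sequence
// disambiguates multiple prices that share one timestamp.
func (ms msgServer) AddPriceEntry(goCtx context.Context, msg *types.MsgAddPriceEntry) (*types.MsgAddPriceEntryResponse, error) {
	ctx := sdk.UnwrapSDKContext(goCtx)

	// Transient storage resets every block, so sequences restart each block.
	seq := ms.k.nextTransientSequence(ctx, msg.DataID)

	if err := ms.k.storePrice(ctx, msg.DataID, msg.Price, msg.Timestamp, seq); err != nil {
		return nil, err
	}
	return &types.MsgAddPriceEntryResponse{}, nil
}
```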
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
_run/node/price-feeder.sh (1)
80-86: ⚠️ Potential issue | 🟠 Major — Denom change inconsistent with threshold and log messages.
The balance query now filters for `akt` instead of `uakt`, but:

- Line 82 still uses threshold `100000` (appropriate for micro-units, not whole tokens)
- Lines 83 and 85 still log with a `uakt` suffix

If the chain now uses `akt` as the base denom (unlikely for Cosmos SDK chains, where micro-units are standard), the threshold needs adjustment. If this is an error, revert to `uakt`.

🐛 Proposed fix (assuming uakt is correct)
```diff
-  balance=$(akash query bank balances "$address" -o json 2>/dev/null | jq -r '.balances[] | select(.denom=="akt") | .amount // "0"')
+  balance=$(akash query bank balances "$address" -o json 2>/dev/null | jq -r '.balances[] | select(.denom=="uakt") | .amount // "0"')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/node/price-feeder.sh` around lines 80 - 86, The denom used in the jq filter is inconsistent with the threshold and log suffix: update the bank query so the balance variable matches micro-denom usage (change the jq select(.denom=="akt") back to select(.denom=="uakt")), leaving the numeric threshold (100000) and the log calls that append "uakt" unchanged; ensure the command that sets the balance variable (the akash query bank balances ... | jq ...) and the subsequent log "WARN"/"INFO" calls consistently reference the same denom.

_docs/pyth-integration.md (1)
696-738: ⚠️ Potential issue | 🟠 Major — Use protobuf duration strings in the v2 proposal example and table.

The repo's v2 payloads now use duration strings (`"60s"`, `"30s"`, etc.), but this guide still shows raw integers and block-based wording here. Copying this example as-is will give operators stale or invalid guidance.

Suggested doc update
- "max_price_staleness_period": 60, - "twap_window": 180, + "max_price_staleness_period": "60s", + "twap_window": "<duration>", -| `max_price_staleness_period` | i64 | Max age in seconds (~6s/block) | `60` | -| `twap_window` | i64 | TWAP calculation window (blocks) | `180` (~18 min) | +| `max_price_staleness_period` | duration | Max age before a price is stale | `"60s"` | +| `twap_window` | duration | TWAP calculation window | `"<duration>"` |🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 696 - 738, The example and table use integer seconds/blocks for MsgUpdateParams fields max_price_staleness_period and twap_window but v2 expects protobuf duration strings; update the JSON snippet and the Oracle Parameters table to use duration strings (e.g. "max_price_staleness_period": "60s", "twap_window": "180s" or other human-readable durations) and remove block-based wording, ensuring all references to these fields in the example and table reflect the duration-string format required by the v2 payloads.

contracts/pyth/src/oracle.rs (1)
58-72: ⚠️ Potential issue | 🟡 Minor — Add validation for `timestamp_nanos` in the constructor.

`google.protobuf.Timestamp.nanos` must be in the range `0..999_999_999`. The constructor accepts any `i32` value without validation, so callers can construct a message that encodes successfully locally but fails when submitted to the chain. Validate the range before assigning the field.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/oracle.rs` around lines 58 - 72, The constructor new currently accepts any i32 for timestamp_nanos; add a range check to ensure timestamp_nanos is between 0 and 999_999_999 (inclusive) before assigning it to the struct. Modify new (the constructor for the oracle struct) to validate timestamp_nanos and return an Err result (or otherwise propagate a clear error) if the value is out of range, rather than allowing invalid values to be set, so callers cannot construct messages that will fail on-chain; keep the DataID::new call and other fields unchanged.
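The invariant itself is language-neutral; here is a minimal sketch of the check in Go (the Rust constructor would mirror it):

```go
// Valid google.protobuf.Timestamp requires 0 <= nanos <= 999_999_999.
func validateTimestampNanos(nanos int32) error {
	if nanos < 0 || nanos > 999_999_999 {
		return fmt.Errorf("timestamp nanos out of range: %d", nanos)
	}
	return nil
}
```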
x/oracle/keeper/keeper.go (1)

568-622: ⚠️ Potential issue | 🟠 Major — Base aggregate metadata on the sources that actually produced a TWAP.

This loop skips sources whose `calculateTWAP` fails, but `MedianPrice`, `MinPrice`, `MaxPrice`, and `NumSources` are still derived from `latestPrices`. That can overstate source health and mix failed sources back into the aggregate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/keeper.go` around lines 568 - 622, The aggregation currently computes MedianPrice, MinPrice, MaxPrice, and NumSources from latestPrices even when some sources failed TWAP; change the code to only include sources that successfully produced a TWAP: when iterating over latestPrices and calling calculateTWAP(record) append both the twap and the corresponding source record (e.g., successfulSources) so later use successfulSources (not latestPrices) as input to calculateMedian, the min/max loop, and to set aggregated.NumSources; keep aggregateTWAP computed from twaps as before and use calculateDeviationBps on the min/max derived from successfulSources, and ensure Timestamp and other fields remain set as before.
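A hedged sketch of the suggested restructuring; the helper names (`calculateTWAP`, `calculateMedian`) come from the comment above, while the slice name and exact signatures are assumptions:

```go
twaps := make([]sdkmath.LegacyDec, 0, len(latestPrices))
successfulSources := make([]types.PriceDataRecord, 0, len(latestPrices))

for _, record := range latestPrices {
	twap, err := k.calculateTWAP(ctx, record)
	if err != nil {
		continue // failed sources must not feed any aggregate metadata
	}
	twaps = append(twaps, twap)
	successfulSources = append(successfulSources, record)
}

// Median, min/max, and NumSources now reflect only sources that
// actually produced a TWAP.
aggregated.MedianPrice = calculateMedian(successfulSources)
aggregated.NumSources = uint32(len(successfulSources))
```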
🧹 Nitpick comments (4)
x/oracle/keeper/codec.go (1)
373-377: Fix misleading comment.

The comment says "big-endian uint64" but the code correctly uses `PutUint32` for the `Source` field. This inconsistency could confuse future maintainers.

📝 Proposed fix
```diff
 func (d priceDataRecordIDLegacyCodec) Encode(buffer []byte, key types.PriceDataRecordID) (int, error) {
 	offset := 0
-	// Write source id as big-endian uint64 (for proper ordering)
+	// Write source id as big-endian uint32 (for proper ordering)
 	binary.BigEndian.PutUint32(buffer, key.Source)
 	offset += 4
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/codec.go` around lines 373 - 377, The comment in priceDataRecordIDLegacyCodec.Encode incorrectly states "big-endian uint64" while the code uses binary.BigEndian.PutUint32 to write key.Source; update the comment to accurately describe that Source is written as a big-endian uint32 (or remove the incorrect size reference) so it matches the implementation (see function priceDataRecordIDLegacyCodec.Encode and the PutUint32 call writing key.Source).

upgrades/software/v2.1.0/upgrade.go (1)
109-123: Potential unintended error return.

At line 122, `return toVM, err` returns the `err` variable, which at this point holds the result from `up.Keepers.Akash.Bme.SetParams`. However, if `MaxPendingAttempts != 0` (the `if` block is skipped), `err` still holds the value from the `bparams` fetch at line 109. If that succeeded, this is fine, but the logic is subtle.

Consider explicitly returning `nil` for clarity:

📝 Proposed fix for clarity
```diff
 	if bparams.MaxPendingAttempts == 0 {
 		bparams.MaxPendingAttempts = 3
 		err = up.Keepers.Akash.Bme.SetParams(sctx, bparams)
 		if err != nil {
 			return toVM, fmt.Errorf("failed to set bme params: %w", err)
 		}
 	}

-	return toVM, err
+	return toVM, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.1.0/upgrade.go` around lines 109 - 123, The final return currently returns the variable err which may hold a stale value from up.Keepers.Akash.Bme.GetParams or SetParams; change the final return in the function (after the MaxPendingAttempts handling) to explicitly return nil for the error (i.e., return toVM, nil) so callers don't receive an unintended error when no error actually occurred; update the return at the end of the block that follows the bparams logic where bparams, err := up.Keepers.Akash.Bme.GetParams(sctx) and the subsequent SetParams call are handled.

x/oracle/keeper/abci.go (1)
34-38: Consider returning error instead of silently continuing.

When `GetParams` fails, the function logs and returns `nil`, effectively skipping all price aggregation. This could silently mask misconfiguration or storage issues. Consider whether failing loudly (returning the error) would be more appropriate for a consensus-critical operation.

♻️ Alternative: return the error
```diff
 	params, err := k.GetParams(sctx)
 	if err != nil {
 		sctx.Logger().Error("failed to get oracle params", "error", err)
-		return nil
+		return err
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/abci.go` around lines 34 - 38, The handler currently swallows GetParams errors by logging and returning nil; instead propagate the failure so consensus can react: replace the current log-and-return-nil behavior after params, err := k.GetParams(sctx) with returning the error (or a wrapped error) to the caller (e.g., return fmt.Errorf("GetParams: %w", err) or just return err) so callers of this function can fail loudly; ensure you update the function signature/return path if necessary to propagate the error from this function rather than continuing silently (refer to GetParams and sctx.Logger().Error in abci.go).

x/oracle/keeper/timestamp.go (1)
87-93: Consider adding input validation in `readDigits`.

The function assumes all bytes in the buffer are ASCII digits ('0'-'9'). If a malformed buffer is passed (e.g., containing non-digit characters), the result will be silently incorrect rather than returning an error. Since `DecodeTimestamp` is used for parsing stored keys, malformed data would propagate as invalid timestamps.

♻️ Proposed validation

```diff
 func readDigits(buf []byte) int {
 	n := 0
 	for _, b := range buf {
+		if b < '0' || b > '9' {
+			return -1 // or consider returning (int, error)
+		}
 		n = n*10 + int(b-'0')
 	}
 	return n
 }
```

Then validate the return value in `DecodeTimestamp`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/timestamp.go` around lines 87 - 93, readDigits currently assumes every byte is '0'–'9' and returns a silently wrong value on malformed input; change readDigits to validate each byte (returning an (int, error) pair or a sentinel error) when a non-digit is encountered and update DecodeTimestamp to call the new readDigits signature and propagate/handle the error (e.g., return a decoding error) so malformed stored keys produce a clear error instead of incorrect timestamps.
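To illustrate how the sentinel could be consumed, here is a hypothetical sketch of `DecodeTimestamp`; the `"<seconds>|<nanos>"` key layout is an assumption for illustration, not the module's actual encoding:

```go
func DecodeTimestamp(buf []byte) (time.Time, error) {
	sep := bytes.IndexByte(buf, '|')
	if sep < 0 {
		return time.Time{}, fmt.Errorf("malformed timestamp key: missing separator")
	}
	secs := readDigits(buf[:sep])
	nanos := readDigits(buf[sep+1:])
	if secs < 0 || nanos < 0 { // -1 sentinel from readDigits
		return time.Time{}, fmt.Errorf("malformed timestamp key: non-digit byte")
	}
	return time.Unix(int64(secs), int64(nanos)), nil
}
```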
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@_run/node/deployment.yaml`:
- Around line 43-45: The deployment pricing denom is set to the wrong token
string: change the 'denom' value under the 'web' pricing block from "uact" back
to "uakt" so it matches the chain's configured DENOM; locate the 'web' pricing
stanza (keys 'web', 'denom', 'amount') in the deployment YAML and update the
denom literal to "uakt".
In `@_run/node/price-feeder.sh`:
- Around line 13-14: The script references AKASH_NODE (e.g., in the status URL
and log "Node: $AKASH_NODE") but the default assignment is commented out,
causing an unbound variable error under set -euo pipefail; fix by reintroducing
a sensible default assignment for AKASH_NODE in price-feeder.sh (for example a
local RPC like http://localhost:26657 or another project-appropriate default) or
otherwise ensure AKASH_NODE is exported before use, and update the top-of-file
assignment so AKASH_NODE="${AKASH_NODE:=<default>}" is present and used by the
status and log references.
In `@contracts/pyth/src/contract.rs`:
- Around line 257-267: The branch that updates PRICE_FEED currently sets
price_feed.prev_publish_time = price_feed.publish_time even when publish_time ==
price_feed.publish_time, which causes prev_publish_time to lose the last
distinct observation; change the logic in the if block (the publish_time >=
price_feed.publish_time check) so that you still accept equal publish_time
updates for price, conf and expo, but only set price_feed.prev_publish_time =
price_feed.publish_time when publish_time > price_feed.publish_time (strictly
greater); then assign price_feed.price/conf/expo/publish_time and call
PRICE_FEED.save(deps.storage, &price_feed)? as before.
In `@go.mod`:
- Line 3: The Docker test image is unpinned (uses golang:alpine) while go.mod
specifies Go 1.25.9; update _build/test.Dockerfile to use the matching image tag
(e.g., golang:1.25.9-alpine) so the container Go runtime matches go.mod's "go
1.25.9"; ensure any related build or FROM lines referencing golang:alpine are
replaced with the explicit version to avoid mismatches with the go.mod
declaration.
In `@make/releasing.mk`:
- Around line 55-88: The GORELEASER_SKIP value is built earlier but never
forwarded into GORELEASER_ARGS, so make invocations with GORELEASER_SKIP become
no-ops; fix this by appending GORELEASER_SKIP into the argument list before use
(e.g. add a line like GORELEASER_ARGS += $(GORELEASER_SKIP) above or inside the
goreleaser target) so the goreleaser target uses GORELEASER_ARGS (and optionally
also export/pass GORELEASER_SKIP to the docker environment if you need it inside
the container).
In `@script/upgrades.sh`:
- Around line 599-601: The unconditional copy cp -r "$srcval/wasm" ./ can cause
the script to exit under set -e if that directory is missing; change it to check
for the wasm directory before copying (e.g., test -d "$srcval/wasm" or [ -e
"$srcval/wasm" ] and only run the cp when present) so the rm -rf "wasm", cp -r
"$srcval/data" ./, and subsequent operations won’t hard fail when $srcval/wasm
is absent.
In `@script/wasm2go.sh`:
- Line 12: Update the script header usage comment to reflect the new target path
for the generated Go file (v2.1.0) so it matches the OUTPUT_FILE value; locate
the OUTPUT_FILE="upgrades/software/v2.1.0/contracts.go" assignment and update
the earlier usage/comment text that still references v2.0.0/contracts.go (and
the similar comment around the other occurrence at the block near line 40) to
say v2.1.0/contracts.go.
In `@testutil/oracle/price_feeder.go`:
- Around line 45-49: The params setup uses raw integers for
MaxPriceStalenessPeriod and TwapWindow which are interpreted as nanoseconds;
update the assignments in the params returned by oraclev1.DefaultParams() (the
params variable where you set params.Sources, params.MinPriceSources) to use
time.Duration literals, e.g. set params.MaxPriceStalenessPeriod and
params.TwapWindow by multiplying the intended seconds by time.Second (e.g. 1000
* time.Second and 10 * time.Second or the correct intended durations) so
downstream uses of these fields (e.g., in keeper logic that calls
now.Add(-params.MaxPriceStalenessPeriod)) operate with the correct time units.
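A minimal sketch of the corrected assignments, assuming the field and package names used in the prompt above:

```go
params := oraclev1.DefaultParams()
// time.Duration is nanosecond-based: a bare 1000 means 1000ns, not 1000s.
params.MaxPriceStalenessPeriod = 1000 * time.Second
params.TwapWindow = 10 * time.Second
```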
In `@upgrades/CHANGELOG.md`:
- Around line 43-49: The markdown has a heading level mismatch around the
"v2.1.0" and "Description" headings; make the levels consistent (no jumps) by
adjusting the heading markers for either the "v2.1.0" heading or the
"Description" heading so they match the surrounding sections (e.g., use the same
number of # for "v2.1.0" and "Description" or increment/decrement by only one
level) to satisfy markdownlint MD001.
In `@upgrades/software/v2.1.0/oracle.go`:
- Around line 29-42: The current code in oracle.go materializes all keys into
keys before deleting, which can OOM; change to a batched delete loop: iterate
with store.Iterator(nil, nil) but collect at most a fixed batchSize (e.g.
500-1000) into a temporary slice (batchKeys) using bytes.Clone(iter.Key()), then
close the iterator (check iter.Close() error), loop over batchKeys calling
store.Delete(key), and repeat by creating a new iterator starting after the last
deleted key (use the last cloned key as the new start bound) until no more keys
remain; ensure you still check and propagate iter.Close() errors and remove the
single monolithic keys slice usage.
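One way to realize the batched delete described above, assuming a `storetypes.KVStore`-style interface with `Iterator` and `Delete`; since every visited key is deleted, each fresh iterator can simply start from `nil` again:

```go
const batchSize = 1000

for {
	iter := store.Iterator(nil, nil)
	batch := make([][]byte, 0, batchSize)
	for ; iter.Valid() && len(batch) < batchSize; iter.Next() {
		batch = append(batch, bytes.Clone(iter.Key())) // clone: key is invalidated on Next
	}
	if err := iter.Close(); err != nil {
		return err
	}
	for _, key := range batch {
		store.Delete(key)
	}
	if len(batch) < batchSize {
		break // store drained
	}
}
```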
In `@x/epochs/keeper/abci.go`:
- Line 14: The telemetry call in BeginBlocker is using the wrong metric key:
change the deferred call telemetry.ModuleMeasureSince(types.ModuleName, start,
telemetry.MetricKeyEndBlocker) (in the BeginBlocker function) to use
telemetry.MetricKeyBeginBlocker so the duration is recorded under the
BeginBlocker metric; leave types.ModuleName and start unchanged.
In `@x/escrow/keeper/grpc_query.go`:
- Around line 79-143: The pagination handling only supports Key and Limit but
silently ignores Offset, Reverse and CountTotal; add explicit validation at the
start of the query handler(s) (the accounts loop using req.Pagination and the
analogous Payments handler) to reject unsupported modes: if req.Pagination is
non-nil and (req.Pagination.Offset != 0 || req.Pagination.Reverse ||
req.Pagination.CountTotal) return a gRPC status.Error(codes.InvalidArgument,
"<brief message>") (or similar) so callers receive an error instead of incorrect
results; place this check before building searchPrefix/iterators (before using
req.Pagination.Limit in the loop) and reference the req.Pagination symbol in
both the Accounts and Payments handlers when implementing the validation.
In `@x/oracle/keeper/keeper.go`:
- Around line 197-217: The code hard-codes a 12s freshness floor for the first
sample by setting latestID.Timestamp = ctx.BlockTime().Add(-(time.Second * 12));
replace this with the module's configured staleness window: when
k.loadLatestPriceID returns collections.ErrNotFound treat it as the first record
and compute the floor from the oracle params (e.g. read the staleness window
from k.GetParams(ctx).<StalenessField> or the appropriate params accessor) and
set latestID.Timestamp = ctx.BlockTime().Add(-stalenessWindow) so the check
latestID.Timestamp.After(timestamp) uses the configured policy rather than a
fixed 12s. Ensure you reference and use the existing params accessor on keeper k
(or the correct params field name) rather than introducing a new hard-coded
constant.
- Around line 655-663: The fallback branch in the TWAP computation (in keeper.go
around the block referencing totalWeight and dataPoints) currently averages all
same-timestamp samples, which can reintroduce superseded prices; change the
fallback to return the newest same-timestamp sample instead of averaging — i.e.,
when totalWeight == 0 return the first dataPoints entry's price
(dataPoints[0].State.Price) as the TWAP (ensure dataPoints is non-empty before
accessing it), rather than computing the mean.
In `@x/oracle/keeper/prune.go`:
- Around line 106-109: When deleting keys in the for loop over toDelete, ensure
you refresh the in-memory/latest index for that (source, denom, baseDenom)
prefix if you just removed its newest sample: after k.prices.Remove(ctx, key)
succeeds, check whether the deleted key equals the stored latestPriceID for that
prefix (use the same prefix extraction logic used elsewhere), and if so query
k.prices for the next-most-recent remaining record for that prefix (or clear the
entry) and update latestPriceID accordingly so it never points at a removed
record.
---
Outside diff comments:
In `@_docs/pyth-integration.md`:
- Around line 696-738: The example and table use integer seconds/blocks for
MsgUpdateParams fields max_price_staleness_period and twap_window but v2 expects
protobuf duration strings; update the JSON snippet and the Oracle Parameters
table to use duration strings (e.g. "max_price_staleness_period": "60s",
"twap_window": "180s" or other human-readable durations) and remove block-based
wording, ensuring all references to these fields in the example and table
reflect the duration-string format required by the v2 payloads.
In `@_run/node/price-feeder.sh`:
- Around line 80-86: The denom used in the jq filter is inconsistent with the
threshold and log suffix: update the bank query so the balance variable matches
micro-denom usage (change the jq select(.denom=="akt") back to
select(.denom=="uakt")), leaving the numeric threshold (100000) and the log
calls that append "uakt" unchanged; ensure the command that sets the balance
variable (the akash query bank balances ... | jq ...) and the subsequent log
"WARN"/"INFO" calls consistently reference the same denom.
In `@contracts/pyth/src/oracle.rs`:
- Around line 58-72: The constructor new currently accepts any i32 for
timestamp_nanos; add a range check to ensure timestamp_nanos is between 0 and
999_999_999 (inclusive) before assigning it to the struct. Modify new (the
constructor for the oracle struct) to validate timestamp_nanos and return an Err
result (or otherwise propagate a clear error) if the value is out of range,
rather than allowing invalid values to be set, so callers cannot construct
messages that will fail on-chain; keep the DataID::new call and other fields
unchanged.
In `@x/oracle/keeper/keeper.go`:
- Around line 568-622: The aggregation currently computes MedianPrice, MinPrice,
MaxPrice, and NumSources from latestPrices even when some sources failed TWAP;
change the code to only include sources that successfully produced a TWAP: when
iterating over latestPrices and calling calculateTWAP(record) append both the
twap and the corresponding source record (e.g., successfulSources) so later use
successfulSources (not latestPrices) as input to calculateMedian, the min/max
loop, and to set aggregated.NumSources; keep aggregateTWAP computed from twaps
as before and use calculateDeviationBps on the min/max derived from
successfulSources, and ensure Timestamp and other fields remain set as before.
---
Nitpick comments:
In `@upgrades/software/v2.1.0/upgrade.go`:
- Around line 109-123: The final return currently returns the variable err which
may hold a stale value from up.Keepers.Akash.Bme.GetParams or SetParams; change
the final return in the function (after the MaxPendingAttempts handling) to
explicitly return nil for the error (i.e., return toVM, nil) so callers don't
receive an unintended error when no error actually occurred; update the return
at the end of the block that follows the bparams logic where bparams, err :=
up.Keepers.Akash.Bme.GetParams(sctx) and the subsequent SetParams call are
handled.
In `@x/oracle/keeper/abci.go`:
- Around line 34-38: The handler currently swallows GetParams errors by logging
and returning nil; instead propagate the failure so consensus can react: replace
the current log-and-return-nil behavior after params, err := k.GetParams(sctx)
with returning the error (or a wrapped error) to the caller (e.g., return
fmt.Errorf("GetParams: %w", err) or just return err) so callers of this function
can fail loudly; ensure you update the function signature/return path if
necessary to propagate the error from this function rather than continuing
silently (refer to GetParams and sctx.Logger().Error in abci.go).
In `@x/oracle/keeper/codec.go`:
- Around line 373-377: The comment in priceDataRecordIDLegacyCodec.Encode
incorrectly states "big-endian uint64" while the code uses
binary.BigEndian.PutUint32 to write key.Source; update the comment to accurately
describe that Source is written as a big-endian uint32 (or remove the incorrect
size reference) so it matches the implementation (see function
priceDataRecordIDLegacyCodec.Encode and the PutUint32 call writing key.Source).
In `@x/oracle/keeper/timestamp.go`:
- Around line 87-93: readDigits currently assumes every byte is '0'–'9' and
returns a silently wrong value on malformed input; change readDigits to validate
each byte (returning an (int, error) pair or a sentinel error) when a non-digit
is encountered and update DecodeTimestamp to call the new readDigits signature
and propagate/handle the error (e.g., return a decoding error) so malformed
stored keys produce a clear error instead of incorrect timestamps.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: de5264ca-a624-4d0b-9075-e3632fb21ee6
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (72)
- `.agents/skills/setup-env/SKILL.md`
- `.agents/skills/smartcontracts/SKILL.md`
- `.agents/skills/vanity/SKILL.md`
- `.github/workflows/release.yaml`
- `.goreleaser.yaml`
- `Cargo.toml`
- `_docs/pyth-integration.md`
- `_run/init.sh`
- `_run/node/deployment.yaml`
- `_run/node/price-feeder.sh`
- `_run/node/prop.json`
- `app/ante.go`
- `app/types/app.go`
- `contracts/pyth/Cargo.toml`
- `contracts/pyth/src/contract.rs`
- `contracts/pyth/src/integration_tests.rs`
- `contracts/pyth/src/oracle.rs`
- `go.mod`
- `make/releasing.mk`
- `meta.json`
- `script/upgrades.sh`
- `script/wasm2go.sh`
- `tests/e2e/oracle_cli_test.go`
- `tests/e2e/oracle_grpc_test.go`
- `tests/e2e/pyth_contract_test.go`
- `tests/upgrade/test-cases.json`
- `testutil/oracle/price_feeder.go`
- `testutil/state/suite.go`
- `upgrades/CHANGELOG.md`
- `upgrades/software/v2.0.0/contracts.go`
- `upgrades/software/v2.0.0/deployment.go`
- `upgrades/software/v2.0.0/init.go`
- `upgrades/software/v2.0.0/upgrade.go`
- `upgrades/software/v2.0.0/wasm.go`
- `upgrades/software/v2.1.0/contracts.go`
- `upgrades/software/v2.1.0/deployment.go`
- `upgrades/software/v2.1.0/init.go`
- `upgrades/software/v2.1.0/oracle.go`
- `upgrades/software/v2.1.0/upgrade.go`
- `upgrades/upgrades.go`
- `x/bme/keeper/abci.go`
- `x/bme/keeper/burn_mint_test.go`
- `x/bme/keeper/grpc_query.go`
- `x/bme/keeper/keeper.go`
- `x/bme/keeper/key.go`
- `x/bme/simulation/genesis.go`
- `x/bme/simulation/proposals.go`
- `x/deployment/keeper/abci.go`
- `x/deployment/keeper/grpc_query.go`
- `x/deployment/keeper/keeper.go`
- `x/deployment/module.go`
- `x/epochs/keeper/abci.go`
- `x/escrow/keeper/grpc_query.go`
- `x/market/keeper/grpc_query.go`
- `x/market/keeper/keeper.go`
- `x/oracle/alias.go`
- `x/oracle/handler/server.go`
- `x/oracle/keeper/abci.go`
- `x/oracle/keeper/codec.go`
- `x/oracle/keeper/genesis.go`
- `x/oracle/keeper/grpc_query.go`
- `x/oracle/keeper/grpc_query_test.go`
- `x/oracle/keeper/keeper.go`
- `x/oracle/keeper/key.go`
- `x/oracle/keeper/prune.go`
- `x/oracle/keeper/timestamp.go`
- `x/oracle/keeper/timestamp_test.go`
- `x/oracle/module.go`
- `x/oracle/simulation/genesis.go`
- `x/oracle/simulation/proposals.go`
- `x/wasm/bindings/akash_query.go`
- `x/wasm/keeper/msg_filter.go`
💤 Files with no reviewable changes (5)
- x/deployment/keeper/keeper.go
- upgrades/software/v2.0.0/init.go
- upgrades/software/v2.0.0/wasm.go
- upgrades/software/v2.0.0/upgrade.go
- upgrades/software/v2.0.0/deployment.go
```yaml
    web:
      denom: uact
      amount: 1000
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# 1) Check denom usage consistency in YAML manifests
rg -n --iglob '*.yml' --iglob '*.yaml' '\b(uakt|uact)\b' -C2
# 2) Check docs/examples for expected denom references
rg -n '\b(uakt|uact)\b' -C2 README.md docs _run 2>/dev/null || true
```

Repository: akash-network/node
Length of output: 12005
Change uact back to uakt in deployment pricing denom.
Line 44 sets the pricing denom to uact, but the chain is configured with DENOM=uakt (see _run/node/docker-compose.yaml:74, _run/init.sh:35, and _run/common.mk:36). This mismatch will cause bidding/payment transactions to fail at runtime. Align with the chain's configured token denom.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_run/node/deployment.yaml` around lines 43 - 45, The deployment pricing denom
is set to the wrong token string: change the 'denom' value under the 'web'
pricing block from "uact" back to "uakt" so it matches the chain's configured
DENOM; locate the 'web' pricing stanza (keys 'web', 'denom', 'amount') in the
deployment YAML and update the denom literal to "uakt".
```bash
AKASH_CHAIN_ID="${AKASH_CHAIN_ID:=local}"
#AKASH_NODE="${AKASH_NODE:=https://testnetrpc.akashnet.net:443}"
```
Undefined AKASH_NODE will cause script failure.
AKASH_NODE is commented out but still referenced on line 123 (${AKASH_NODE}/status) and line 191 (log "INFO" "Node: $AKASH_NODE"). With set -euo pipefail enabled, the script will fail with an unbound variable error if AKASH_NODE is not provided externally.
Either provide a default value or remove the comment:
🐛 Proposed fix
```diff
 AKASH_CHAIN_ID="${AKASH_CHAIN_ID:=local}"
-#AKASH_NODE="${AKASH_NODE:=https://testnetrpc.akashnet.net:443}"
+AKASH_NODE="${AKASH_NODE:=http://localhost:26657}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_run/node/price-feeder.sh` around lines 13 - 14, The script references
AKASH_NODE (e.g., in the status URL and log "Node: $AKASH_NODE") but the default
assignment is commented out, causing an unbound variable error under set -euo
pipefail; fix by reintroducing a sensible default assignment for AKASH_NODE in
price-feeder.sh (for example a local RPC like http://localhost:26657 or another
project-appropriate default) or otherwise ensure AKASH_NODE is exported before
use, and update the top-of-file assignment so
AKASH_NODE="${AKASH_NODE:=<default>}" is present and used by the status and log
references.
```rust
// Update contract storage for same or newer timestamps.
// Pyth may submit multiple prices within the same timestamp
// but from different slots — all should be accepted.
if publish_time >= price_feed.publish_time {
    price_feed.prev_publish_time = price_feed.publish_time;
    price_feed.price = price;
    price_feed.conf = conf;
    price_feed.expo = expo;
    price_feed.publish_time = publish_time;

    PRICE_FEED.save(deps.storage, &price_feed)?;
```
Preserve prev_publish_time on same-timestamp overwrites.
Equal publish_time updates are intentionally accepted, but this branch also advances prev_publish_time. After a same-timestamp overwrite, prev_publish_time becomes equal to publish_time, so the contract loses the last distinct observation time.
Proposed fix
```diff
 if publish_time >= price_feed.publish_time {
-    price_feed.prev_publish_time = price_feed.publish_time;
+    if publish_time > price_feed.publish_time {
+        price_feed.prev_publish_time = price_feed.publish_time;
+    }
     price_feed.price = price;
     price_feed.conf = conf;
     price_feed.expo = expo;
     price_feed.publish_time = publish_time;

     PRICE_FEED.save(deps.storage, &price_feed)?;
 }
```

Based on learnings, same publish_time Pyth attestations intentionally use last-on-chain-wins semantics and may overwrite each other.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// Update contract storage for same or newer timestamps.
// Pyth may submit multiple prices within the same timestamp
// but from different slots — all should be accepted.
if publish_time >= price_feed.publish_time {
    if publish_time > price_feed.publish_time {
        price_feed.prev_publish_time = price_feed.publish_time;
    }
    price_feed.price = price;
    price_feed.conf = conf;
    price_feed.expo = expo;
    price_feed.publish_time = publish_time;

    PRICE_FEED.save(deps.storage, &price_feed)?;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/pyth/src/contract.rs` around lines 257 - 267, The branch that
updates PRICE_FEED currently sets price_feed.prev_publish_time =
price_feed.publish_time even when publish_time == price_feed.publish_time, which
causes prev_publish_time to lose the last distinct observation; change the logic
in the if block (the publish_time >= price_feed.publish_time check) so that you
still accept equal publish_time updates for price, conf and expo, but only set
price_feed.prev_publish_time = price_feed.publish_time when publish_time >
price_feed.publish_time (strictly greater); then assign
price_feed.price/conf/expo/publish_time and call PRICE_FEED.save(deps.storage,
&price_feed)? as before.
```diff
 module pkg.akt.dev/node/v2

-go 1.25.5
+go 1.25.9
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify toolchain pinning across CI/release config for Go 1.25.9
set -euo pipefail
echo "== Files that pin Go versions =="
rg -n --iglob '*{yml,yaml,toml,json,Dockerfile,md}' 'go-version|golang:|GO_VERSION|toolchain|setup-go|actions/setup-go|1\.25\.9|1\.25'
echo
echo "== go.mod/go.work directives =="
rg -n '^\s*go\s+[0-9]+\.[0-9]+(\.[0-9]+)?|^\s*toolchain\s+go[0-9]+\.[0-9]+(\.[0-9]+)?' --iglob 'go.mod' --iglob 'go.work'
```

Repository: akash-network/node
Length of output: 407
🏁 Script executed:
```bash
find .github/workflows -type f -name "*.yml" -o -name "*.yaml" 2>/dev/null | head -20
```

Repository: akash-network/node
Length of output: 315
🏁 Script executed:
```bash
find . -type f \( -name "Dockerfile*" -o -name "docker-compose*" \) | head -20
```

Repository: akash-network/node
Length of output: 130
🏁 Script executed:
rg -l "setup-go|go-version|golang:" . --type yaml --type yml 2>/dev/null | head -20Repository: akash-network/node
Length of output: 44
🏁 Script executed:
```bash
cat .github/workflows/*.yml .github/workflows/*.yaml 2>/dev/null | grep -A 5 -B 5 "go-version\|setup-go\|golang\|Go" | head -100
```

Repository: akash-network/node
Length of output: 995
🏁 Script executed:
```bash
cat .github/workflows/tests.yaml
```

Repository: akash-network/node
Length of output: 6938
🏁 Script executed:
```bash
cat .github/workflows/release.yaml
```

Repository: akash-network/node
Length of output: 3506
🏁 Script executed:
rg "golang:" . --type dockerfile -A 2 -B 2Repository: akash-network/node
Length of output: 97
🏁 Script executed:
rg "FROM" _build/test.DockerfileRepository: akash-network/node
Length of output: 111
🏁 Script executed:
```bash
cat .github/actions/setup-ubuntu/action.yml
```

Repository: akash-network/node
Length of output: 134
🏁 Script executed:
```bash
cat .github/actions/setup-ubuntu/setup.sh 2>/dev/null || cat .github/actions/setup-ubuntu/*.sh 2>/dev/null || ls -la .github/actions/setup-ubuntu/
```

Repository: akash-network/node
Length of output: 234
🏁 Script executed:
```text
web_search
What Go version is included in golang:alpine latest image?
```

Repository: akash-network/node
Length of output: 156
🏁 Script executed:
```bash
cat .github/actions/setup-ubuntu/action.yaml
```

Repository: akash-network/node
Length of output: 1418
🌐 Web query:
golang:alpine Docker image Go version latest
💡 Result:
The official Docker image golang:alpine (latest Alpine variant) uses Go version 1.26.2. This is confirmed by the Dockerfile for the 1.26/alpine3.23 variant, which sets ENV GOLANG_VERSION 1.26.2, and Docker Hub tags listing 1.26-alpine3.23, 1-alpine3.23, alpine3.23, 1.26-alpine, 1-alpine, alpine all pointing to Go 1.26 (specifically 1.26.2). The alpine tag aliases to the latest stable Go on the latest Alpine (3.23), distinct from the non-Alpine latest (Debian-based trixie). To verify locally: docker run --rm golang:alpine go version outputs go version go1.26.2 linux/amd64 (or equivalent for your arch).
Citations:
- 1: https://hub.docker.com/_/golang/
- 2: https://hub.docker.com/_/golang?name=alpine&ordering=last_updated&page=1&tab=tags
- 3: https://github.com/docker-library/official-images/blob/master/library/golang
- 4: https://github.com/docker-library/repo-info/blob/master/repos/golang/remote/1-alpine.md
- 5: https://github.com/docker-library/golang/blob/f5af4bcde335960d1ecb2ce7e8693c88e5f0f5ec/1.26/alpine3.23/Dockerfile
🏁 Script executed:
```bash
cat script/tools.sh
```

Repository: akash-network/node
Length of output: 5320
🏁 Script executed:
```bash
./script/tools.sh gotoolchain 2>/dev/null || cat script/tools.sh | grep -A 5 -B 5 "gotoolchain"
```

Repository: akash-network/node
Length of output: 70
Pin golang:alpine to Go 1.25.9 in _build/test.Dockerfile.
The go.mod declares Go 1.25.9, but _build/test.Dockerfile uses the unversioned golang:alpine image (currently Go 1.26.2). This creates a version mismatch. GitHub Actions CI workflows are properly pinned via setup-ubuntu → script/tools.sh gotoolchain → actions/setup-go, but the Docker image must explicitly specify the Go version to match.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@go.mod` at line 3, The Docker test image is unpinned (uses golang:alpine)
while go.mod specifies Go 1.25.9; update _build/test.Dockerfile to use the
matching image tag (e.g., golang:1.25.9-alpine) so the container Go runtime
matches go.mod's "go 1.25.9"; ensure any related build or FROM lines referencing
golang:alpine are replaced with the explicit version to avoid mismatches with
the go.mod declaration.
```diff
 GORELEASER_ARGS := --clean

 ifeq (,$(findstring release,$(BUILDOPTS)))
 	GORELEASER_ARGS += --skip=publish,validate --snapshot
 	GITHUB_TOKEN=
 endif

 ifneq (,$(findstring verbose,$(BUILDOPTS)))
 	GORELEASER_ARGS += --verbose
 endif

 .PHONY: goreleaser
 goreleaser: wasmvm-libs build-contracts
 	docker run \
 		--rm \
 		-e MOD="$(GOMOD)" \
 		-e STABLE=$(IS_STABLE) \
 		-e BUILD_TAGS="$(GORELEASER_TAGS)" \
 		-e BUILD_LDFLAGS="$(GORELEASER_LDFLAGS)" \
 		-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
 		-e GORELEASER_CURRENT_TAG="$(RELEASE_TAG)" \
 		-e DOCKER_IMAGE=$(RELEASE_DOCKER_IMAGE) \
 		-e GOPATH=/go \
 		-e GOTOOLCHAIN="$(GOTOOLCHAIN)" \
 		-e GOWORK="$(GORELEASER_GOWORK)" \
 		-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
 		-e GOPATH=/go \
 		-v /var/run/docker.sock:/var/run/docker.sock \
 		-v $(GOPATH):/go \
 		-v $(AKASH_ROOT):/go/src/$(GORELEASER_MOD_MOUNT) \
 		-w /go/src/$(GORELEASER_MOD_MOUNT) \
 		$(GORELEASER_IMAGE) \
-		-f .goreleaser-test-bins.yaml \
-		--verbose=$(GORELEASER_VERBOSE) \
-		--clean \
-		--skip=publish,validate \
-		--snapshot
+		-f "$(GORELEASER_CONFIG)" \
+		release \
+		$(GORELEASER_ARGS)
```
Keep forwarding GORELEASER_SKIP in the shared target.
GORELEASER_SKIP is still assembled earlier in the file, but this command never appends it anymore. make … GORELEASER_SKIP=... becomes a no-op after this refactor.
Suggested fix
```diff
 		$(GORELEASER_IMAGE) \
 		-f "$(GORELEASER_CONFIG)" \
 		release \
-		$(GORELEASER_ARGS)
+		$(GORELEASER_ARGS) \
+		$(GORELEASER_SKIP)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```makefile
GORELEASER_ARGS := --clean

ifeq (,$(findstring release,$(BUILDOPTS)))
	GORELEASER_ARGS += --skip=publish,validate --snapshot
	GITHUB_TOKEN=
endif

ifneq (,$(findstring verbose,$(BUILDOPTS)))
	GORELEASER_ARGS += --verbose
endif

.PHONY: goreleaser
goreleaser: wasmvm-libs build-contracts
	docker run \
		--rm \
		-e MOD="$(GOMOD)" \
		-e STABLE=$(IS_STABLE) \
		-e BUILD_TAGS="$(GORELEASER_TAGS)" \
		-e BUILD_LDFLAGS="$(GORELEASER_LDFLAGS)" \
		-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
		-e GORELEASER_CURRENT_TAG="$(RELEASE_TAG)" \
		-e DOCKER_IMAGE=$(RELEASE_DOCKER_IMAGE) \
		-e GOTOOLCHAIN="$(GOTOOLCHAIN)" \
		-e GOWORK="$(GORELEASER_GOWORK)" \
		-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
		-e GOPATH=/go \
		-v /var/run/docker.sock:/var/run/docker.sock \
		-v $(GOPATH):/go \
		-v $(AKASH_ROOT):/go/src/$(GORELEASER_MOD_MOUNT) \
		-w /go/src/$(GORELEASER_MOD_MOUNT) \
		$(GORELEASER_IMAGE) \
		-f "$(GORELEASER_CONFIG)" \
		release \
		$(GORELEASER_ARGS) \
		$(GORELEASER_SKIP)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@make/releasing.mk` around lines 55 - 88, The GORELEASER_SKIP value is built
earlier but never forwarded into GORELEASER_ARGS, so make invocations with
GORELEASER_SKIP become no-ops; fix this by appending GORELEASER_SKIP into the
argument list before use (e.g. add a line like GORELEASER_ARGS +=
$(GORELEASER_SKIP) above or inside the goreleaser target) so the goreleaser
target uses GORELEASER_ARGS (and optionally also export/pass GORELEASER_SKIP to
the docker environment if you need it inside the container).
```diff
 func (k *keeper) BeginBlocker(ctx sdk.Context) error {
 	start := telemetry.Now()
-	defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyBeginBlocker)
+	defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyEndBlocker)
```
BeginBlocker is emitting under the wrong telemetry key.
Line 14 records BeginBlocker duration using telemetry.MetricKeyEndBlocker, which will skew blocker-level metrics/dashboards.
🛠️ Proposed fix
```diff
-	defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyEndBlocker)
+	defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyBeginBlocker)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyBeginBlocker)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/epochs/keeper/abci.go` at line 14, The telemetry call in BeginBlocker is
using the wrong metric key: change the deferred call
telemetry.ModuleMeasureSince(types.ModuleName, start,
telemetry.MetricKeyEndBlocker) (in the BeginBlocker function) to use
telemetry.MetricKeyBeginBlocker so the duration is recorded under the
BeginBlocker metric; leave types.ModuleName and start unchanged.
```go
	iters := make([]storetypes.Iterator, 0, len(states))
	defer func() {
		for _, it := range iters {
			_ = it.Close()
		}
	}()

	var idx int

	for idx = range states {
		state := types.State(states[idx])

		if idx > 0 {
			req.Pagination.Key = nil
			startKey = nil
		}

		if startKey == nil {
			req.State = state.String()

			searchPrefix = BuildSearchPrefix(AccountPrefix, req.State, req.XID)
		}

		searchStore := prefix.NewStore(ctx.KVStore(k.skey), searchPrefix)
		iter := searchStore.Iterator(startKey, nil)
		iters = append(iters, iter)

		count := uint64(0)

		for ; iter.Valid() && req.Pagination.Limit > 0; iter.Next() {
			id, _, err := ParseAccountKey(append(searchPrefix, iter.Key()...))
			if err != nil {
				return nil, status.Error(codes.Internal, err.Error())
			}

			acc := types.Account{ID: id}

			if err := k.cdc.Unmarshal(iter.Value(), &acc.State); err != nil {
				return nil, status.Error(codes.Internal, err.Error())
			}

			accounts = append(accounts, acc)
			req.Pagination.Limit--
			count++
		}

		total += count

		// Page full and more items exist -- encode NextKey for continuation
		if iter.Valid() && req.Pagination.Limit == 0 {
			var err error
			nextKey, err = query.EncodePaginationKey(states[idx:], searchPrefix, iter.Key(), nil)
			if err != nil {
				return nil, status.Error(codes.Internal, err.Error())
			}

			break
		}
	}

	return &etypes.QueryAccountsResponse{
		Accounts: accounts,
		Pagination: &sdkquery.PageResponse{
			Total:   total,
			NextKey: nextKey,
		},
	}, nil
```
🧩 Analysis chain
🌐 Web query:
In the Cosmos SDK version used by this repository, what behaviors are expected for query.PageRequest fields key, offset, count_total, and reverse, and does types/query.FilteredPaginate preserve those semantics?
💡 Result:
In Cosmos SDK, the query.PageRequest fields have the following expected behaviors, as defined in the protobuf structure across recent versions including main branch (post-2025 updates):

- key: A value returned from PageResponse.next_key to start the next page query efficiently using key-based (cursor) pagination. Only one of key or offset should be set. When key is set, pagination uses optimized querying (e.g., an iterator starting from key).
- offset: A numeric offset for pagination when key is unavailable. Less efficient than key. Only one of offset or key should be set. When offset is used, pagination scans from the beginning and skips to the offset.
- count_total: When true, signals to include the total count of items in PageResponse.total for UI pagination. Only respected when offset is used; ignored when key is set.
- reverse: When true, results are returned in descending order.

These semantics are standard across Cosmos SDK versions using this proto definition. types/query.FilteredPaginate (likely referring to query.FilteredPaginate or collections FilteredPaginate) is a helper for paginating iterators with an optional filter predicate. It aims to preserve PageRequest semantics but has known issues in some versions:

- There was a bug (fixed Jan 2026 in #25718 after issue #25665) where offset was applied before filtering, causing incorrect pagination (e.g., skipping non-matching items before offset, leading to duplicate pages). Post-fix, offset applies to filtered results.
- Earlier issues with count_total including unfiltered totals (e.g., #16661).

Since today's date is 2026-04-27, recent versions (main branch) preserve the standard semantics after these fixes, using iterator-based pagination that respects key (start after key), offset (skip filtered items), reverse (via reverse iterators), and count_total (counts filtered matches when using offset). For key-based pagination with filters, it typically iterates from key and applies the filter, preserving efficiency where possible. The exact repository version is unspecified, but standard Cosmos SDK behavior matches these proto expectations, with FilteredPaginate preserving them post-fixes.
Citations:
- 1: https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/query
- 2: https://github.com/cosmos/cosmos-sdk/blob/main/types/query/pagination.pb.go
- 3: https://docs.rs/cosmos-sdk-proto/0.26.1/cosmos_sdk_proto/cosmos/base/query/v1beta1/struct.PageRequest.html
- 4: https://pkg.go.dev/cosmossdk.io/api/cosmos/base/query/v1beta1
- 5: Incorrect pagination with predicateFunc in CollectionFilteredPaginate cosmos/cosmos-sdk#25665
- 6: feat: add query.GenericFilteredPaginated cosmos/cosmos-sdk#12253
- 7: Add filtered pagination cosmos/cosmos-sdk#6514
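As a small illustration of the key-vs-offset rule above, a cursor-style follow-up request in Go; `prevRes` and the helper are hypothetical, but the `query.PageRequest` fields are the real Cosmos SDK type:

```go
import "github.com/cosmos/cosmos-sdk/types/query"

// nextPageRequest builds a cursor-style follow-up request. Set only one
// of Key or Offset; CountTotal is honored only for offset-based requests,
// so it is left false here.
func nextPageRequest(prevRes *query.PageResponse) *query.PageRequest {
	return &query.PageRequest{
		Key:   prevRes.NextKey, // cursor returned by the previous page
		Limit: 100,
	}
}
```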
🏁 Script executed:
```bash
# First, find the file
fd grpc_query.go | grep escrow
```

Repository: akash-network/node
Length of output: 92
🏁 Script executed:
```bash
# Read the relevant sections of the file
wc -l x/escrow/keeper/grpc_query.go
```

Repository: akash-network/node
Length of output: 96
🏁 Script executed:
```bash
# Look at lines 79-143 and surrounding context
sed -n '70,150p' x/escrow/keeper/grpc_query.go | cat -n
```

Repository: akash-network/node
Length of output: 2594
🏁 Script executed:
```bash
# Also check lines 197-261
sed -n '188,270p' x/escrow/keeper/grpc_query.go | cat -n
```

Repository: akash-network/node
Length of output: 2584
🏁 Script executed:
```bash
# Search for any pagination field usage (Offset, Reverse, CountTotal)
rg -n "Offset|Reverse|CountTotal" x/escrow/keeper/grpc_query.go
```

Repository: akash-network/node
Length of output: 44
🏁 Script executed:
```bash
# Search for FilteredPaginate usage in the codebase
rg -n "FilteredPaginate" --type go
```

Repository: akash-network/node
Length of output: 604
🏁 Script executed:
```bash
# Get exact line numbers for both functions
grep -n "func (k Querier) Accounts\|func (k Querier) Payments" x/escrow/keeper/grpc_query.go
```

Repository: akash-network/node
Length of output: 309
🏁 Script executed:
```bash
# Check if there are any TODOs or comments about pagination limitations
sed -n '1,150p' x/escrow/keeper/grpc_query.go | grep -i "pagination\|offset\|reverse\|count" | head -20
```

Repository: akash-network/node
Length of output: 1065
🏁 Script executed:
```bash
# Verify the request structure to confirm pagination fields are available
rg -A 10 "type QueryAccountsRequest" x/escrow --type go
```

Repository: akash-network/node
Length of output: 44
🏁 Script executed:
```bash
# Find the proto file for QueryAccountsRequest
fd "\.proto$" | xargs grep -l "QueryAccountsRequest"
```

Repository: akash-network/node
Length of output: 46
🏁 Script executed:
```bash
# Check what pagination fields are actually available in the request
fd "\.proto$" x/escrow | xargs grep -A 15 "message QueryAccountsRequest"
```

Repository: akash-network/node
Length of output: 46
🏁 Script executed:
```bash
# Also verify the structure by looking at generated code
rg -A 5 "QueryAccountsRequest struct" --type go
```

Repository: akash-network/node
Length of output: 44
🏁 Script executed:
```bash
# Use git to find proto files
git ls-files | grep "\.proto$" | grep escrow
```

Repository: akash-network/node
Length of output: 44
🏁 Script executed:
```bash
# Search for PageRequest usage in the escrow keeper
rg "PageRequest" x/escrow/keeper/grpc_query.go -B 2 -A 2
```

Repository: akash-network/node
Length of output: 217
🏁 Script executed:
# Check if there's any documentation about why offset/reverse are not supported
rg -i "not.support\|unsupport\|offset.*not\|reverse.*not" x/escrow/Repository: akash-network/node
Length of output: 44
Silently ignoring unsupported pagination modes causes incorrect results.
These loops honor only Key and Limit. Offset, Reverse, and CountTotal are ignored, meaning callers specifying these modes receive incorrect data:
- `Offset` requests bypass the offset entirely
- `Reverse` requests return items in forward order
- `CountTotal` requests get partial totals (current page only) instead of the full count
This violates Cosmos SDK PageRequest semantics. Add explicit validation to reject unsupported modes rather than silently serving incorrect results:
Recommended validation
```diff
+	if req.Pagination.Offset != 0 || req.Pagination.Reverse || req.Pagination.CountTotal {
+		return nil, status.Error(
+			codes.InvalidArgument,
+			"offset, reverse, and count_total pagination are not supported by this query path",
+		)
+	}
```

Also applies to: Payments at line 146.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/escrow/keeper/grpc_query.go` around lines 79 - 143, The pagination handling
only supports Key and Limit but silently ignores Offset, Reverse and CountTotal;
add explicit validation at the start of the query handler(s) (the accounts loop
using req.Pagination and the analogous Payments handler) to reject unsupported
modes: if req.Pagination is non-nil and (req.Pagination.Offset != 0 ||
req.Pagination.Reverse || req.Pagination.CountTotal) return a gRPC
status.Error(codes.InvalidArgument, "<brief message>") (or similar) so callers
receive an error instead of incorrect results; place this check before building
searchPrefix/iterators (before using req.Pagination.Limit in the loop) and
reference the req.Pagination symbol in both the Accounts and Payments handlers
when implementing the validation.
```go
	// timestamp of new datapoint must be newer than existing
	// if this is the first data point, then it should be not older than 2 blocks back
	latestID, err := k.loadLatestPriceID(ctx, types.PriceDataID{
		Source:    sourceID,
		Denom:     id.Denom,
		BaseDenom: id.BaseDenom,
	})
	if err != nil {
		if !errors.Is(err, collections.ErrNotFound) {
			return err
		}

		// applies only to very first record.
		// set here reasonable window
		latestID.Timestamp = ctx.BlockTime().Add(-(time.Second * 12)) // fixme should be parameter
	}

	if latestID.Timestamp.After(timestamp) {
		return errorsmod.Wrap(
			sdkerrors.ErrInvalidRequest,
			"price timestamp is too old",
		)
	}
```
Don’t hard-code a 12s freshness floor for the first sample.
For a brand-new (source, denom, baseDenom) pair, or after latestPriceID is cleared, the first submission is rejected whenever timestamp < blockTime-12s. That bypasses the module’s configured staleness policy and can drop otherwise valid oracle updates.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/oracle/keeper/keeper.go` around lines 197 - 217, The code hard-codes a 12s
freshness floor for the first sample by setting latestID.Timestamp =
ctx.BlockTime().Add(-(time.Second * 12)); replace this with the module's
configured staleness window: when k.loadLatestPriceID returns
collections.ErrNotFound treat it as the first record and compute the floor from
the oracle params (e.g. read the staleness window from
k.GetParams(ctx).<StalenessField> or the appropriate params accessor) and set
latestID.Timestamp = ctx.BlockTime().Add(-stalenessWindow) so the check
latestID.Timestamp.After(timestamp) uses the configured policy rather than a
fixed 12s. Ensure you reference and use the existing params accessor on keeper k
(or the correct params field name) rather than introducing a new hard-coded
constant.
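A hedged sketch of the parameterized floor; `MaxPriceStalenessPeriod` is assumed to be the relevant duration field on the oracle params:

```go
if errors.Is(err, collections.ErrNotFound) {
	params, perr := k.GetParams(ctx)
	if perr != nil {
		return perr
	}
	// First record for this (source, denom, baseDenom): floor the
	// freshness window at the configured staleness period, not a fixed 12s.
	latestID.Timestamp = ctx.BlockTime().Add(-params.MaxPriceStalenessPeriod)
}
```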
| if totalWeight == 0 { | ||
| return sdkmath.LegacyZeroDec(), types.ErrTWAPZeroWeight | ||
| // All data points share the same timestamp (or there is exactly one | ||
| // point at 'now'). Fall back to a simple average so these sources | ||
| // still contribute to the TWAP rather than being silently dropped. | ||
| sum := sdkmath.LegacyZeroDec() | ||
| for _, dp := range dataPoints { | ||
| sum = sum.Add(dp.State.Price) | ||
| } | ||
| return sum.Quo(sdkmath.LegacyNewDec(int64(len(dataPoints)))), nil |
Use the newest same-timestamp sample instead of averaging superseded ones.
When same-publish_time resubmissions land in the current block, totalWeight is 0 and this branch averages every sample together. Because dataPoints is already newest-first, the fallback should return the first entry; averaging reintroduces superseded prices.
Suggested fix
```diff
 	if totalWeight == 0 {
-		// All data points share the same timestamp (or there is exactly one
-		// point at 'now'). Fall back to a simple average so these sources
-		// still contribute to the TWAP rather than being silently dropped.
-		sum := sdkmath.LegacyZeroDec()
-		for _, dp := range dataPoints {
-			sum = sum.Add(dp.State.Price)
-		}
-		return sum.Quo(sdkmath.LegacyNewDec(int64(len(dataPoints)))), nil
+		return dataPoints[0].State.Price, nil
 	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	if totalWeight == 0 {
		return dataPoints[0].State.Price, nil
	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/oracle/keeper/keeper.go` around lines 655 - 663, The fallback branch in the
TWAP computation (in keeper.go around the block referencing totalWeight and
dataPoints) currently averages all same-timestamp samples, which can reintroduce
superseded prices; change the fallback to return the newest same-timestamp
sample instead of averaging — i.e., when totalWeight == 0 return the first
dataPoints entry's price (dataPoints[0].State.Price) as the TWAP (ensure
dataPoints is non-empty before accessing it), rather than computing the mean.
```go
	for _, key := range toDelete {
		if err := k.prices.Remove(ctx, key); err != nil {
			return count, err
		}
```
Refresh latestPriceID when pruning deletes a prefix’s newest record.
This loop only removes rows from k.prices. If the latest sample for a (source, denom, baseDenom) prefix is also older than cutoff, the prune leaves latestPriceID pointing at a record that no longer exists.
Suggested fix
```diff
 	for _, key := range toDelete {
 		if err := k.prices.Remove(ctx, key); err != nil {
 			return count, err
 		}
 	}
+
+	latestKey := types.PriceDataID{
+		Source:    source,
+		Denom:     denom,
+		BaseDenom: baseDenom,
+	}
+	if latest, err := k.latestPriceID.Get(ctx, latestKey); err == nil && !latest.Timestamp.After(cutoff) {
+		if err := k.latestPriceID.Remove(ctx, latestKey); err != nil {
+			return count, err
+		}
+	}

 	return count, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/oracle/keeper/prune.go` around lines 106 - 109, When deleting keys in the
for loop over toDelete, ensure you refresh the in-memory/latest index for that
(source, denom, baseDenom) prefix if you just removed its newest sample: after
k.prices.Remove(ctx, key) succeeds, check whether the deleted key equals the
stored latestPriceID for that prefix (use the same prefix extraction logic used
elsewhere), and if so query k.prices for the next-most-recent remaining record
for that prefix (or clear the entry) and update latestPriceID accordingly so it
never points at a removed record.
Signed-off-by: Artur Troian <troian@users.noreply.github.com>
Description
Closes: #XXXX
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow-up issues.
I have...
- added `!` to the type prefix if API or client breaking change
- added a changelog entry to `CHANGELOG.md`