
refactor: rename skills dir#2063

Closed
troian wants to merge 1 commit into main from skills

Conversation


@troian troian commented Apr 27, 2026

Description

Closes: #XXXX


Author Checklist

All items are required. If an item does not apply, add a note explaining why, and link any relevant follow-up issues.

I have...

  • included the correct type prefix in the PR title
  • added ! to the type prefix if API or client breaking change
  • targeted the correct branch (see PR Targeting)
  • provided a link to the relevant issue or specification
  • included the necessary unit and integration tests
  • added a changelog entry to CHANGELOG.md
  • included comments for documenting Go code
  • updated the relevant documentation or specification
  • reviewed "Files changed" and left comments if necessary
  • confirmed all CI checks have passed


coderabbitai Bot commented Apr 27, 2026

Walkthrough

This PR introduces Oracle module v2 with timestamp-based price storage replacing block-height indexing; establishes a new v2.1.0 upgrade with deployment and oracle migrations; removes the deprecated v2.0.0 upgrade; refactors BME ledger sequencing to use transient storage; updates the Pyth contract to v1.0.1; and modernizes build/release infrastructure while aligning test utilities and configuration examples.

Changes

Cohort / File(s) Summary
Oracle Module v1→v2 Migration
x/oracle/keeper/keeper.go, x/oracle/keeper/abci.go, x/oracle/keeper/codec.go, x/oracle/keeper/key.go, x/oracle/keeper/grpc_query.go, x/oracle/keeper/grpc_query_test.go, x/oracle/keeper/timestamp.go, x/oracle/keeper/timestamp_test.go, x/oracle/keeper/prune.go, x/oracle/alias.go, x/oracle/module.go, x/oracle/handler/server.go
Switches from block-height to timestamp-based price storage; refactors TWAP logic to work with time windows; adds transient per-DataID sequencing; implements epoch-driven price pruning; updates codec with legacy support and new timestamp serialization; changes aggregation to use calculateAggregatedPricesFromHistory; increments consensus version to 2.
Oracle Test Updates
tests/e2e/oracle_cli_test.go, tests/e2e/oracle_grpc_test.go, tests/e2e/pyth_contract_test.go, x/oracle/simulation/genesis.go, x/oracle/simulation/proposals.go, testutil/oracle/price_feeder.go
Updates imports and test code to use oracle/v2 types; adjusts staleness parameter from max_price_staleness_blocks to max_price_staleness_period; removes intermediate PriceDataState construction in test fixtures.
v2.0.0 Upgrade Removal
upgrades/software/v2.0.0/deployment.go, upgrades/software/v2.0.0/init.go, upgrades/software/v2.0.0/upgrade.go, upgrades/software/v2.0.0/wasm.go
Deletes the complete v2.0.0 upgrade implementation including denom migrations, WASM contract instantiation, oracle parameter setup, and fund transfers; total removal of 701 lines.
v2.1.0 Upgrade Implementation
upgrades/software/v2.1.0/init.go, upgrades/software/v2.1.0/upgrade.go, upgrades/software/v2.1.0/deployment.go, upgrades/software/v2.1.0/oracle.go
Introduces new v2.1.0 upgrade with deployment migration (clears pending denom migrations), oracle migration (clears KV store), and upgrade handler (migrates contracts, updates oracle params, adjusts BME settings, transfers escrow deposits for akashnet-2).
Upgrade Infrastructure
upgrades/upgrades.go, meta.json, tests/upgrade/test-cases.json, upgrades/CHANGELOG.md, script/upgrades.sh, script/wasm2go.sh
Switches upgrade registration from v2.0.0 to v2.1.0; adds v2.1.0 metadata with deployment v7→v8 and oracle v1→v2 migrations; updates upgrade test cases; adjusts script state snapshot handling and wasm code generation paths.
BME Module Refactoring
x/bme/keeper/keeper.go, x/bme/keeper/abci.go, x/bme/keeper/key.go, x/bme/keeper/burn_mint_test.go, x/bme/keeper/grpc_query.go, testutil/state/suite.go, x/bme/simulation/genesis.go, x/bme/simulation/proposals.go
Migrates ledger sequencing from in-memory scalar to transient collections.Item; adds error-to-cancel-reason mapping with fatal error classification; updates BeginBlocker to use transient store; extends burn/mint error handling with retry logic and max-attempts cancellation; updates BME keeper test coverage.
Deployment Module Updates
x/deployment/keeper/abci.go, x/deployment/keeper/keeper.go, x/deployment/keeper/grpc_query.go, x/deployment/module.go
Removes pending denom migration processing from EndBlocker; removes AddPendingDenomMigration method; increments ConsensusVersion from 7 to 8; improves error message formatting in params query.
Pyth Contract Upgrade
contracts/pyth/Cargo.toml, contracts/pyth/src/contract.rs, contracts/pyth/src/oracle.rs, contracts/pyth/src/integration_tests.rs
Bumps version from 1.0.0 to 1.0.1; updates to oracle v2 message types (/akash.oracle.v2.MsgAddPriceEntry); flattens MsgAddPriceEntry from nested PriceDataState to direct fields; allows equal publish_time values; updates test assertions to match new message structure.
Escrow Query Pagination
x/escrow/keeper/grpc_query.go
Refactors pagination from sdkquery.FilteredPaginate to direct KV store iterators with explicit loop control; improves pagination-key validation and bounds checking; updates next-key computation using query.EncodePaginationKey.
Market Module Updates
x/market/keeper/keeper.go, x/market/keeper/grpc_query.go
Refactors OnGroupClosed to use callback-based order iteration with centralized error handling; splits OrderActive/OrderOpen bid processing into separate passes; improves params query error message formatting.
Wasm Bindings & Filtering
x/wasm/bindings/akash_query.go, x/wasm/keeper/msg_filter.go
Updates OracleParams struct field from MaxPriceStalenessBlocks to MaxPriceStalenessPeriod; updates message filter to only allow oracle v2 MsgAddPriceEntry.
Build & Release Infrastructure
.github/workflows/release.yaml, make/releasing.mk, go.mod, Cargo.toml
Consolidates goreleaser Docker invocations into unified target with BUILDOPTS-driven argument construction; updates release changelog path handling; bumps Go version to 1.25.9; updates cosmos-sdk to v0.53.6; downgrades Rust toolchain requirement from 1.93.0 to 1.86.0.
Configuration & Documentation
_docs/pyth-integration.md, _run/init.sh, _run/node/deployment.yaml, _run/node/price-feeder.sh, _run/node/prop.json
Updates documentation and example configs to oracle v2 with max_price_staleness_period, retention settings, and pruning parameters; changes Pyth update_fee from 1000000 to 1000; refactors oracle registration to accept admin_addr and submit combined governance proposal; updates price feeder defaults to local chain and akt denom checking.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Oracle Keeper
    participant Transient Store
    participant Price Storage
    participant Epoch Hooks
    participant Price Pruning

    Client->>Oracle Keeper: AddPriceEntry(price, timestamp)
    Oracle Keeper->>Transient Store: Increment sequence for DataID
    Oracle Keeper->>Price Storage: Store price with timestamp + sequence
    Oracle Keeper->>Oracle Keeper: Update latest price ID (timestamp, sequence)
    
    Note over Client,Price Pruning: Price Aggregation
    Client->>Oracle Keeper: Query aggregated prices
    Oracle Keeper->>Price Storage: getTWAPHistory(timestamp range)
    Oracle Keeper->>Oracle Keeper: calculateAggregatedPricesFromHistory
    Oracle Keeper-->>Client: Return TWAP + health

    Note over Client,Price Pruning: Epoch-Driven Pruning
    Epoch Hooks->>Oracle Keeper: AfterEpochEnd (when PruneEpoch matches)
    Oracle Keeper->>Oracle Keeper: Compute cutoff time = blockTime - PriceRetention
    Oracle Keeper->>Price Storage: prunePrices(cutoff, maxPerEpoch)
    Price Storage-->>Oracle Keeper: Delete old records, return count
    Oracle Keeper->>Epoch Hooks: Log pruning results
sequenceDiagram
    participant Pyth Contract
    participant Contract Storage
    participant Oracle Message
    participant Keeper
    participant Cosmos Bank

    Pyth Contract->>Contract Storage: Receive update with new publish_time
    Contract Storage->>Contract Storage: Load stored PRICE_FEED
    
    alt publish_time < stored publish_time
        Contract Storage->>Contract Storage: Reject (stale)
    else publish_time >= stored publish_time
        Contract Storage->>Contract Storage: Update PRICE_FEED
        Contract Storage->>Oracle Message: Build MsgAddPriceEntry (v2)
        Oracle Message->>Keeper: AddPriceEntry(price, timestamp)
        Keeper->>Cosmos Bank: Execute price update
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 From heights of blocks to moments in time,
The oracle now dances in rhythm sublime!
With timestamps encoded and pruning so keen,
The v2 migration's a beautiful scene,
Where ledgers skip memory for stores most serene. ⏰✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Title check — ⚠️ Warning: The PR title "refactor: rename skills dir" does not match the extensive changes across the oracle, BME, market, deployment, and other modules documented in the commit messages and file changes. Resolution: update the title to reflect the major changes, e.g. "refactor: oracle v2 migration with time-based pricing, BME improvements, and module updates", or provide a more comprehensive summary of the primary changes.
  • Description check — ❓ Inconclusive: The PR description contains only the default template with author checklist items and a placeholder issue reference; the author provided no actual description of the changes or their purpose. Resolution: replace the template content with a meaningful description explaining the PR's objectives, key changes, and impact across the affected modules.

✅ Passed checks (3 passed)

  • Docstring Coverage — ✅ Passed: docstring coverage is 89.47%, above the required 80.00% threshold.
  • Linked Issues check — ✅ Passed: skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: skipped because no linked issues were found for this pull request.




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
_run/node/price-feeder.sh (1)

80-86: ⚠️ Potential issue | 🟠 Major

Denom change inconsistent with threshold and log messages.

The balance query now filters for akt instead of uakt, but:

  • Line 82 still uses threshold 100000 (appropriate for micro-units, not whole tokens)
  • Lines 83 and 85 still log with uakt suffix

If the chain now uses akt as the base denom (unlikely for Cosmos SDK chains where micro-units are standard), the threshold needs adjustment. If this is an error, revert to uakt.

🐛 Proposed fix (assuming uakt is correct)
-	balance=$(akash query bank balances "$address" -o json 2>/dev/null | jq -r '.balances[] | select(.denom=="akt") | .amount // "0"')
+	balance=$(akash query bank balances "$address" -o json 2>/dev/null | jq -r '.balances[] | select(.denom=="uakt") | .amount // "0"')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@_run/node/price-feeder.sh` around lines 80 - 86, The denom used in the jq
filter is inconsistent with the threshold and log suffix: update the bank query
so the balance variable matches micro-denom usage (change the jq
select(.denom=="akt") back to select(.denom=="uakt")), leaving the numeric
threshold (100000) and the log calls that append "uakt" unchanged; ensure the
command that sets the balance variable (the akash query bank balances ... | jq
...) and the subsequent log "WARN"/"INFO" calls consistently reference the same
denom.
_docs/pyth-integration.md (1)

696-738: ⚠️ Potential issue | 🟠 Major

Use protobuf duration strings in the v2 proposal example and table.

The repo’s v2 payloads now use duration strings ("60s", "30s", etc.), but this guide still shows raw integers and block-based wording here. Copying this example as-is will give operators stale or invalid guidance.

Suggested doc update
-        "max_price_staleness_period": 60,
-        "twap_window": 180,
+        "max_price_staleness_period": "60s",
+        "twap_window": "<duration>",

-| `max_price_staleness_period` | i64      | Max age in seconds (~6s/block)   | `60`            |
-| `twap_window`                | i64      | TWAP calculation window (blocks) | `180` (~18 min) |
+| `max_price_staleness_period` | duration | Max age before a price is stale  | `"60s"`         |
+| `twap_window`                | duration | TWAP calculation window          | `"<duration>"`  |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@_docs/pyth-integration.md` around lines 696 - 738, The example and table use
integer seconds/blocks for MsgUpdateParams fields max_price_staleness_period and
twap_window but v2 expects protobuf duration strings; update the JSON snippet
and the Oracle Parameters table to use duration strings (e.g.
"max_price_staleness_period": "60s", "twap_window": "180s" or other
human-readable durations) and remove block-based wording, ensuring all
references to these fields in the example and table reflect the duration-string
format required by the v2 payloads.
contracts/pyth/src/oracle.rs (1)

58-72: ⚠️ Potential issue | 🟡 Minor

Add validation for timestamp_nanos in the constructor.

google.protobuf.Timestamp.nanos must be in the range 0..999_999_999. The constructor accepts any i32 value without validation, so callers can construct a message that encodes successfully locally but fails when submitted to the chain. Validate the range before assigning the field.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@contracts/pyth/src/oracle.rs` around lines 58 - 72, The constructor new
currently accepts any i32 for timestamp_nanos; add a range check to ensure
timestamp_nanos is between 0 and 999_999_999 (inclusive) before assigning it to
the struct. Modify new (the constructor for the oracle struct) to validate
timestamp_nanos and return an Err result (or otherwise propagate a clear error)
if the value is out of range, rather than allowing invalid values to be set, so
callers cannot construct messages that will fail on-chain; keep the DataID::new
call and other fields unchanged.
x/oracle/keeper/keeper.go (1)

568-622: ⚠️ Potential issue | 🟠 Major

Base aggregate metadata on the sources that actually produced a TWAP.

This loop skips sources whose calculateTWAP fails, but MedianPrice, MinPrice, MaxPrice, and NumSources are still derived from latestPrices. That can overstate source health and mix failed sources back into the aggregate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/oracle/keeper/keeper.go` around lines 568 - 622, The aggregation currently
computes MedianPrice, MinPrice, MaxPrice, and NumSources from latestPrices even
when some sources failed TWAP; change the code to only include sources that
successfully produced a TWAP: when iterating over latestPrices and calling
calculateTWAP(record) append both the twap and the corresponding source record
(e.g., successfulSources) so later use successfulSources (not latestPrices) as
input to calculateMedian, the min/max loop, and to set aggregated.NumSources;
keep aggregateTWAP computed from twaps as before and use calculateDeviationBps
on the min/max derived from successfulSources, and ensure Timestamp and other
fields remain set as before.
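A minimal Go sketch of the suggested filtering. The `priceRecord` type and the `calculateTWAP` stub are hypothetical stand-ins for the keeper's real types, which are not shown in this review; the point is only that aggregate metadata is derived from the sources that actually produced a TWAP:

```go
package main

import (
	"fmt"
	"sort"
)

// priceRecord is a hypothetical stand-in for the keeper's latest-price record.
type priceRecord struct {
	Source uint32
	Price  float64
}

// calculateTWAP is a placeholder for the keeper's TWAP computation, which
// can fail for sources with insufficient history.
func calculateTWAP(r priceRecord) (float64, error) {
	if r.Price <= 0 {
		return 0, fmt.Errorf("no usable history for source %d", r.Source)
	}
	return r.Price, nil
}

// aggregate keeps only sources whose TWAP succeeded, so median/min/max and
// NumSources come from the same set that produced the aggregate TWAP.
func aggregate(latestPrices []priceRecord) (twaps []float64, successful []priceRecord) {
	for _, rec := range latestPrices {
		twap, err := calculateTWAP(rec)
		if err != nil {
			continue // failed sources contribute nothing to aggregate metadata
		}
		twaps = append(twaps, twap)
		successful = append(successful, rec)
	}
	sort.Float64s(twaps)
	return twaps, successful
}

func main() {
	twaps, ok := aggregate([]priceRecord{{1, 10}, {2, 0}, {3, 12}})
	fmt.Println(len(twaps), len(ok)) // 2 2 (source 2 failed TWAP and is excluded)
}
```

In the real keeper, `successful` (not `latestPrices`) would then feed the median, min/max, and `NumSources` computations.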
🧹 Nitpick comments (4)
x/oracle/keeper/codec.go (1)

373-377: Fix misleading comment.

The comment says "big-endian uint64" but the code correctly uses PutUint32 for the Source field. This inconsistency could confuse future maintainers.

📝 Proposed fix
 func (d priceDataRecordIDLegacyCodec) Encode(buffer []byte, key types.PriceDataRecordID) (int, error) {
 	offset := 0
-	// Write source id as big-endian uint64 (for proper ordering)
+	// Write source id as big-endian uint32 (for proper ordering)
 	binary.BigEndian.PutUint32(buffer, key.Source)
 	offset += 4
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/oracle/keeper/codec.go` around lines 373 - 377, The comment in
priceDataRecordIDLegacyCodec.Encode incorrectly states "big-endian uint64" while
the code uses binary.BigEndian.PutUint32 to write key.Source; update the comment
to accurately describe that Source is written as a big-endian uint32 (or remove
the incorrect size reference) so it matches the implementation (see function
priceDataRecordIDLegacyCodec.Encode and the PutUint32 call writing key.Source).
upgrades/software/v2.1.0/upgrade.go (1)

109-123: Potential unintended error return.

At line 122, return toVM, err returns the err variable, which at this point holds the result from up.Keepers.Akash.Bme.SetParams. However, if MaxPendingAttempts != 0 (the if block is skipped), err still holds the value from the bparams fetch at line 109. Since execution only reaches this return when that fetch succeeded, err is nil in practice, but the logic is subtle.

Consider explicitly returning nil for clarity:

📝 Proposed fix for clarity
 		if bparams.MaxPendingAttempts == 0 {
 			bparams.MaxPendingAttempts = 3
 			err = up.Keepers.Akash.Bme.SetParams(sctx, bparams)
 			if err != nil {
 				return toVM, fmt.Errorf("failed to set bme params: %w", err)
 			}
 		}
 
-		return toVM, err
+		return toVM, nil
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@upgrades/software/v2.1.0/upgrade.go` around lines 109 - 123, The final return
currently returns the variable err which may hold a stale value from
up.Keepers.Akash.Bme.GetParams or SetParams; change the final return in the
function (after the MaxPendingAttempts handling) to explicitly return nil for
the error (i.e., return toVM, nil) so callers don't receive an unintended error
when no error actually occurred; update the return at the end of the block that
follows the bparams logic where bparams, err :=
up.Keepers.Akash.Bme.GetParams(sctx) and the subsequent SetParams call are
handled.
x/oracle/keeper/abci.go (1)

34-38: Consider returning error instead of silently continuing.

When GetParams fails, the function logs and returns nil, effectively skipping all price aggregation. This could silently mask misconfiguration or storage issues. Consider whether failing loudly (returning the error) would be more appropriate for a consensus-critical operation.

♻️ Alternative: return the error
 	params, err := k.GetParams(sctx)
 	if err != nil {
 		sctx.Logger().Error("failed to get oracle params", "error", err)
-		return nil
+		return err
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/oracle/keeper/abci.go` around lines 34 - 38, The handler currently swallows
GetParams errors by logging and returning nil; instead propagate the failure so
consensus can react: replace the current log-and-return-nil behavior after
params, err := k.GetParams(sctx) with returning the error (or a wrapped error)
to the caller (e.g., return fmt.Errorf("GetParams: %w", err) or just return err)
so callers of this function can fail loudly; ensure you update the function
signature/return path if necessary to propagate the error from this function
rather than continuing silently (refer to GetParams and sctx.Logger().Error in
abci.go).
x/oracle/keeper/timestamp.go (1)

87-93: Consider adding input validation in readDigits.

The function assumes all bytes in the buffer are ASCII digits ('0'-'9'). If a malformed buffer is passed (e.g., containing non-digit characters), the result will be silently incorrect rather than returning an error. Since DecodeTimestamp is used for parsing stored keys, malformed data would propagate as invalid timestamps.

♻️ Proposed validation
 func readDigits(buf []byte) int {
 	n := 0
 	for _, b := range buf {
+		if b < '0' || b > '9' {
+			return -1 // or consider returning (int, error)
+		}
 		n = n*10 + int(b-'0')
 	}
 	return n
 }

Then validate the return value in DecodeTimestamp.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/oracle/keeper/timestamp.go` around lines 87 - 93, readDigits currently
assumes every byte is '0'–'9' and returns a silently wrong value on malformed
input; change readDigits to validate each byte (returning an (int, error) pair
or a sentinel error) when a non-digit is encountered and update DecodeTimestamp
to call the new readDigits signature and propagate/handle the error (e.g.,
return a decoding error) so malformed stored keys produce a clear error instead
of incorrect timestamps.
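A self-contained sketch of the `(int, error)` variant the comment proposes; the surrounding `DecodeTimestamp` caller is assumed, not shown:

```go
package main

import (
	"errors"
	"fmt"
)

var errNonDigit = errors.New("non-digit byte in timestamp key")

// readDigits parses an ASCII-decimal buffer, rejecting any byte outside
// '0'..'9' instead of silently producing a wrong value.
func readDigits(buf []byte) (int, error) {
	n := 0
	for _, b := range buf {
		if b < '0' || b > '9' {
			return 0, errNonDigit
		}
		n = n*10 + int(b-'0')
	}
	return n, nil
}

func main() {
	v, err := readDigits([]byte("0042"))
	fmt.Println(v, err) // 42 <nil>
	_, err = readDigits([]byte("12x4"))
	fmt.Println(err != nil) // true: malformed keys now surface as errors
}
```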
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@_run/node/deployment.yaml`:
- Around line 43-45: The deployment pricing denom is set to the wrong token
string: change the 'denom' value under the 'web' pricing block from "uact" back
to "uakt" so it matches the chain's configured DENOM; locate the 'web' pricing
stanza (keys 'web', 'denom', 'amount') in the deployment YAML and update the
denom literal to "uakt".

In `@_run/node/price-feeder.sh`:
- Around line 13-14: The script references AKASH_NODE (e.g., in the status URL
and log "Node: $AKASH_NODE") but the default assignment is commented out,
causing an unbound variable error under set -euo pipefail; fix by reintroducing
a sensible default assignment for AKASH_NODE in price-feeder.sh (for example a
local RPC like http://localhost:26657 or another project-appropriate default) or
otherwise ensure AKASH_NODE is exported before use, and update the top-of-file
assignment so AKASH_NODE="${AKASH_NODE:=<default>}" is present and used by the
status and log references.

In `@contracts/pyth/src/contract.rs`:
- Around line 257-267: The branch that updates PRICE_FEED currently sets
price_feed.prev_publish_time = price_feed.publish_time even when publish_time ==
price_feed.publish_time, which causes prev_publish_time to lose the last
distinct observation; change the logic in the if block (the publish_time >=
price_feed.publish_time check) so that you still accept equal publish_time
updates for price, conf and expo, but only set price_feed.prev_publish_time =
price_feed.publish_time when publish_time > price_feed.publish_time (strictly
greater); then assign price_feed.price/conf/expo/publish_time and call
PRICE_FEED.save(deps.storage, &price_feed)? as before.

In `@go.mod`:
- Line 3: The Docker test image is unpinned (uses golang:alpine) while go.mod
specifies Go 1.25.9; update _build/test.Dockerfile to use the matching image tag
(e.g., golang:1.25.9-alpine) so the container Go runtime matches go.mod's "go
1.25.9"; ensure any related build or FROM lines referencing golang:alpine are
replaced with the explicit version to avoid mismatches with the go.mod
declaration.

In `@make/releasing.mk`:
- Around line 55-88: The GORELEASER_SKIP value is built earlier but never
forwarded into GORELEASER_ARGS, so make invocations with GORELEASER_SKIP become
no-ops; fix this by appending GORELEASER_SKIP into the argument list before use
(e.g. add a line like GORELEASER_ARGS += $(GORELEASER_SKIP) above or inside the
goreleaser target) so the goreleaser target uses GORELEASER_ARGS (and optionally
also export/pass GORELEASER_SKIP to the docker environment if you need it inside
the container).

In `@script/upgrades.sh`:
- Around line 599-601: The unconditional copy cp -r "$srcval/wasm" ./ can cause
the script to exit under set -e if that directory is missing; change it to check
for the wasm directory before copying (e.g., test -d "$srcval/wasm" or [ -e
"$srcval/wasm" ] and only run the cp when present) so the rm -rf "wasm", cp -r
"$srcval/data" ./, and subsequent operations won’t hard fail when $srcval/wasm
is absent.

In `@script/wasm2go.sh`:
- Line 12: Update the script header usage comment to reflect the new target path
for the generated Go file (v2.1.0) so it matches the OUTPUT_FILE value; locate
the OUTPUT_FILE="upgrades/software/v2.1.0/contracts.go" assignment and update
the earlier usage/comment text that still references v2.0.0/contracts.go (and
the similar comment around the other occurrence at the block near line 40) to
say v2.1.0/contracts.go.

In `@testutil/oracle/price_feeder.go`:
- Around line 45-49: The params setup uses raw integers for
MaxPriceStalenessPeriod and TwapWindow which are interpreted as nanoseconds;
update the assignments in the params returned by oraclev1.DefaultParams() (the
params variable where you set params.Sources, params.MinPriceSources) to use
time.Duration literals, e.g. set params.MaxPriceStalenessPeriod and
params.TwapWindow by multiplying the intended seconds by time.Second (e.g. 1000
* time.Second and 10 * time.Second or the correct intended durations) so
downstream uses of these fields (e.g., in keeper logic that calls
now.Add(-params.MaxPriceStalenessPeriod)) operate with the correct time units.

In `@upgrades/CHANGELOG.md`:
- Around line 43-49: The markdown has a heading level mismatch around the
"v2.1.0" and "Description" headings; make the levels consistent (no jumps) by
adjusting the heading markers for either the "v2.1.0" heading or the
"Description" heading so they match the surrounding sections (e.g., use the same
number of # for "v2.1.0" and "Description" or increment/decrement by only one
level) to satisfy markdownlint MD001.

In `@upgrades/software/v2.1.0/oracle.go`:
- Around line 29-42: The current code in oracle.go materializes all keys into
keys before deleting, which can OOM; change to a batched delete loop: iterate
with store.Iterator(nil, nil) but collect at most a fixed batchSize (e.g.
500-1000) into a temporary slice (batchKeys) using bytes.Clone(iter.Key()), then
close the iterator (check iter.Close() error), loop over batchKeys calling
store.Delete(key), and repeat by creating a new iterator starting after the last
deleted key (use the last cloned key as the new start bound) until no more keys
remain; ensure you still check and propagate iter.Close() errors and remove the
single monolithic keys slice usage.
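A sketch of the batched-delete pattern under a simplified store interface; the real migration operates on a Cosmos SDK KVStore with iterators, and `kvStore`/`memStore` here are hypothetical stand-ins used only to illustrate the bounded-memory loop:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// kvStore is a hypothetical minimal stand-in for the KV store used by the migration.
type kvStore interface {
	KeysAfter(start []byte) [][]byte // sorted keys strictly greater than start
	Delete(key []byte)
}

// clearStoreBatched deletes every key in fixed-size batches so the migration
// never materializes the whole key set in memory at once.
func clearStoreBatched(store kvStore, batchSize int) int {
	deleted := 0
	var start []byte
	for {
		var batch [][]byte
		for _, k := range store.KeysAfter(start) {
			batch = append(batch, bytes.Clone(k))
			if len(batch) == batchSize {
				break
			}
		}
		if len(batch) == 0 {
			return deleted
		}
		for _, k := range batch {
			store.Delete(k)
			deleted++
		}
		start = batch[len(batch)-1] // resume the scan after the last deleted key
	}
}

// memStore is an in-memory kvStore used only to exercise the helper.
type memStore struct{ keys map[string]bool }

func (m *memStore) KeysAfter(start []byte) [][]byte {
	var names []string
	for k := range m.keys {
		if start == nil || k > string(start) {
			names = append(names, k)
		}
	}
	sort.Strings(names)
	out := make([][]byte, len(names))
	for i, k := range names {
		out[i] = []byte(k)
	}
	return out
}

func (m *memStore) Delete(key []byte) { delete(m.keys, string(key)) }

func main() {
	s := &memStore{keys: map[string]bool{"a": true, "b": true, "c": true, "d": true, "e": true}}
	fmt.Println(clearStoreBatched(s, 2), len(s.keys)) // 5 0
}
```

In the real code, each batch would come from a fresh `store.Iterator` whose close error is checked before deleting, as the comment describes.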

In `@x/epochs/keeper/abci.go`:
- Line 14: The telemetry call in BeginBlocker is using the wrong metric key:
change the deferred call telemetry.ModuleMeasureSince(types.ModuleName, start,
telemetry.MetricKeyEndBlocker) (in the BeginBlocker function) to use
telemetry.MetricKeyBeginBlocker so the duration is recorded under the
BeginBlocker metric; leave types.ModuleName and start unchanged.

In `@x/escrow/keeper/grpc_query.go`:
- Around line 79-143: The pagination handling only supports Key and Limit but
silently ignores Offset, Reverse and CountTotal; add explicit validation at the
start of the query handler(s) (the accounts loop using req.Pagination and the
analogous Payments handler) to reject unsupported modes: if req.Pagination is
non-nil and (req.Pagination.Offset != 0 || req.Pagination.Reverse ||
req.Pagination.CountTotal) return a gRPC status.Error(codes.InvalidArgument,
"<brief message>") (or similar) so callers receive an error instead of incorrect
results; place this check before building searchPrefix/iterators (before using
req.Pagination.Limit in the loop) and reference the req.Pagination symbol in
both the Accounts and Payments handlers when implementing the validation.

In `@x/oracle/keeper/keeper.go`:
- Around line 197-217: The code hard-codes a 12s freshness floor for the first
sample by setting latestID.Timestamp = ctx.BlockTime().Add(-(time.Second * 12));
replace this with the module's configured staleness window: when
k.loadLatestPriceID returns collections.ErrNotFound treat it as the first record
and compute the floor from the oracle params (e.g. read the staleness window
from k.GetParams(ctx).<StalenessField> or the appropriate params accessor) and
set latestID.Timestamp = ctx.BlockTime().Add(-stalenessWindow) so the check
latestID.Timestamp.After(timestamp) uses the configured policy rather than a
fixed 12s. Ensure you reference and use the existing params accessor on keeper k
(or the correct params field name) rather than introducing a new hard-coded
constant.
- Around line 655-663: The fallback branch in the TWAP computation (in keeper.go
around the block referencing totalWeight and dataPoints) currently averages all
same-timestamp samples, which can reintroduce superseded prices; change the
fallback to return the newest same-timestamp sample instead of averaging — i.e.,
when totalWeight == 0 return the first dataPoints entry's price
(dataPoints[0].State.Price) as the TWAP (ensure dataPoints is non-empty before
accessing it), rather than computing the mean.

In `@x/oracle/keeper/prune.go`:
- Around line 106-109: When deleting keys in the for loop over toDelete, ensure
you refresh the in-memory/latest index for that (source, denom, baseDenom)
prefix if you just removed its newest sample: after k.prices.Remove(ctx, key)
succeeds, check whether the deleted key equals the stored latestPriceID for that
prefix (use the same prefix extraction logic used elsewhere), and if so query
k.prices for the next-most-recent remaining record for that prefix (or clear the
entry) and update latestPriceID accordingly so it never points at a removed
record.
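An in-memory model of the index-refresh invariant the comment asks for. The `series` type is hypothetical; the real code keeps samples and the latest-price ID in separate KV collections, but the invariant is the same: after deleting the newest sample, the latest pointer must be re-derived or cleared:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// series models one (source, denom, baseDenom) price series plus its
// latest-price index, keyed by unix timestamp.
type series struct {
	samples map[int64]bool
	latest  int64
}

// remove deletes a sample and, when it was the series' newest, re-points
// latest at the next-most-recent remaining sample (or clears it), so the
// index never references a pruned record.
func (s *series) remove(ts int64) {
	delete(s.samples, ts)
	if ts != s.latest {
		return
	}
	s.latest = 0
	var remaining []int64
	for t := range s.samples {
		remaining = append(remaining, t)
	}
	if len(remaining) == 0 {
		return
	}
	sort.Slice(remaining, func(i, j int) bool { return remaining[i] > remaining[j] })
	s.latest = remaining[0]
}

func main() {
	now := time.Now().Unix()
	s := &series{samples: map[int64]bool{now - 20: true, now - 10: true, now: true}, latest: now}
	s.remove(now) // pruning the newest sample must refresh the index
	fmt.Println(s.latest == now-10) // true
}
```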

---

Outside diff comments:
In `@_docs/pyth-integration.md`:
- Around line 696-738: The example and table use integer seconds/blocks for
MsgUpdateParams fields max_price_staleness_period and twap_window but v2 expects
protobuf duration strings; update the JSON snippet and the Oracle Parameters
table to use duration strings (e.g. "max_price_staleness_period": "60s",
"twap_window": "180s" or other human-readable durations) and remove block-based
wording, ensuring all references to these fields in the example and table
reflect the duration-string format required by the v2 payloads.

In `@_run/node/price-feeder.sh`:
- Around line 80-86: The denom used in the jq filter is inconsistent with the
threshold and log suffix: update the bank query so the balance variable matches
micro-denom usage (change the jq select(.denom=="akt") back to
select(.denom=="uakt")), leaving the numeric threshold (100000) and the log
calls that append "uakt" unchanged; ensure the command that sets the balance
variable (the akash query bank balances ... | jq ...) and the subsequent log
"WARN"/"INFO" calls consistently reference the same denom.

In `@contracts/pyth/src/oracle.rs`:
- Around line 58-72: The constructor new currently accepts any i32 for
timestamp_nanos; add a range check to ensure timestamp_nanos is between 0 and
999_999_999 (inclusive) before assigning it to the struct. Modify new (the
constructor for the oracle struct) to validate timestamp_nanos and return an Err
result (or otherwise propagate a clear error) if the value is out of range,
rather than allowing invalid values to be set, so callers cannot construct
messages that will fail on-chain; keep the DataID::new call and other fields
unchanged.

In `@x/oracle/keeper/keeper.go`:
- Around line 568-622: The aggregation currently computes MedianPrice, MinPrice,
MaxPrice, and NumSources from latestPrices even when some sources failed TWAP;
change the code to only include sources that successfully produced a TWAP: when
iterating over latestPrices and calling calculateTWAP(record) append both the
twap and the corresponding source record (e.g., successfulSources) so later use
successfulSources (not latestPrices) as input to calculateMedian, the min/max
loop, and to set aggregated.NumSources; keep aggregateTWAP computed from twaps
as before and use calculateDeviationBps on the min/max derived from
successfulSources, and ensure Timestamp and other fields remain set as before.

---

Nitpick comments:
In `@upgrades/software/v2.1.0/upgrade.go`:
- Around line 109-123: The final return currently returns the variable err, which
may hold a stale value from up.Keepers.Akash.Bme.GetParams or SetParams. Change
the final return (after the MaxPendingAttempts handling) to explicitly return
nil for the error (i.e., return toVM, nil) so callers don't receive an
unintended error when none actually occurred. This applies to the return at the
end of the block where bparams, err := up.Keepers.Akash.Bme.GetParams(sctx) and
the subsequent SetParams call are handled.

In `@x/oracle/keeper/abci.go`:
- Around line 34-38: The handler currently swallows GetParams errors by logging
and returning nil; instead propagate the failure so consensus can react: replace
the current log-and-return-nil behavior after params, err := k.GetParams(sctx)
with returning the error (or a wrapped error) to the caller (e.g., return
fmt.Errorf("GetParams: %w", err) or just return err) so callers of this function
can fail loudly; ensure you update the function signature/return path if
necessary to propagate the error from this function rather than continuing
silently (refer to GetParams and sctx.Logger().Error in abci.go).

In `@x/oracle/keeper/codec.go`:
- Around line 373-377: The comment in priceDataRecordIDLegacyCodec.Encode
incorrectly states "big-endian uint64" while the code uses
binary.BigEndian.PutUint32 to write key.Source; update the comment to accurately
describe that Source is written as a big-endian uint32 (or remove the incorrect
size reference) so it matches the implementation (see function
priceDataRecordIDLegacyCodec.Encode and the PutUint32 call writing key.Source).

In `@x/oracle/keeper/timestamp.go`:
- Around line 87-93: readDigits currently assumes every byte is '0'–'9' and
returns a silently wrong value on malformed input; change readDigits to validate
each byte (returning an (int, error) pair or a sentinel error) when a non-digit
is encountered and update DecodeTimestamp to call the new readDigits signature
and propagate/handle the error (e.g., return a decoding error) so malformed
stored keys produce a clear error instead of incorrect timestamps.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: de5264ca-a624-4d0b-9075-e3632fb21ee6

📥 Commits

Reviewing files that changed from the base of the PR and between 5505bfd and 1be67bc.

⛔ Files ignored due to path filters (2)
  • Cargo.lock is excluded by !**/*.lock
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (72)
  • .agents/skills/setup-env/SKILL.md
  • .agents/skills/smartcontracts/SKILL.md
  • .agents/skills/vanity/SKILL.md
  • .github/workflows/release.yaml
  • .goreleaser.yaml
  • Cargo.toml
  • _docs/pyth-integration.md
  • _run/init.sh
  • _run/node/deployment.yaml
  • _run/node/price-feeder.sh
  • _run/node/prop.json
  • app/ante.go
  • app/types/app.go
  • contracts/pyth/Cargo.toml
  • contracts/pyth/src/contract.rs
  • contracts/pyth/src/integration_tests.rs
  • contracts/pyth/src/oracle.rs
  • go.mod
  • make/releasing.mk
  • meta.json
  • script/upgrades.sh
  • script/wasm2go.sh
  • tests/e2e/oracle_cli_test.go
  • tests/e2e/oracle_grpc_test.go
  • tests/e2e/pyth_contract_test.go
  • tests/upgrade/test-cases.json
  • testutil/oracle/price_feeder.go
  • testutil/state/suite.go
  • upgrades/CHANGELOG.md
  • upgrades/software/v2.0.0/contracts.go
  • upgrades/software/v2.0.0/deployment.go
  • upgrades/software/v2.0.0/init.go
  • upgrades/software/v2.0.0/upgrade.go
  • upgrades/software/v2.0.0/wasm.go
  • upgrades/software/v2.1.0/contracts.go
  • upgrades/software/v2.1.0/deployment.go
  • upgrades/software/v2.1.0/init.go
  • upgrades/software/v2.1.0/oracle.go
  • upgrades/software/v2.1.0/upgrade.go
  • upgrades/upgrades.go
  • x/bme/keeper/abci.go
  • x/bme/keeper/burn_mint_test.go
  • x/bme/keeper/grpc_query.go
  • x/bme/keeper/keeper.go
  • x/bme/keeper/key.go
  • x/bme/simulation/genesis.go
  • x/bme/simulation/proposals.go
  • x/deployment/keeper/abci.go
  • x/deployment/keeper/grpc_query.go
  • x/deployment/keeper/keeper.go
  • x/deployment/module.go
  • x/epochs/keeper/abci.go
  • x/escrow/keeper/grpc_query.go
  • x/market/keeper/grpc_query.go
  • x/market/keeper/keeper.go
  • x/oracle/alias.go
  • x/oracle/handler/server.go
  • x/oracle/keeper/abci.go
  • x/oracle/keeper/codec.go
  • x/oracle/keeper/genesis.go
  • x/oracle/keeper/grpc_query.go
  • x/oracle/keeper/grpc_query_test.go
  • x/oracle/keeper/keeper.go
  • x/oracle/keeper/key.go
  • x/oracle/keeper/prune.go
  • x/oracle/keeper/timestamp.go
  • x/oracle/keeper/timestamp_test.go
  • x/oracle/module.go
  • x/oracle/simulation/genesis.go
  • x/oracle/simulation/proposals.go
  • x/wasm/bindings/akash_query.go
  • x/wasm/keeper/msg_filter.go
💤 Files with no reviewable changes (5)
  • x/deployment/keeper/keeper.go
  • upgrades/software/v2.0.0/init.go
  • upgrades/software/v2.0.0/wasm.go
  • upgrades/software/v2.0.0/upgrade.go
  • upgrades/software/v2.0.0/deployment.go

Comment thread _run/node/deployment.yaml
Comment on lines +43 to 45
web:
denom: uact
amount: 1000

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# 1) Check denom usage consistency in YAML manifests
rg -n --iglob '*.yml' --iglob '*.yaml' '\b(uakt|uact)\b' -C2

# 2) Check docs/examples for expected denom references
rg -n '\b(uakt|uact)\b' -C2 README.md docs _run 2>/dev/null || true

Repository: akash-network/node

Length of output: 12005


Change uact back to uakt in deployment pricing denom.

Line 44 sets the pricing denom to uact, but the chain is configured with DENOM=uakt (see _run/node/docker-compose.yaml:74, _run/init.sh:35, and _run/common.mk:36). This mismatch will cause bidding/payment transactions to fail at runtime. Align with the chain's configured token denom.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@_run/node/deployment.yaml` around lines 43 - 45, The deployment pricing denom
is set to the wrong token string: change the 'denom' value under the 'web'
pricing block from "uact" back to "uakt" so it matches the chain's configured
DENOM; locate the 'web' pricing stanza (keys 'web', 'denom', 'amount') in the
deployment YAML and update the denom literal to "uakt".

Comment thread _run/node/price-feeder.sh
Comment on lines +13 to +14
AKASH_CHAIN_ID="${AKASH_CHAIN_ID:=local}"
#AKASH_NODE="${AKASH_NODE:=https://testnetrpc.akashnet.net:443}"

⚠️ Potential issue | 🔴 Critical

Undefined AKASH_NODE will cause script failure.

AKASH_NODE is commented out but still referenced on line 123 (${AKASH_NODE}/status) and line 191 (log "INFO" "Node: $AKASH_NODE"). With set -euo pipefail enabled, the script will fail with an unbound variable error if AKASH_NODE is not provided externally.

Either provide a default value or remove the comment:

🐛 Proposed fix
 AKASH_CHAIN_ID="${AKASH_CHAIN_ID:=local}"
-#AKASH_NODE="${AKASH_NODE:=https://testnetrpc.akashnet.net:443}"
+AKASH_NODE="${AKASH_NODE:=http://localhost:26657}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@_run/node/price-feeder.sh` around lines 13 - 14, The script references
AKASH_NODE (e.g., in the status URL and log "Node: $AKASH_NODE") but the default
assignment is commented out, causing an unbound variable error under set -euo
pipefail; fix by reintroducing a sensible default assignment for AKASH_NODE in
price-feeder.sh (for example a local RPC like http://localhost:26657 or another
project-appropriate default) or otherwise ensure AKASH_NODE is exported before
use, and update the top-of-file assignment so
AKASH_NODE="${AKASH_NODE:=<default>}" is present and used by the status and log
references.

Comment on lines +257 to +267
// Update contract storage for same or newer timestamps.
// Pyth may submit multiple prices within the same timestamp
// but from different slots — all should be accepted.
if publish_time >= price_feed.publish_time {
price_feed.prev_publish_time = price_feed.publish_time;
price_feed.price = price;
price_feed.conf = conf;
price_feed.expo = expo;
price_feed.publish_time = publish_time;

PRICE_FEED.save(deps.storage, &price_feed)?;

⚠️ Potential issue | 🟠 Major

Preserve prev_publish_time on same-timestamp overwrites.

Equal publish_time updates are intentionally accepted, but this branch also advances prev_publish_time. After a same-timestamp overwrite, prev_publish_time becomes equal to publish_time, so the contract loses the last distinct observation time.

Proposed fix
-    if publish_time >= price_feed.publish_time {
-        price_feed.prev_publish_time = price_feed.publish_time;
+    if publish_time >= price_feed.publish_time {
+        if publish_time > price_feed.publish_time {
+            price_feed.prev_publish_time = price_feed.publish_time;
+        }
         price_feed.price = price;
         price_feed.conf = conf;
         price_feed.expo = expo;
         price_feed.publish_time = publish_time;
 
         PRICE_FEED.save(deps.storage, &price_feed)?;
     }

Based on learnings, same publish_time Pyth attestations intentionally use last-on-chain-wins semantics and may overwrite each other.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Update contract storage for same or newer timestamps.
// Pyth may submit multiple prices within the same timestamp
// but from different slots — all should be accepted.
if publish_time >= price_feed.publish_time {
price_feed.prev_publish_time = price_feed.publish_time;
price_feed.price = price;
price_feed.conf = conf;
price_feed.expo = expo;
price_feed.publish_time = publish_time;
PRICE_FEED.save(deps.storage, &price_feed)?;
// Update contract storage for same or newer timestamps.
// Pyth may submit multiple prices within the same timestamp
// but from different slots — all should be accepted.
if publish_time >= price_feed.publish_time {
if publish_time > price_feed.publish_time {
price_feed.prev_publish_time = price_feed.publish_time;
}
price_feed.price = price;
price_feed.conf = conf;
price_feed.expo = expo;
price_feed.publish_time = publish_time;
PRICE_FEED.save(deps.storage, &price_feed)?;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@contracts/pyth/src/contract.rs` around lines 257 - 267, The branch that
updates PRICE_FEED currently sets price_feed.prev_publish_time =
price_feed.publish_time even when publish_time == price_feed.publish_time, which
causes prev_publish_time to lose the last distinct observation; change the logic
in the if block (the publish_time >= price_feed.publish_time check) so that you
still accept equal publish_time updates for price, conf and expo, but only set
price_feed.prev_publish_time = price_feed.publish_time when publish_time >
price_feed.publish_time (strictly greater); then assign
price_feed.price/conf/expo/publish_time and call PRICE_FEED.save(deps.storage,
&price_feed)? as before.

Comment thread go.mod
module pkg.akt.dev/node/v2

go 1.25.5
go 1.25.9

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify toolchain pinning across CI/release config for Go 1.25.9
set -euo pipefail

echo "== Files that pin Go versions =="
rg -n --iglob '*{yml,yaml,toml,json,Dockerfile,md}' 'go-version|golang:|GO_VERSION|toolchain|setup-go|actions/setup-go|1\.25\.9|1\.25'

echo
echo "== go.mod/go.work directives =="
rg -n '^\s*go\s+[0-9]+\.[0-9]+(\.[0-9]+)?|^\s*toolchain\s+go[0-9]+\.[0-9]+(\.[0-9]+)?' --iglob 'go.mod' --iglob 'go.work'

Repository: akash-network/node

Length of output: 407


🏁 Script executed:

find .github/workflows -type f -name "*.yml" -o -name "*.yaml" 2>/dev/null | head -20

Repository: akash-network/node

Length of output: 315


🏁 Script executed:

find . -type f \( -name "Dockerfile*" -o -name "docker-compose*" \) | head -20

Repository: akash-network/node

Length of output: 130


🏁 Script executed:

rg -l "setup-go|go-version|golang:" . --type yaml --type yml 2>/dev/null | head -20

Repository: akash-network/node

Length of output: 44


🏁 Script executed:

cat .github/workflows/*.yml .github/workflows/*.yaml 2>/dev/null | grep -A 5 -B 5 "go-version\|setup-go\|golang\|Go" | head -100

Repository: akash-network/node

Length of output: 995


🏁 Script executed:

cat .github/workflows/tests.yaml

Repository: akash-network/node

Length of output: 6938


🏁 Script executed:

cat .github/workflows/release.yaml

Repository: akash-network/node

Length of output: 3506


🏁 Script executed:

rg "golang:" . --type dockerfile -A 2 -B 2

Repository: akash-network/node

Length of output: 97


🏁 Script executed:

rg "FROM" _build/test.Dockerfile

Repository: akash-network/node

Length of output: 111


🏁 Script executed:

cat .github/actions/setup-ubuntu/action.yml

Repository: akash-network/node

Length of output: 134


🏁 Script executed:

cat .github/actions/setup-ubuntu/setup.sh 2>/dev/null || cat .github/actions/setup-ubuntu/*.sh 2>/dev/null || ls -la .github/actions/setup-ubuntu/

Repository: akash-network/node

Length of output: 234


🏁 Script executed:

web_search
What Go version is included in golang:alpine latest image?

Repository: akash-network/node

Length of output: 156


🏁 Script executed:

cat .github/actions/setup-ubuntu/action.yaml

Repository: akash-network/node

Length of output: 1418


🌐 Web query:

golang:alpine Docker image Go version latest

💡 Result:

The official Docker image golang:alpine (latest Alpine variant) uses Go version 1.26.2. This is confirmed by the Dockerfile for the 1.26/alpine3.23 variant, which sets ENV GOLANG_VERSION 1.26.2, and Docker Hub tags listing 1.26-alpine3.23, 1-alpine3.23, alpine3.23, 1.26-alpine, 1-alpine, alpine all pointing to Go 1.26 (specifically 1.26.2). The alpine tag aliases to the latest stable Go on the latest Alpine (3.23), distinct from the non-Alpine latest (Debian-based trixie). To verify locally: docker run --rm golang:alpine go version outputs go version go1.26.2 linux/amd64 (or equivalent for your arch).



🏁 Script executed:

cat script/tools.sh

Repository: akash-network/node

Length of output: 5320


🏁 Script executed:

./script/tools.sh gotoolchain 2>/dev/null || cat script/tools.sh | grep -A 5 -B 5 "gotoolchain"

Repository: akash-network/node

Length of output: 70


Pin golang:alpine to Go 1.25.9 in _build/test.Dockerfile.

The go.mod declares Go 1.25.9, but _build/test.Dockerfile uses the unversioned golang:alpine image (currently Go 1.26.2). This creates a version mismatch. GitHub Actions CI workflows are properly pinned via setup-ubuntu → script/tools.sh gotoolchain → actions/setup-go, but the Docker image must explicitly specify the Go version to match.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@go.mod` at line 3, The Docker test image is unpinned (uses golang:alpine)
while go.mod specifies Go 1.25.9; update _build/test.Dockerfile to use the
matching image tag (e.g., golang:1.25.9-alpine) so the container Go runtime
matches go.mod's "go 1.25.9"; ensure any related build or FROM lines referencing
golang:alpine are replaced with the explicit version to avoid mismatches with
the go.mod declaration.

Comment thread make/releasing.mk
Comment on lines +55 to +88
GORELEASER_ARGS := --clean

ifeq (,$(findstring release,$(BUILDOPTS)))
GORELEASER_ARGS += --skip=publish,validate --snapshot
GITHUB_TOKEN=
endif

ifneq (,$(findstring verbose,$(BUILDOPTS)))
GORELEASER_ARGS += --verbose
endif

.PHONY: goreleaser
goreleaser: wasmvm-libs build-contracts
docker run \
--rm \
-e MOD="$(GOMOD)" \
-e STABLE=$(IS_STABLE) \
-e BUILD_TAGS="$(GORELEASER_TAGS)" \
-e BUILD_LDFLAGS="$(GORELEASER_LDFLAGS)" \
-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
-e GORELEASER_CURRENT_TAG="$(RELEASE_TAG)" \
-e DOCKER_IMAGE=$(RELEASE_DOCKER_IMAGE) \
-e GOPATH=/go \
-e GOTOOLCHAIN="$(GOTOOLCHAIN)" \
-e GOWORK="$(GORELEASER_GOWORK)" \
-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
-e GOPATH=/go \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(GOPATH):/go \
-v $(AKASH_ROOT):/go/src/$(GORELEASER_MOD_MOUNT) \
-w /go/src/$(GORELEASER_MOD_MOUNT) \
$(GORELEASER_IMAGE) \
-f .goreleaser-test-bins.yaml \
--verbose=$(GORELEASER_VERBOSE) \
--clean \
--skip=publish,validate \
--snapshot
-f "$(GORELEASER_CONFIG)" \
release \
$(GORELEASER_ARGS)

⚠️ Potential issue | 🟡 Minor

Keep forwarding GORELEASER_SKIP in the shared target.

GORELEASER_SKIP is still assembled earlier in the file, but this command never appends it anymore. make … GORELEASER_SKIP=... becomes a no-op after this refactor.

Suggested fix
 		$(GORELEASER_IMAGE) \
 		-f "$(GORELEASER_CONFIG)" \
 		release \
-		$(GORELEASER_ARGS)
+		$(GORELEASER_ARGS) \
+		$(GORELEASER_SKIP)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
GORELEASER_ARGS := --clean
ifeq (,$(findstring release,$(BUILDOPTS)))
GORELEASER_ARGS += --skip=publish,validate --snapshot
GITHUB_TOKEN=
endif
ifneq (,$(findstring verbose,$(BUILDOPTS)))
GORELEASER_ARGS += --verbose
endif
.PHONY: goreleaser
goreleaser: wasmvm-libs build-contracts
docker run \
--rm \
-e MOD="$(GOMOD)" \
-e STABLE=$(IS_STABLE) \
-e BUILD_TAGS="$(GORELEASER_TAGS)" \
-e BUILD_LDFLAGS="$(GORELEASER_LDFLAGS)" \
-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
-e GORELEASER_CURRENT_TAG="$(RELEASE_TAG)" \
-e DOCKER_IMAGE=$(RELEASE_DOCKER_IMAGE) \
-e GOPATH=/go \
-e GOTOOLCHAIN="$(GOTOOLCHAIN)" \
-e GOWORK="$(GORELEASER_GOWORK)" \
-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
-e GOPATH=/go \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(GOPATH):/go \
-v $(AKASH_ROOT):/go/src/$(GORELEASER_MOD_MOUNT) \
-w /go/src/$(GORELEASER_MOD_MOUNT) \
$(GORELEASER_IMAGE) \
-f .goreleaser-test-bins.yaml \
--verbose=$(GORELEASER_VERBOSE) \
--clean \
--skip=publish,validate \
--snapshot
-f "$(GORELEASER_CONFIG)" \
release \
$(GORELEASER_ARGS)
GORELEASER_ARGS := --clean
ifeq (,$(findstring release,$(BUILDOPTS)))
GORELEASER_ARGS += --skip=publish,validate --snapshot
GITHUB_TOKEN=
endif
ifneq (,$(findstring verbose,$(BUILDOPTS)))
GORELEASER_ARGS += --verbose
endif
.PHONY: goreleaser
goreleaser: wasmvm-libs build-contracts
docker run \
--rm \
-e MOD="$(GOMOD)" \
-e STABLE=$(IS_STABLE) \
-e BUILD_TAGS="$(GORELEASER_TAGS)" \
-e BUILD_LDFLAGS="$(GORELEASER_LDFLAGS)" \
-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
-e GORELEASER_CURRENT_TAG="$(RELEASE_TAG)" \
-e DOCKER_IMAGE=$(RELEASE_DOCKER_IMAGE) \
-e GOTOOLCHAIN="$(GOTOOLCHAIN)" \
-e GOWORK="$(GORELEASER_GOWORK)" \
-e GITHUB_TOKEN="$(GITHUB_TOKEN)" \
-e GOPATH=/go \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(GOPATH):/go \
-v $(AKASH_ROOT):/go/src/$(GORELEASER_MOD_MOUNT) \
-w /go/src/$(GORELEASER_MOD_MOUNT) \
$(GORELEASER_IMAGE) \
-f "$(GORELEASER_CONFIG)" \
release \
$(GORELEASER_ARGS) \
$(GORELEASER_SKIP)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@make/releasing.mk` around lines 55 - 88, The GORELEASER_SKIP value is built
earlier but never forwarded into GORELEASER_ARGS, so make invocations with
GORELEASER_SKIP become no-ops; fix this by appending GORELEASER_SKIP into the
argument list before use (e.g. add a line like GORELEASER_ARGS +=
$(GORELEASER_SKIP) above or inside the goreleaser target) so the goreleaser
target uses GORELEASER_ARGS (and optionally also export/pass GORELEASER_SKIP to
the docker environment if you need it inside the container).

Comment thread x/epochs/keeper/abci.go
func (k *keeper) BeginBlocker(ctx sdk.Context) error {
start := telemetry.Now()
defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyBeginBlocker)
defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyEndBlocker)

⚠️ Potential issue | 🟡 Minor

BeginBlocker is emitting under the wrong telemetry key.

Line 14 records BeginBlocker duration using telemetry.MetricKeyEndBlocker, which will skew blocker-level metrics/dashboards.

🛠️ Proposed fix
-	defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyEndBlocker)
+	defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyBeginBlocker)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyEndBlocker)
defer telemetry.ModuleMeasureSince(types.ModuleName, start, telemetry.MetricKeyBeginBlocker)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/epochs/keeper/abci.go` at line 14, The telemetry call in BeginBlocker is
using the wrong metric key: change the deferred call
telemetry.ModuleMeasureSince(types.ModuleName, start,
telemetry.MetricKeyEndBlocker) (in the BeginBlocker function) to use
telemetry.MetricKeyBeginBlocker so the duration is recorded under the
BeginBlocker metric; leave types.ModuleName and start unchanged.

Comment on lines +79 to 143
iters := make([]storetypes.Iterator, 0, len(states))
defer func() {
for _, it := range iters {
_ = it.Close()
}
}()

var idx int

for idx = range states {
state := types.State(states[idx])

var err error
if idx > 0 {
req.Pagination.Key = nil
startKey = nil
}

if len(req.Pagination.Key) == 0 {
if startKey == nil {
req.State = state.String()

searchPrefix = BuildSearchPrefix(AccountPrefix, req.State, req.XID)
}

searchStore := prefix.NewStore(ctx.KVStore(k.skey), searchPrefix)
iter := searchStore.Iterator(startKey, nil)
iters = append(iters, iter)

count := uint64(0)

pageRes, err = sdkquery.FilteredPaginate(searchStore, req.Pagination, func(key []byte, value []byte, accumulate bool) (bool, error) {
id, _, err := ParseAccountKey(append(searchPrefix, key...))
for ; iter.Valid() && req.Pagination.Limit > 0; iter.Next() {
id, _, err := ParseAccountKey(append(searchPrefix, iter.Key()...))
if err != nil {
return true, err
}
acc := types.Account{
ID: id,
return nil, status.Error(codes.Internal, err.Error())
}

er := k.cdc.Unmarshal(value, &acc.State)
if er != nil {
return false, er
acc := types.Account{ID: id}

if err := k.cdc.Unmarshal(iter.Value(), &acc.State); err != nil {
return nil, status.Error(codes.Internal, err.Error())
}

accounts = append(accounts, acc)
req.Pagination.Limit--
count++

return false, nil
})
if err != nil {
return nil, status.Error(codes.Internal, err.Error())
}

req.Pagination.Limit -= count
total += count

if req.Pagination.Limit == 0 {
if len(pageRes.NextKey) > 0 {
pageRes.NextKey, err = query.EncodePaginationKey(states[idx:], searchPrefix, pageRes.NextKey, nil)
if err != nil {
pageRes.Total = total
return &etypes.QueryAccountsResponse{
Accounts: accounts,
Pagination: pageRes,
}, status.Error(codes.Internal, err.Error())
}
// Page full and more items exist -- encode NextKey for continuation
if iter.Valid() && req.Pagination.Limit == 0 {
var err error
nextKey, err = query.EncodePaginationKey(states[idx:], searchPrefix, iter.Key(), nil)
if err != nil {
return nil, status.Error(codes.Internal, err.Error())
}

break
}
}

pageRes.Total = total

return &etypes.QueryAccountsResponse{
Accounts: accounts,
Pagination: pageRes,
Accounts: accounts,
Pagination: &sdkquery.PageResponse{
Total: total,
NextKey: nextKey,
},
}, nil

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In the Cosmos SDK version used by this repository, what behaviors are expected for query.PageRequest fields key, offset, count_total, and reverse, and does types/query.FilteredPaginate preserve those semantics?

💡 Result:

In Cosmos SDK, the query.PageRequest fields have the following expected behaviors, as defined in the protobuf structure across recent versions including main branch (post-2025 updates): - key: A value returned from PageResponse.next_key to start the next page query efficiently using key-based (cursor) pagination. Only one of key or offset should be set. When key is set, pagination uses optimized querying (e.g., iterator starting from key). - offset: A numeric offset for pagination when key is unavailable. Less efficient than key. Only one of offset or key should be set. When offset is used, pagination scans from the beginning and skips to the offset. - count_total: When true, signals to include the total count of items in PageResponse.total for UI pagination. Only respected when offset is used; ignored when key is set. - reverse: When true, results are returned in descending order. These semantics are standard across Cosmos SDK versions using this proto definition. types/query.FilteredPaginate (likely referring to query.FilteredPaginate or collections FilteredPaginate) is a helper for paginating iterators with an optional filter predicate. It aims to preserve PageRequest semantics but has known issues in some versions: - There was a bug (fixed Jan 2026 in #25718 after issue #25665) where offset was applied before filtering, causing incorrect pagination (e.g., skipping non-matching items before offset, leading to duplicate pages). Post-fix, offset applies to filtered results. - Earlier issues with count_total including unfiltered totals (e.g., #16661). Since today's date is 2026-04-27, recent versions (main branch) preserve the standard semantics after these fixes, using iterator-based pagination that respects key (start after key), offset (skip filtered items), reverse (via reverse iterators), and count_total (counts filtered matches when using offset). 
For key-based pagination with filters, it typically iterates from key and applies filter, preserving efficiency where possible. The exact repository version is unspecified, but standard Cosmos SDK behavior matches these proto expectations, with FilteredPaginate preserving them post-fixes.



🏁 Script executed:

# First, find the file
fd grpc_query.go | grep escrow

Repository: akash-network/node

Length of output: 92


🏁 Script executed:

# Read the relevant sections of the file
wc -l x/escrow/keeper/grpc_query.go

Repository: akash-network/node

Length of output: 96


🏁 Script executed:

# Look at lines 79-143 and surrounding context
sed -n '70,150p' x/escrow/keeper/grpc_query.go | cat -n

Repository: akash-network/node

Length of output: 2594


🏁 Script executed:

# Also check lines 197-261
sed -n '188,270p' x/escrow/keeper/grpc_query.go | cat -n

Repository: akash-network/node

Length of output: 2584


🏁 Script executed:

# Search for any pagination field usage (Offset, Reverse, CountTotal)
rg -n "Offset|Reverse|CountTotal" x/escrow/keeper/grpc_query.go

Repository: akash-network/node

Length of output: 44


🏁 Script executed:

# Search for FilteredPaginate usage in the codebase
rg -n "FilteredPaginate" --type go

Repository: akash-network/node

Length of output: 604


🏁 Script executed:

# Get exact line numbers for both functions
grep -n "func (k Querier) Accounts\|func (k Querier) Payments" x/escrow/keeper/grpc_query.go

Repository: akash-network/node

Length of output: 309


🏁 Script executed:

# Check if there are any TODOs or comments about pagination limitations
sed -n '1,150p' x/escrow/keeper/grpc_query.go | grep -i "pagination\|offset\|reverse\|count" | head -20

Repository: akash-network/node

Length of output: 1065


🏁 Script executed:

# Verify the request structure to confirm pagination fields are available
rg -A 10 "type QueryAccountsRequest" x/escrow --type go

Repository: akash-network/node

Length of output: 44


🏁 Script executed:

# Find the proto file for QueryAccountsRequest
fd "\.proto$" | xargs grep -l "QueryAccountsRequest"

Repository: akash-network/node

Length of output: 46


🏁 Script executed:

# Check what pagination fields are actually available in the request
fd "\.proto$" x/escrow | xargs grep -A 15 "message QueryAccountsRequest"

Repository: akash-network/node

Length of output: 46


🏁 Script executed:

# Also verify the structure by looking at generated code
rg -A 5 "QueryAccountsRequest struct" --type go

Repository: akash-network/node

Length of output: 44


🏁 Script executed:

# Use git to find proto files
git ls-files | grep "\.proto$" | grep escrow

Repository: akash-network/node

Length of output: 44


🏁 Script executed:

# Search for PageRequest usage in the escrow keeper
rg "PageRequest" x/escrow/keeper/grpc_query.go -B 2 -A 2

Repository: akash-network/node

Length of output: 217


🏁 Script executed:

# Check if there's any documentation about why offset/reverse are not supported
rg -i "not.support\|unsupport\|offset.*not\|reverse.*not" x/escrow/

Repository: akash-network/node

Length of output: 44


Silently ignoring unsupported pagination modes causes incorrect results.

These loops honor only Key and Limit. Offset, Reverse, and CountTotal are ignored, meaning callers specifying these modes receive incorrect data:

  • Offset requests bypass the offset entirely
  • Reverse requests return items in forward order
  • CountTotal requests get partial totals (current page only) instead of the full count

This violates Cosmos SDK PageRequest semantics. Add explicit validation to reject unsupported modes rather than silently serving incorrect results:

Recommended validation
+ if req.Pagination.Offset != 0 || req.Pagination.Reverse || req.Pagination.CountTotal {
+ 	return nil, status.Error(
+ 		codes.InvalidArgument,
+ 		"offset, reverse, and count_total pagination are not supported by this query path",
+ 	)
+ }

Also applies to: Payments at line 146.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/escrow/keeper/grpc_query.go` around lines 79 - 143, The pagination handling
only supports Key and Limit but silently ignores Offset, Reverse and CountTotal;
add explicit validation at the start of the query handler(s) (the accounts loop
using req.Pagination and the analogous Payments handler) to reject unsupported
modes: if req.Pagination is non-nil and (req.Pagination.Offset != 0 ||
req.Pagination.Reverse || req.Pagination.CountTotal) return a gRPC
status.Error(codes.InvalidArgument, "<brief message>") (or similar) so callers
receive an error instead of incorrect results; place this check before building
searchPrefix/iterators (before using req.Pagination.Limit in the loop) and
reference the req.Pagination symbol in both the Accounts and Payments handlers
when implementing the validation.

Comment thread x/oracle/keeper/keeper.go
Comment on lines +197 to 217
latestID, err := k.loadLatestPriceID(ctx, types.PriceDataID{
	Source:    sourceID,
	Denom:     id.Denom,
	BaseDenom: id.BaseDenom,
})
if err != nil {
	if !errors.Is(err, collections.ErrNotFound) {
		return err
	}

	// timestamp of new datapoint must be newer than existing.
	// if this is the first data point, then it should be not older than 2 blocks back,
	// so set here a reasonable window. applies only to very first record.
	latestID.Timestamp = ctx.BlockTime().Add(-(time.Second * 12)) // fixme should be parameter
}

if latestID.Timestamp.After(timestamp) {
	return errorsmod.Wrap(
		sdkerrors.ErrInvalidRequest,
		"price timestamp is too old",
	)

⚠️ Potential issue | 🟠 Major

Don’t hard-code a 12s freshness floor for the first sample.

For a brand-new (source, denom, baseDenom) pair, or after latestPriceID is cleared, the first submission is rejected whenever timestamp < blockTime-12s. That bypasses the module’s configured staleness policy and can drop otherwise valid oracle updates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/oracle/keeper/keeper.go` around lines 197 - 217, The code hard-codes a 12s
freshness floor for the first sample by setting latestID.Timestamp =
ctx.BlockTime().Add(-(time.Second * 12)); replace this with the module's
configured staleness window: when k.loadLatestPriceID returns
collections.ErrNotFound treat it as the first record and compute the floor from
the oracle params (e.g. read the staleness window from
k.GetParams(ctx).<StalenessField> or the appropriate params accessor) and set
latestID.Timestamp = ctx.BlockTime().Add(-stalenessWindow) so the check
latestID.Timestamp.After(timestamp) uses the configured policy rather than a
fixed 12s. Ensure you reference and use the existing params accessor on keeper k
(or the correct params field name) rather than introducing a new hard-coded
constant.

Comment thread x/oracle/keeper/keeper.go
Comment on lines 655 to +663
if totalWeight == 0 {
	// All data points share the same timestamp (or there is exactly one
	// point at 'now'). Fall back to a simple average so these sources
	// still contribute to the TWAP rather than being silently dropped.
	sum := sdkmath.LegacyZeroDec()
	for _, dp := range dataPoints {
		sum = sum.Add(dp.State.Price)
	}
	return sum.Quo(sdkmath.LegacyNewDec(int64(len(dataPoints)))), nil

⚠️ Potential issue | 🟠 Major

Use the newest same-timestamp sample instead of averaging superseded ones.

When same-publish_time resubmissions land in the current block, totalWeight is 0 and this branch averages every sample together. Because dataPoints is already newest-first, the fallback should return the first entry; averaging reintroduces superseded prices.

Suggested fix
 	if totalWeight == 0 {
-		// All data points share the same timestamp (or there is exactly one
-		// point at 'now').  Fall back to a simple average so these sources
-		// still contribute to the TWAP rather than being silently dropped.
-		sum := sdkmath.LegacyZeroDec()
-		for _, dp := range dataPoints {
-			sum = sum.Add(dp.State.Price)
-		}
-		return sum.Quo(sdkmath.LegacyNewDec(int64(len(dataPoints)))), nil
+		return dataPoints[0].State.Price, nil
 	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-	if totalWeight == 0 {
-		// All data points share the same timestamp (or there is exactly one
-		// point at 'now'). Fall back to a simple average so these sources
-		// still contribute to the TWAP rather than being silently dropped.
-		sum := sdkmath.LegacyZeroDec()
-		for _, dp := range dataPoints {
-			sum = sum.Add(dp.State.Price)
-		}
-		return sum.Quo(sdkmath.LegacyNewDec(int64(len(dataPoints)))), nil
-	}
+	if totalWeight == 0 {
+		return dataPoints[0].State.Price, nil
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/oracle/keeper/keeper.go` around lines 655 - 663, The fallback branch in the
TWAP computation (in keeper.go around the block referencing totalWeight and
dataPoints) currently averages all same-timestamp samples, which can reintroduce
superseded prices; change the fallback to return the newest same-timestamp
sample instead of averaging — i.e., when totalWeight == 0 return the first
dataPoints entry's price (dataPoints[0].State.Price) as the TWAP (ensure
dataPoints is non-empty before accessing it), rather than computing the mean.

Comment thread x/oracle/keeper/prune.go
Comment on lines +106 to +109
for _, key := range toDelete {
if err := k.prices.Remove(ctx, key); err != nil {
return count, err
}

⚠️ Potential issue | 🟠 Major

Refresh latestPriceID when pruning deletes a prefix’s newest record.

This loop only removes rows from k.prices. If the latest sample for a (source, denom, baseDenom) prefix is also older than cutoff, the prune leaves latestPriceID pointing at a record that no longer exists.

Suggested fix
 	for _, key := range toDelete {
 		if err := k.prices.Remove(ctx, key); err != nil {
 			return count, err
 		}
 	}
+
+	latestKey := types.PriceDataID{
+		Source:    source,
+		Denom:     denom,
+		BaseDenom: baseDenom,
+	}
+	if latest, err := k.latestPriceID.Get(ctx, latestKey); err == nil && !latest.Timestamp.After(cutoff) {
+		if err := k.latestPriceID.Remove(ctx, latestKey); err != nil {
+			return count, err
+		}
+	}
 
 	return count, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/oracle/keeper/prune.go` around lines 106 - 109, When deleting keys in the
for loop over toDelete, ensure you refresh the in-memory/latest index for that
(source, denom, baseDenom) prefix if you just removed its newest sample: after
k.prices.Remove(ctx, key) succeeds, check whether the deleted key equals the
stored latestPriceID for that prefix (use the same prefix extraction logic used
elsewhere), and if so query k.prices for the next-most-recent remaining record
for that prefix (or clear the entry) and update latestPriceID accordingly so it
never points at a removed record.
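
The invariant — pruning must never leave the latest-price index pointing at a deleted record — can be sketched with plain maps (hypothetical shapes standing in for the keeper's collections):

```go
package main

import "fmt"

// prunePrices drops samples at or before cutoff and keeps the latest-sample
// index consistent: if the entry it points at was pruned, repoint it to the
// newest surviving sample or drop it entirely. Timestamps are int64 seconds
// and each slice is assumed ascending; both are simplifications.
func prunePrices(prices map[string][]int64, latest map[string]int64, cutoff int64) {
	for prefix, samples := range prices {
		kept := make([]int64, 0, len(samples))
		for _, ts := range samples {
			if ts > cutoff {
				kept = append(kept, ts)
			}
		}
		prices[prefix] = kept
		if l, ok := latest[prefix]; ok && l <= cutoff {
			if len(kept) > 0 {
				latest[prefix] = kept[len(kept)-1] // newest remaining sample
			} else {
				delete(latest, prefix) // nothing survived; drop the stale pointer
			}
		}
	}
}

func main() {
	prices := map[string][]int64{"pyth/akt/usd": {100, 200, 300}}
	latest := map[string]int64{"pyth/akt/usd": 300}
	prunePrices(prices, latest, 300) // cutoff removes every sample, index included
	fmt.Println(len(prices["pyth/akt/usd"]), len(latest)) // 0 0
}
```

The `l <= cutoff` check mirrors the suggested fix's `!latest.Timestamp.After(cutoff)`: only a stale pointer is touched, a still-valid one is left alone.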

Signed-off-by: Artur Troian <troian@users.noreply.github.com>
@troian troian changed the title Skills refactor: rename skills dir Apr 27, 2026
@troian troian closed this Apr 27, 2026
@troian troian deleted the skills branch April 27, 2026 15:10