Add release verification gates #2
Pull request overview
Adds release verification “gates” to ensure PAD shebang state normalization, artifact-first checks, and post-publish smoke verification are enforced via scripts and GitHub Actions workflows.
Changes:
- Introduces new Bun-based verification scripts for artifact, release-prepublish, and post-publish validation.
- Adds a shebang normalization tool to manage local/branch/main/release modes and wires it into CI.
- Updates release automation to verify artifacts before publish and validate the published package after.
Reviewed changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated 5 comments.
Summary per file:
| File | Description |
|---|---|
| scripts/verify-release.ts | Prepublish gate: block already-published versions, verify shebang release state, run artifact verification. |
| scripts/verify-published.ts | Post-publish smoke checks: wait for dist-tag, validate CLI version, run example PADs, verify CDN assets. |
| scripts/verify-artifact.ts | Artifact-first validation: pack contents validation + smoke-running PAD server against source and packed tarball. |
| scripts/shebangs.ts | Defines and enforces shebang/asset-base state transitions for local/branch/main/release. |
| package.json | Adds scripts for shebang management and verification commands. |
| note.tsx | Enables asset base rewriting + shebang command normalization support. |
| examples/demo2.pad.svg | Updates embedded “Run with …” guidance to match managed shebang state. |
| examples/checklist.pad.md | Updates PAD shebang to match managed shebang state. |
| examples/checklist.pad.html | Updates PAD shebang and asset URLs to match managed shebang state. |
| README.md | Updates documented commands/shebangs to match managed shebang state. |
| .github/workflows/verify.yml | Adds PR/main artifact verification workflow. |
| .github/workflows/shebangs.yml | Adds workflow to normalize and auto-commit shebang state for PR branches and main. |
| .github/workflows/release.yml | Updates release pipeline to run release verification and post-publish verification. |
| .github/workflows/prepare-release.yml | Adds a manual “prepare release” workflow to set release state, verify, commit, tag, and push. |
```ts
async function exercisePadServer(command: Array<string>, file: string, label: string) {
  const proc = Bun.spawn(command, {
    cwd: repoRoot,
    env: {
      ...process.env,
      NOTE_PAD_NO_OPEN: "1",
      NOTE_PAD_IDLE_MS: "200",
      NOTE_PAD_FIRST_CLIENT_TIMEOUT_MS: "8000",
    },
    stdout: "pipe",
    stderr: "pipe",
  });
```
exercisePadServer spawns the server with stderr: "pipe", but stderr is never drained/printed. If the server writes enough to stderr (or exits early with an error), this can both hide the real failure reason and potentially block the child due to a full pipe buffer. Consider either inheriting stderr/stdout for the server process, or concurrently reading stderr and including it in the thrown error when startup/WS checks fail.
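One way to do that, as a sketch: start draining stderr as soon as the process is spawned, and include the captured text when a check fails. This assumes the child's stderr is a web `ReadableStream` of bytes (as with Bun.spawn's `"pipe"` option); the `drainStream` helper name is an assumption, not part of the PR.

```typescript
// Hypothetical helper (not in the PR): read a byte stream to completion,
// decoding as UTF-8. Draining stderr eagerly keeps the child from blocking
// on a full pipe buffer and preserves its output for error messages.
async function drainStream(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode();
}

// Sketch of use: kick off the drain immediately after spawning,
// then await it only when building a failure message:
//   const stderrText = drainStream(proc.stderr);
//   ...
//   throw new Error(`${label}: startup failed\n${await stderrText}`);
```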
```ts
  const localUrl = await waitForLocalUrl(proc, label);
  assert(localUrl.includes("?t="), `${label}: Local URL is not tokenized`);
  const withoutToken = localUrl.replace(/\?.*/, "");
  assert(await fetch(withoutToken).then((response) => response.status) === 403, `${label}: missing token did not 403`);
  const editor = await fetch(localUrl).then((response) => response.text());
  assert(editor.includes("Trusted local PAD server"), `${label}: editor did not render`);
  assert(editor.includes("/qr.svg?t="), `${label}: editor did not include tokenized QR`);
  const qrUrl = new URL(`/qr.svg${new URL(localUrl).search}`, localUrl);
  assert((await fetch(qrUrl).then((response) => response.text())).includes("<svg"), `${label}: QR endpoint did not return SVG`);

  const wsUrl = new URL(localUrl);
  wsUrl.protocol = "ws:";
  wsUrl.pathname = "/ws";
  const ws = new WebSocket(wsUrl);
  const hello = await new Promise<Record<string, string>>((resolvePromise, reject) => {
    const timer = setTimeout(() => reject(new Error(`${label}: WebSocket hello timeout`)), 5_000);
    ws.addEventListener("message", (event) => {
      const message = JSON.parse(event.data);
      if (message.type === "hello") {
        clearTimeout(timer);
        resolvePromise(message);
      }
    });
    ws.addEventListener("error", reject);
  });
  assert(hello.source, `${label}: hello did not include source`);
  const edited = hello.source.includes("Smoke")
    ? hello.source.replace("Smoke", "Smoke Verified")
    : `${hello.source}\n<!-- Smoke Verified -->\n`;
  await new Promise<void>((resolvePromise, reject) => {
    const timer = setTimeout(() => reject(new Error(`${label}: save timeout`)), 5_000);
    ws.addEventListener("message", (event) => {
      const message = JSON.parse(event.data);
      if (message.type === "saved") {
        clearTimeout(timer);
        resolvePromise();
      }
    });
    ws.send(JSON.stringify({ type: "save", source: edited, reason: label }));
  });
  ws.close();
  const exit = await Promise.race([proc.exited, new Promise((resolvePromise) => setTimeout(() => resolvePromise("timeout"), 7_000))]);
  if (exit === "timeout") {
    proc.kill();
    throw new Error(`${label}: server did not stop after client close`);
  }
  assert((await readFile(file, "utf8")).includes("Smoke Verified"), `${label}: save did not write to disk`);
```
On failure paths inside exercisePadServer (e.g., WebSocket hello/save timeouts or any assert), the spawned server process is not guaranteed to be terminated and the WebSocket is not guaranteed to be closed. This can leave background processes running and make CI failures flakier. Wrap the server + WebSocket lifecycle in a try/finally that always closes the socket and kills the process (or awaits exit) before rethrowing.
Suggested change:

```ts
  let ws: WebSocket | undefined;
  try {
    const localUrl = await waitForLocalUrl(proc, label);
    assert(localUrl.includes("?t="), `${label}: Local URL is not tokenized`);
    const withoutToken = localUrl.replace(/\?.*/, "");
    assert(await fetch(withoutToken).then((response) => response.status) === 403, `${label}: missing token did not 403`);
    const editor = await fetch(localUrl).then((response) => response.text());
    assert(editor.includes("Trusted local PAD server"), `${label}: editor did not render`);
    assert(editor.includes("/qr.svg?t="), `${label}: editor did not include tokenized QR`);
    const qrUrl = new URL(`/qr.svg${new URL(localUrl).search}`, localUrl);
    assert((await fetch(qrUrl).then((response) => response.text())).includes("<svg"), `${label}: QR endpoint did not return SVG`);
    const wsUrl = new URL(localUrl);
    wsUrl.protocol = "ws:";
    wsUrl.pathname = "/ws";
    ws = new WebSocket(wsUrl);
    const hello = await new Promise<Record<string, string>>((resolvePromise, reject) => {
      const timer = setTimeout(() => reject(new Error(`${label}: WebSocket hello timeout`)), 5_000);
      ws!.addEventListener("message", (event) => {
        const message = JSON.parse(event.data);
        if (message.type === "hello") {
          clearTimeout(timer);
          resolvePromise(message);
        }
      });
      ws!.addEventListener("error", reject);
    });
    assert(hello.source, `${label}: hello did not include source`);
    const edited = hello.source.includes("Smoke")
      ? hello.source.replace("Smoke", "Smoke Verified")
      : `${hello.source}\n<!-- Smoke Verified -->\n`;
    await new Promise<void>((resolvePromise, reject) => {
      const timer = setTimeout(() => reject(new Error(`${label}: save timeout`)), 5_000);
      ws!.addEventListener("message", (event) => {
        const message = JSON.parse(event.data);
        if (message.type === "saved") {
          clearTimeout(timer);
          resolvePromise();
        }
      });
      ws!.send(JSON.stringify({ type: "save", source: edited, reason: label }));
    });
    ws.close();
    const exit = await Promise.race([proc.exited, new Promise((resolvePromise) => setTimeout(() => resolvePromise("timeout"), 7_000))]);
    if (exit === "timeout") {
      proc.kill();
      throw new Error(`${label}: server did not stop after client close`);
    }
    assert((await readFile(file, "utf8")).includes("Smoke Verified"), `${label}: save did not write to disk`);
  } finally {
    try {
      if (ws && (ws.readyState === WebSocket.CONNECTING || ws.readyState === WebSocket.OPEN)) {
        ws.close();
      }
    } catch {
      // Best-effort cleanup; preserve the original failure.
    }
    try {
      const exit = await Promise.race([proc.exited, new Promise((resolvePromise) => setTimeout(() => resolvePromise("timeout"), 1_000))]);
      if (exit === "timeout") {
        proc.kill();
        await Promise.race([proc.exited, new Promise((resolvePromise) => setTimeout(resolvePromise, 1_000))]);
      }
    } catch {
      // Best-effort cleanup; preserve the original failure.
    }
  }
```
```ts
async function main() {
  await run(["bun", "install", "--frozen-lockfile"]);
  await run(["bun", "run", "check"]);
```
main() runs bun install --frozen-lockfile, but the GitHub workflows that call verify:artifact/verify:release already run bun install beforehand. This causes redundant installs (extra time + extra network surface) in CI. Consider making install the responsibility of the workflow only (drop it here), or add a flag/env to skip the install when dependencies are already present.
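A minimal sketch of the flag approach; the `NOTE_SKIP_INSTALL` name is an assumption, not something the PR defines:

```typescript
// Hypothetical env flag (not in the PR): lets CI, which already ran
// `bun install`, skip the redundant install inside the script.
function shouldInstall(env: Record<string, string | undefined>): boolean {
  return env.NOTE_SKIP_INSTALL !== "1";
}

// In main():
// if (shouldInstall(process.env)) {
//   await run(["bun", "install", "--frozen-lockfile"]);
// }
```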
```ts
async function runPublishedServer(args: Array<string>, label: string) {
  const proc = Bun.spawn(args, { cwd: repoRoot, stdout: "pipe", stderr: "pipe", env: { ...process.env, NOTE_PAD_NO_OPEN: "1", NOTE_PAD_IDLE_MS: "200", NOTE_PAD_FIRST_CLIENT_TIMEOUT_MS: "8000" } });
  const decoder = new TextDecoder();
  const reader = proc.stdout.getReader();
  let output = "";
  let localUrl = "";
  const started = Date.now();
  while (Date.now() - started < 20_000) {
    const { value, done } = await reader.read();
    if (done) break;
    output += decoder.decode(value);
    const match = /Local:\s+(http:\/\/[^\s]+)/.exec(output);
    if (match) {
      localUrl = match[1];
      break;
    }
  }
  if (!localUrl.includes("?t=")) throw new Error(`${label}: no tokenized Local URL\n${output}`);
```
runPublishedServer pipes both stdout and stderr, but only reads stdout. If the server writes to stderr (or exits early with an error), the underlying pipe can fill and block, and any useful error output is lost in the thrown exceptions. Consider draining stderr concurrently (and including it in error messages), or inheriting stderr for this subprocess.
```yaml
on:
  push:
    tags: ["v*"]
  workflow_dispatch:
```
The Verify package version step still contains a branch that checks ${GITHUB_EVENT_NAME} == workflow_dispatch, but this workflow no longer has a workflow_dispatch trigger. That condition is now unreachable and can be removed to avoid confusion.
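With only the tag trigger left, the step can derive the expected version from the ref unconditionally. A sketch of what that might look like; the step name and exact shape are assumptions, not copied from the workflow:

```yaml
# Sketch only: with workflow_dispatch gone, GITHUB_REF_NAME is always the
# pushed tag, so no event-name branch is needed.
- name: Verify package version
  run: |
    TAG_VERSION="${GITHUB_REF_NAME#v}"
    PKG_VERSION="$(jq -r .version package.json)"
    test "$TAG_VERSION" = "$PKG_VERSION"
```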
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ab6c4a664e
```ts
const exit = await Promise.race([proc.exited, new Promise((resolvePromise) => setTimeout(() => resolvePromise("timeout"), 7_000))]);
if (exit === "timeout") {
  proc.kill();
  throw new Error(`${label}: server did not stop after client close`);
```
Fail artifact verification on non-zero PAD server exit
This post-check only treats a timeout as failure, so if the spawned PAD process exits with code 1 (for example after a runtime error) the check still passes. Because this function is used by verify:artifact, the release gate can report success even when the server crashed during the smoke flow; you should explicitly assert that proc.exited resolves to 0.
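A sketch of such a check; the `expectCleanExit` helper name is an assumption, and in the real script the caller would also `proc.kill()` on the timeout path:

```typescript
// Hypothetical helper (not in the PR): treat both a hang and a non-zero
// exit code as failures, so a server that crashed during the smoke flow
// still fails the gate.
async function expectCleanExit(
  exited: Promise<number>,
  timeoutMs: number,
  label: string,
): Promise<void> {
  const exit = await Promise.race([
    exited,
    new Promise<"timeout">((resolve) => setTimeout(() => resolve("timeout"), timeoutMs)),
  ]);
  if (exit === "timeout") throw new Error(`${label}: server did not stop in time`);
  if (exit !== 0) throw new Error(`${label}: server exited with code ${exit}`);
}
```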
```ts
const exit = await Promise.race([proc.exited, new Promise((resolvePromise) => setTimeout(() => resolvePromise("timeout"), 7_000))]);
if (exit === "timeout") {
  proc.kill();
  throw new Error(`${label}: server did not idle-shutdown`);
```
Fail published verification on non-zero PAD server exit
The published-package smoke test has the same gap: it only errors on timeout and ignores non-zero exit codes from the spawned process. If the PAD server terminates with an error after the WebSocket handshake, this function will still return success, which can let verify:published greenlight a broken published flow.