The quick version
This is my “trust contract” for Qwayk: how I draft things, how I verify them, and what I consider proof. It’s here so you don’t have to guess whether something is a real end-to-end run or just a plan.
If a page/post/tool says it’s verified, you should be able to see:
- what I ran (redacted)
- what changed (receipt)
- what I checked after (verification)
What this is (and isn’t)
This is:
- a process description (how I try to avoid mistakes)
- a checklist for what “verified” means on Qwayk
This isn’t:
- a guarantee that nothing can go wrong
- a promise that every API/tool stays stable (third parties change things)
- security advice for your whole company
Risk card
What can go wrong
- You trust an automation flow that was never verified end-to-end.
- You copy/paste a command that writes to production when you meant to dry-run.
- Docs drift: the tool changed, the API changed, and the example is now wrong.
Permissions / scopes (least privilege)
- Use the smallest scope that can do the job (read-only first when possible).
- If you can, use a test site / staging environment to prove the workflow before touching production.
- Avoid “admin of everything” keys unless the action truly needs it.
- Not required: “give the agent everything” keys, broad account admin, or long-lived tokens sitting in prompts.
The safe steps (do this every time)
- Dry-run (plan): run the command without writes.
- Review the plan (you, a teammate, or Codex).
- Apply only when you’re sure: --apply (and --yes for risky actions).
- Verify: re-fetch, and when it fits, re-run a dry-run and confirm it shows 0 changes.
- Save the receipt (audit trail).
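The loop above can be sketched in a few lines of Python. Everything here is illustrative: the function names (plan, apply_changes, verify) and the in-memory page dict stand in for whatever real tool and API you’re using.

```python
# A minimal sketch of dry-run -> review -> apply -> verify,
# using an in-memory dict instead of a real API. All names are
# hypothetical, not a real CLI or library.

def plan(resource, desired):
    """Dry-run: report what would change, without writing anything."""
    return {k: {"before": resource.get(k), "after": v}
            for k, v in desired.items() if resource.get(k) != v}

def apply_changes(resource, desired):
    """Apply: perform the writes and return a receipt (audit trail)."""
    changes = plan(resource, desired)
    resource.update(desired)
    return {"ok": True, "changed": bool(changes), "changes": changes}

def verify(resource, desired):
    """Read-back: re-fetch and confirm expected == observed."""
    return all(resource.get(k) == v for k, v in desired.items())

page = {"slug": "tools", "meta_description": None}
desired = {"meta_description": "Safe-by-default API CLIs for AI agents..."}

print(plan(page, desired))            # review this before applying
receipt = apply_changes(page, desired)
assert verify(page, desired)          # read-back check
assert plan(page, desired) == {}      # dry-run again: 0 changes
```

The last assertion is the “run again” check described below: once the apply has landed, a second dry-run should have nothing left to do.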
What I recommend if you’re not sure
- Stop and run a read-only command first.
- Narrow scope (use explicit IDs, smaller batches).
- Treat “I’m not sure” as a signal to switch to staging, or do it manually.
Proof / verification
Proof artifacts I like: a plan (dry-run), a receipt (after apply), and a read-back check that matches.
To see when this page was last verified, use the Trust box at the top of the page. I keep the date there so it doesn’t drift inside the body text.
Here’s a real example from the Qwayk Ghost site build (redacted):
Plan (dry-run):
{
  "tool": "ghost-api-tool",
  "target": { "type": "page", "slug": "tools" },
  "risk_level": "medium",
  "proposed_changes": [
    { "field": "meta_description", "before": null, "after": "Safe-by-default API CLIs for AI agents..." }
  ],
  "verification_plan": { "type": "read_back_or_idempotence" }
}
Receipt (after apply):
{
  "ok": true,
  "changed": true,
  "verification": { "ok": true }
}
How writing works (repo-first)
Most Qwayk content starts in this repo:
1) I draft the page/post as Markdown.
2) I link it from the relevant place (so it’s discoverable in the repo).
3) I register it in the tracking ledgers (so it doesn’t disappear in my brain).
4) Later: I publish it in Ghost, then fill in the live URL and set a truthful last_verified date when I’ve actually run the workflow end-to-end.
This keeps “what I meant” and “what I shipped” in one place.
How verification works (what I mean by “verified”)
When I say something is “verified”, I mean I did more than read docs and make a plausible guess.
Plans (dry-runs)
A plan is a dry-run output that answers:
- what would change
- what it would touch
- what it refuses to do unless you confirm
If a tool is about to write, a good plan is the cheapest way to catch mistakes early.
Receipts (after apply)
A receipt is the audit trail after an apply. It should capture:
- what I intended to change
- what actually changed (or what failed)
- what I verified afterward
- what got redacted (so secrets don’t leak)
Read-back verification
After a write, “verify” usually means: re-fetch the resource and compare expected vs observed.
If the tool can’t re-fetch (or the API doesn’t support it cleanly), I don’t pretend it was verified. I call that out.
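A read-back check is just a per-field diff between what you expected and what you actually re-fetched. A minimal sketch (read_back_check and the sample dicts are illustrative, not part of any real tool):

```python
# Sketch of a read-back check: re-fetch the resource, then compare
# the fields we intended to change against what's actually there.
# An empty mismatch dict means the write verified.

def read_back_check(observed: dict, expected: dict) -> dict:
    """Return per-field mismatches; empty dict means verified."""
    return {
        field: {"expected": want, "observed": observed.get(field)}
        for field, want in expected.items()
        if observed.get(field) != want
    }

expected = {"meta_description": "Safe-by-default API CLIs for AI agents..."}
observed = {"slug": "tools",
            "meta_description": "Safe-by-default API CLIs for AI agents..."}

mismatches = read_back_check(observed, expected)
print({"ok": not mismatches, "mismatches": mismatches})
```

If the re-fetch itself isn’t possible, there’s nothing to feed this check, which is exactly the case where I say so instead of claiming verification.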
“Run again” check (when it applies)
For some actions, there’s a strong extra check:
- apply once
- dry-run again
- confirm the plan says “0 changes”
That doesn’t fit every API action, but when it fits, it’s one of the best signals we have.
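In code, the check is: the number of planned changes drops to zero after one apply. A small sketch (plan_changes is a stand-in for whatever dry-run your tool exposes):

```python
# Sketch of the "run again" check: after an apply, a second dry-run
# should report zero remaining changes (idempotence).

def plan_changes(current: dict, desired: dict) -> int:
    """Count fields a dry-run would still change."""
    return sum(1 for k, v in desired.items() if current.get(k) != v)

state = {"meta_description": None}
desired = {"meta_description": "Safe-by-default API CLIs for AI agents..."}

assert plan_changes(state, desired) == 1   # first dry-run: 1 change planned
state.update(desired)                      # apply once
assert plan_changes(state, desired) == 0   # dry-run again: 0 changes
```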
How you can keep me honest (quick checks)
Before you trust a workflow, look for:
- A Trust box that shows a real verification date (not hand-wavy).
- Proof artifacts mentioned in the post/tool docs (plan/receipt/tests/screenshots/commit link).
- A clear separation between “policy/process” and “this was run end-to-end”.
If you’re about to do something risky, the safest pattern is still: dry-run → review → apply → verify → save the receipt.
What “verified” means on Qwayk
On Qwayk, “verified” means: I re-checked that the workflow described here still matches reality (the tool behavior, the docs, and any example outputs I’m showing).
It is not:
- the publish date
- a promise that the underlying vendor API won’t change tomorrow
Two different states matter:
- Policy reviewed: I’m confident the safety rules and the process are written correctly.
- Verified end-to-end: I actually ran the workflow, captured proof artifacts, and did read-back verification.
For drafts, I’ll either leave it blank in the tracking ledger or say “not yet (draft)”.
Proof artifacts policy (what counts)
When I say something is “verified end-to-end”, you should be able to point to at least one real artifact set (redacted as needed):
- a plan (dry-run output)
- an apply receipt (what happened)
- a verification note (read-back and/or “run again” check)
- optionally: a short screenshot or a commit link that anchors what changed
Corrections policy
If something here turns out to be wrong (or gets outdated):
- I’ll add a short “Correction” note near the top explaining what changed and why.
- I’ll update the page content.
- If I re-verified it end-to-end, I’ll update the dates in the Trust box.
- If it can’t be verified anymore (API change, missing capability), I’ll say that plainly.
If something looks wrong
Stop before you apply.
If you want help, send:
- the tool name + version
- the command you ran
- the redacted plan/receipt output (preferred over raw logs)
Please don’t send API keys or tokens. I don’t need them.