Detailed Roadmap
Every phase, every task, every blocker. The master plan behind the dashboard.
Working name: Sentinel 🛡️ – the unified ops platform vision for monitoring every Coreshift HQ app from one place. Phase 0 product: KeyContent Maintenance Ops (currently the only app on Sentinel).
Last updated: 2026-05-11 · Owner: Abe (operator) · Claude Code (implementer)
North Star: Catch bugs before users do · Ship fixes in hours · Retire email-support workflow · Scale to every Coreshift HQ app
Current Status
| Item | Status |
|---|---|
| Phase 0 – In-app Report Issue widget | ✅ SHIPPED on staging.keycontent.ai |
| CEO Pitch deck | ✅ Ready (`deck/KeyContent-Maintenance-Ops.pptx`) |
| CEO Pitch date | Monday, May 18, 2026 |
| Phase 1 – Full Maintenance Ops playbook | ⏳ Pending CEO approval |
| Phase 2 – Widget enhancements + auto-replies | Planned |
| Phase 3 – Clone to other Coreshift HQ apps | Future |
This Week – Pre-CEO Pitch (by Mon, May 18)
Tasks that strengthen the Monday pitch.
Promote V0 from staging to production
- Deploy `report-issue` Edge Function to the production Supabase project
- Set `POSTMARK_SERVER_TOKEN` + `REPORT_TO_EMAIL` as secrets on production Supabase
- Run storage migration on prod → `bug-report-screenshots` bucket + RLS policies
- Verify Edge Function CORS allowlist includes the production origin (see the sketch after this list)
- Confirm Postmark is sending from a live server (not sandbox) with a verified sender
- Merge staging branch to main → confirm Cloudflare deploys to production
- End-to-end smoke test on production (Bug + Suggestion + screenshot)
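For reference, a hedged sketch of where these items land inside the `report-issue` Edge Function – the constant names and response shape here are assumptions, not the shipped code:

```typescript
// Hypothetical sketch only – not the shipped report-issue function.
// Shows where the production origin and the Postmark secrets come into play.
const ALLOWED_ORIGINS = [
  "https://keycontent.ai",          // must be present before promoting to prod
  "https://staging.keycontent.ai",
];

// Set on the production project via: supabase secrets set POSTMARK_SERVER_TOKEN=... REPORT_TO_EMAIL=...
const POSTMARK_SERVER_TOKEN = Deno.env.get("POSTMARK_SERVER_TOKEN")!;
const REPORT_TO_EMAIL = Deno.env.get("REPORT_TO_EMAIL")!;

Deno.serve(async (req) => {
  const origin = req.headers.get("Origin") ?? "";
  const corsHeaders = {
    "Access-Control-Allow-Origin": ALLOWED_ORIGINS.includes(origin) ? origin : ALLOWED_ORIGINS[0],
    "Access-Control-Allow-Headers": "authorization, content-type",
    "Access-Control-Allow-Methods": "POST, OPTIONS",
  };
  if (req.method === "OPTIONS") {
    return new Response("ok", { headers: corsHeaders });
  }

  // ...parse the report, upload the screenshot, send the email via Postmark...
  return new Response(JSON.stringify({ ok: true }), {
    headers: { ...corsHeaders, "Content-Type": "application/json" },
  });
});
```

Secrets set with `supabase secrets set` on the production project are exposed to the function as environment variables, so the function body itself should not need to change between staging and prod.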
Pitch-strengthening
- Set up inbox rules for `[KeyContent · BUG]` / `[· SUGGESTION]` / `[· QUESTION]` (filter, flag, route)
- Build at least one "Coreshift HQ Way" artifact to attach to the pitch – proof of elevation
- Optional: short dry-run of the pitch (15 min, with timer)
"Coreshift HQ Way" Artifacts (Elevation Layer)
These are the small playbook docs that turn industry standards into your version of those standards. Each one is ~1 page.
- `HOW-WE-DO-PRIORITY.md` – P0–P3 + user-impact narrative + blast radius
- `HOW-WE-DO-PR-REVIEWS.md` – non-coder, behavior-based PR review checklist
- `HOW-WE-DO-BUG-REPORTS.md` – the widget + email pipeline + inbox routing (codifies what already shipped)
- `HOW-WE-DO-DEPLOYS.md` – preview deploy → test → gradual rollout → monitor
- `HOW-WE-DO-INCIDENTS.md` – alert → diagnose → fix → status page → post-mortem
Recommended order: Priority → PR Reviews → Incidents → Bug Reports → Deploys.
Phase 1 – Full Maintenance Ops Rollout (5 weeks, post-CEO approval)
The 5-week plan from the pitch deck. Week 3 already shipped.
Week 1 – Foundation
- GitHub Issues → labels (`bug`, `enhancement`, `question`, `tech-debt`, `user-reported`) + priority labels (P0–P3)
- Issue templates (bug, feature, question)
- GitHub Project board (Triage → To Do → In Progress → In Review → Done)
- Sentry account + Sentry project for KeyContent
- Sentry SDK installed in the app (frontend + backend)
- Sentry release tracking wired to Cloudflare deploys (see the init sketch below)
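For the Sentry items, a minimal init sketch, assuming a Vite-style frontend build – the env-variable names (`VITE_SENTRY_DSN`, `VITE_COMMIT_SHA`) are placeholders for whatever the real toolchain exposes:

```typescript
// Sketch, assuming a Vite-style build; adjust env access to the actual toolchain.
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: import.meta.env.VITE_SENTRY_DSN,      // from the Sentry project created above
  release: import.meta.env.VITE_COMMIT_SHA,  // injected at build time so errors map to a specific Cloudflare deploy
  environment: import.meta.env.MODE,         // "production" vs "staging"
  tracesSampleRate: 0.1,                     // keep sampling low to stay inside the free tier
});
```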
Week 2 – Visibility
- Better Stack account + uptime monitor for `keycontent.ai`
- Public status page at `status.keycontent.ai`
- Cloudflare alert integration (Workers + Pages health)
Week 3 – User Channel – DONE
- V1 Report Issue widget shipped (May 11, 2026)
- Round 2 polish + ticket-type selector shipped
Week 4 – Workflow
- Briefing templates for Claude Code (one per ticket type)
- PR review checklist published in repo
- Morning triage routine documented (10-min ritual)
- Triage rhythm locked in (when to look at Sentry, Better Stack, GitHub Issues)
Week 5 – Documentation
- `PRIORITY.md` in the keycontent repo (in-repo summary of the priority framework)
- Incident runbook in the repo
- Operator onboarding doc (in case a second operator joins)
Phase 2 – Widget Enhancements (post-rollout, ~3–4 weeks of work)
The widget shipped in Phase 0 gains new outputs without a frontend rewrite.
- Add a `bug_reports` table in Supabase → Edge Function writes a row alongside sending the email (see the sketch after this list)
- Auto-create a GitHub Issue from each report (mapped: Bug → `bug` label, Suggestion → `enhancement`, Question → `question`)
- Auto-reply email to the user when their issue gets resolved (closes the loop, builds trust)
- In-app "My past reports" view for users (transparency upgrade)
- Rate limiting if abuse becomes a concern (only if needed)
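A rough sketch of the first two bullets, assuming supabase-js and the GitHub REST API – table, column, and repo-owner names are placeholders, not a final schema:

```typescript
// Rough sketch only – schema and names are placeholders.
import { createClient } from "npm:@supabase/supabase-js@2";

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
);

// Widget ticket type → GitHub label, per the mapping above
const LABEL_BY_TYPE: Record<string, string> = {
  bug: "bug",
  suggestion: "enhancement",
  question: "question",
};

export async function recordReport(report: { type: string; title: string; body: string }) {
  // 1. Persist the report alongside the existing email send
  await supabase.from("bug_reports").insert({
    report_type: report.type,
    title: report.title,
    body: report.body,
  });

  // 2. Auto-create a labeled GitHub Issue (OWNER is a placeholder for the real org/user)
  await fetch("https://api.github.com/repos/OWNER/keycontent/issues", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${Deno.env.get("GITHUB_TOKEN")!}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      title: `[user-reported] ${report.title}`,
      body: report.body,
      labels: [LABEL_BY_TYPE[report.type] ?? "question", "user-reported"],
    }),
  });
}
```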
Phase 3 – Scaling Across Coreshift HQ (when launching the next app)
- Document the cloning playbook (`PLAYBOOK-cloning-to-new-app.md`)
- First clone → apply the system to Coreshift HQ App #2 when it launches
- Federate Sentry org / Better Stack workspace across the portfolio
Phase 4 – Sentinel (the unified ops dashboard, once at ~2–3 apps)
The single internal platform that aggregates every app's bug reports, alerts, and triage state into one queue. Internal-only – not customer-facing.
Forward-compatible decisions (make NOW, even at one app)
- Add an `app_id` (or `app_slug`) field to every bug report payload – even though only "keycontent" exists today, the schema is ready for "coreshift-app-2", etc.
- When the `bug_reports` table is created in Phase 2, include `app_id` as a column from day one
- Edge Function payload accepts and validates `app_id`, defaulting to "keycontent" for now (see the sketch after this list)
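A minimal sketch of that defaulting and validation, assuming a simple allowlist of known apps (names are illustrative):

```typescript
// Illustrative only – a known-apps allowlist with "keycontent" as the default.
const KNOWN_APPS = ["keycontent"]; // later: "coreshift-app-2", etc.

interface ReportPayload {
  app_id?: string;                          // optional today, required once app #2 exists
  type: "bug" | "suggestion" | "question";
  title: string;
  body: string;
}

export function resolveAppId(payload: ReportPayload): string {
  const appId = payload.app_id ?? "keycontent"; // default keeps the current widget unchanged
  if (!KNOWN_APPS.includes(appId)) {
    throw new Error(`Unknown app_id: ${appId}`);
  }
  return appId;
}
```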
Build phase (when ready)
- Decide architecture: centralized Sentinel Supabase project OR federated reads from each app's Supabase via API
- Build the Sentinel web app – internal dashboard with triage queue, filters (by app, type, priority), resolution flow
- Auth: Coreshift HQ team only (no public access)
- Optional: federated views into Sentry + Better Stack + GitHub Issues across the portfolio
Operational Maintenance (ongoing)
- Monthly: review issue trends – are the same kinds of bugs recurring?
- Quarterly: review free-tier usage vs limits (Sentry events, Postmark emails, etc.)
- Quarterly: prune stale issues, close P3s that have aged out
- Annually: re-evaluate tool choices (Sentry vs alternatives, Better Stack vs alternatives)
Notes / Constraints
- Budget: Free tiers only for new tooling at current scale (~10-20 users). Upgrade triggers documented in deck Slide 11.
- Role split: Abe triages + briefs + verifies. Claude Code implements. Tools watch + alert + record.
- Stack: GitHub (`keycontent`), Supabase (staging + prod), Cloudflare (Workers/Pages/D1/KV/R2), Postmark (transactional email)
- All artifacts live in: `D:\Coreshift HQ\Sentinel\`