19 Days, 3 Clients, 1 Engineer: A Field Report on AI Pair Programming
One person, 19 days, three clients (web, Chrome extension, Electron): that’s how PromptHub shipped. This is a no-fluff retrospective on how I treated AI as a “pair programming squad.”
0. Context & Pace
- Scope: Deliver web + Chrome extension + Electron apps; backend on Next.js API routes, SQLite first, PostgreSQL-ready anytime.
- Cadence: Kicked off Sept 17, shipped MVP in 19 days. AI handled scaffolding + repetitive glue; I handled decisions, verification, and debugging.
1. Let AI Build the Scaffolding, Keep Decisions Human
Instead of sweating over package.json, tsconfig, auth flows, or billing setup, I gave AI explicit engineering orders:
- Next.js + TypeScript project
- Drizzle ORM + SQLite
- JWT + Google/GitHub OAuth
- Stripe billing with ready-to-wire hooks

Half a day later, a runnable skeleton dropped.
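The "SQLite first, PostgreSQL-ready anytime" part of that scaffold boils down to a single config switch in Drizzle. A minimal sketch, assuming drizzle-kit; the file paths and env-var convention are illustrative, not PromptHub's actual setup:

```typescript
// drizzle.config.ts — hypothetical sketch of a SQLite-first, Postgres-ready setup
import { defineConfig } from "drizzle-kit";

// Assumption: a DATABASE_URL starting with "postgres" signals hosted Postgres;
// otherwise fall back to a local SQLite file.
const dbUrl = process.env.DATABASE_URL;
const usePostgres = dbUrl?.startsWith("postgres") ?? false;

export default defineConfig({
  schema: "./src/db/schema.ts", // illustrative path
  out: "./drizzle",             // generated migrations land here
  dialect: usePostgres ? "postgresql" : "sqlite",
  dbCredentials: usePostgres
    ? { url: dbUrl! }
    : { url: "./local.db" },
});
```

Because the schema code stays identical across dialects, "switch to Postgres" is a one-line change plus a migration run.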
Takeaway 1: In early phases AI’s biggest value is “removing friction” so you can focus on business logic.

2. Atomic Tasks: Treat AI as a Team of Specialists
“Vibe coding” doesn’t scale. I split features into atomic tasks and ran multiple conversations in parallel:
| Role | Prompt | Output |
|---|---|---|
| DB Specialist | “Design prompts table with Drizzle syntax” | Schema + migrations |
| Backend Specialist | “Implement CRUD API + auth based on this schema” | Next.js API routes |
| Frontend Specialist | “Hook these APIs into React + Tailwind admin UI” | Components |
Benefits: isolated context, single responsibility, parallel throughput.
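For a sense of what the "DB Specialist" conversation produced, here is roughly the shape of its output, sketched with drizzle-orm's sqlite-core. The column set is an illustrative guess, not PromptHub's actual schema:

```typescript
// schema.ts — illustrative prompts table in Drizzle syntax (columns are assumptions)
import { sqliteTable, integer, text } from "drizzle-orm/sqlite-core";

export const prompts = sqliteTable("prompts", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  userId: integer("user_id").notNull(),               // owner of the prompt
  title: text("title").notNull(),
  content: text("content").notNull(),                 // the prompt body itself
  createdAt: integer("created_at", { mode: "timestamp" }).notNull(),
});
```

Handing a concrete file like this to the "Backend Specialist" conversation is what makes the single-responsibility split work: each chat sees only the contract it needs.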
Takeaway 2: Don’t treat AI as a single genius; treat it as a panel of domain experts.

3. Three Pitfalls Proving AI Still Can’t Replace Decisions
- Database consistency – Turso’s async nature clashed with my strong-consistency needs; switched back to Supabase. CAP trade-offs require human judgement.
- `useEffect` infinite loop – AI kept cycling. I manually audited the dependencies, kept only `user?.personalSpaceId`, fixed it, and then fed the correct pattern back to the model.
- Chrome extension permissions – `content.js` wouldn't load and `localStorage` wouldn't sync. Only after reading the docs on `host_permissions` vs `scripting` did the issue unlock.
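The `useEffect` loop is worth unpacking because it generalizes: React re-runs an effect when any dependency fails an `Object.is` comparison between renders, and a freshly constructed `user` object fails it every render, while the primitive `user?.personalSpaceId` does not. A dependency-free sketch of that comparison (`depsChanged` is a stand-in for React's internal check, not a real API):

```typescript
// Minimal sketch of why an object dependency loops while a primitive doesn't.
// React compares each dependency between renders with Object.is.

type User = { personalSpaceId: string };

function depsChanged(prev: unknown[], next: unknown[]): boolean {
  return prev.some((dep, i) => !Object.is(dep, next[i]));
}

// Each render often produces a fresh user object (e.g. from context or a fetch).
const render1: User = { personalSpaceId: "space-42" };
const render2: User = { personalSpaceId: "space-42" };

// Bad: object identity differs every render → effect re-fires forever.
console.log(depsChanged([render1], [render2])); // true

// Good: the primitive id is stable across renders → effect runs once.
console.log(depsChanged([render1.personalSpaceId], [render2.personalSpaceId])); // false
```

Hence the fix: depend on the primitive field, not the object carrying it.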
Takeaway 3: AI is great at “implementation,” weak at “decision + debugging,” especially near platform or performance limits.

4. Model Toolbox (Use the Right Tool per Task)
- Gemini 2.5 Flash – architecture decisions & weird hydration bugs.
- Claude 4.1 – UI/UX polish when CSS matters.
- Qwen3 Coder Plus – CRUD grunt work.
- Kilo + Gemini – data migration scripts, auto-parsing JSON and spitting out Python.
Mixing models beats going all-in on one. Assign them roles.

5. Closing Thoughts: Extreme Human + AI Pairing
AI played the lightning-fast intern; I:
- Specified problems precisely.
- Made judgement calls under uncertainty.
- Designed the system and owned quality.
Developer value is shifting from “typing every line” to “defining problems, crafting systems, iterating fast.” These 19 days? A rehearsal of that future.
- Try the project: https://prompt.hubtoday.app/
- Want to swap workflows? Ping me on WeChat: justlikemaki