Open-source multi-agent pipeline: 61K Python, 12 agents, 5 quality gates...
I spent the last month building an open-source (MIT) pipeline that takes a plain-language idea and runs it through 12 specialized agents (analyst, PM, architect, design critic, developer, QA, security, DevOps, marketing, and more) with 5 quality gates, a strict state machine with recovery, and an AI Director that autonomously manages the whole thing. A few hard-won lessons from the build:

- LLM failover creates consistency problems.
- State machines need to survive the model being wrong.
- Recovery fallback: if JSON parsing fails, restore from a SQLite snapshot.
- Stranded product recovery: products got stuck in `pm_quality_fail` because the model hallucinated a non-existent file path.
- Async saves with timeout guards, so a slow disk write doesn't block the pipeline.

## The Director AI feedback loop problem

The Director classifies each incoming message as `new_idea`, `product_feedback`, or `general_directive` via LLM. If it misclassifies "fix the login page" as `new_idea`, you get a duplicate product instead of a bug fix. I added an orphan-feedback heuristic: if a message mentions a product name that doesn't exist yet, route it to `new_idea`; otherwise, link it to the existing product.

## Quality gates: what I wish I'd built first

Real example: visual QA flagged a white-on-white CTA button. The model had generated `color: white` on `background: white`, assuming a dark theme that was never applied. The gate caught it and sent it back to the developer with the exact CSS selector. Fixed on the next cycle.

## Preview fidelity is pure web engineering

When AI-generated code runs in a sandboxed iframe, every web platform quirk is amplified: relative URLs break, the `<base>` tag is missing, CSP blocks inline styles, and `target="_top"` kills navigation. I had to write a dedicated URL rewriter that:

- injects a `<base>` tag pointing to the correct sandbox route,
- rewrites absolute `/` links to relative ones,
- adds permissive CSP headers,
- strips `target="_top"`.

Not AI work. But without it, the preview is broken and users blame you, not the LLM.
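The "state machines need to survive the model being wrong" point boils down to an explicit transition whitelist that the LLM cannot talk its way around. A minimal sketch — the state names here are illustrative, not the repo's actual eleven, except `pm_quality_fail` which appears above:

```python
from enum import Enum

class State(str, Enum):
    IDEA = "idea"
    PM_REVIEW = "pm_review"
    PM_QUALITY_FAIL = "pm_quality_fail"
    DEVELOPMENT = "development"
    DONE = "done"

# Explicit whitelist of legal moves; anything else is rejected,
# including transitions an LLM "decides" should happen.
VALID_TRANSITIONS = {
    State.IDEA: {State.PM_REVIEW},
    State.PM_REVIEW: {State.DEVELOPMENT, State.PM_QUALITY_FAIL},
    State.PM_QUALITY_FAIL: {State.PM_REVIEW},  # recovery path back into review
    State.DEVELOPMENT: {State.DONE},
}

def transition(current: State, target: State) -> State:
    """Move to `target` only if the whitelist allows it."""
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

Because illegal moves raise instead of silently mutating state, a hallucinated transition surfaces as a recoverable error rather than a stranded product.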
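The JSON-parse-failure fallback can be sketched like this, assuming a hypothetical `snapshots(product_id, state_json)` table — the post doesn't show the real schema:

```python
import json
import sqlite3

def load_state(raw: str, conn: sqlite3.Connection, product_id: str) -> dict:
    """Parse LLM-produced JSON; on failure, restore the last good snapshot."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to the most recent snapshot for this product.
        row = conn.execute(
            "SELECT state_json FROM snapshots WHERE product_id = ? "
            "ORDER BY rowid DESC LIMIT 1",
            (product_id,),
        ).fetchone()
        if row is None:
            raise  # nothing to fall back to; surface the original error
        return json.loads(row[0])
```

The key property is that a malformed LLM response degrades to "lose the last step," not "corrupt the product."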
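The orphan-feedback heuristic is essentially a product-name membership check that runs before (or instead of) the LLM classifier. A hedged sketch — `known_products` stands in for whatever lookup the real Director uses:

```python
from __future__ import annotations

def route_feedback(message: str, known_products: set) -> tuple:
    """Route a message: link it to a product it mentions, else treat it
    as a new idea. Returns (route, product_or_None)."""
    text = message.lower()
    for name in known_products:
        if name.lower() in text:
            return ("product_feedback", name)
    return ("new_idea", None)
```

It's deliberately dumb: a substring match can't hallucinate, so it makes a safer tie-breaker than asking the model to classify its own input.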
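The rewriter steps can be approximated in a few lines of stdlib Python. This is a sketch only: the function name and regexes are mine, the real implementation presumably handles far more edge cases, and the permissive CSP headers are set server-side rather than in the HTML:

```python
import re

def rewrite_preview_html(html: str, sandbox_route: str) -> str:
    """Make AI-generated HTML behave inside a sandboxed iframe."""
    # 1. Rewrite root-absolute links ("/x") to relative so they resolve
    #    against the sandbox route instead of the host's root.
    html = re.sub(r'(href|src)="/([^/"][^"]*)"', r'\1="\2"', html)
    # 2. Strip target="_top", which would navigate the parent page
    #    out of the iframe entirely.
    html = re.sub(r'\s+target="_top"', "", html)
    # 3. Inject a <base> tag so the now-relative URLs resolve
    #    inside the sandbox.
    html = html.replace("<head>", f'<head><base href="{sandbox_route}">', 1)
    return html
```

Order matters: the `<base>` tag is injected last so the absolute-to-relative regex doesn't mangle its own `href`.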
- 61,503 Python LOC, 22,997 TypeScript/TSX LOC
- 12 specialized agents, 5 quality gates
- 11 pipeline states, 34 valid transitions
- 6+ LLM providers with auto-failover
- 72 test files, MIT licensed

Repo: github.com/alexar76/aicom. FastAPI + Next.js + Docker Compose, self-hosted, MIT, BYO API keys.
