
Building Expo Architect: A PWA-First LLM Configurator with Expo SDK 55

DEV Community
Harish Kotra (he/him)

## Why build this

Expo Router solved a lot of app-structure complexity, but configuration still slows teams down. You still need to correctly wire permissions, splash colors, orientation, and platform-specific fields. Expo Architect compresses that setup loop into one flow: describe what you want in plain English, generate a valid Expo `app.json` grounded in the Expo docs, preview it instantly, and send it to your inbox.

The strategic idea is simple: if docs are already LLM-readable, developer experience can become conversational.

## Stack

- Expo SDK 55
- Expo Router tabs + API routes
- React Native + react-native-web (PWA-first)
- OpenAI / Anthropic
- Resend
- `use dom` for rich web preview

## Project structure

```
src/app/(tabs)    # Architect, Preview, Raw
src/app/api       # config+api.ts, send+api.ts
src/components    # VoiceButton, LivePreview, WebSidebarLayout
src/state         # shared config store
src/utils         # validation + secret helpers
```

## The Architect tab

The Architect tab supports both typed prompts and browser speech recognition (web-first). For a demo this is perfect: voice gives an immediate "AI-native" signal, while text stays reliable in noisy environments.

```tsx
// src/app/(tabs)/index.tsx
const response = await fetch('/api/config', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: prompt.trim() }),
});
const payload = (await response.json()) as GenerateConfigResponse;
setGeneration(payload.config, payload.source);
router.push('/preview');
```

## Grounding generation in the Expo docs

The core decision was to ground every generation request with https://docs.expo.dev/llms-full.txt. That keeps output anchored to current Expo conventions instead of generic React Native guesses.

```ts
// src/app/api/config+api.ts
const docs = await fetch('https://docs.expo.dev/llms-full.txt', {
  cache: 'no-store',
}).then((r) => r.text());

const systemInstruction = `
You are Expo Architect, an expert Expo SDK 55 configurator.
Return only JSON with shape { "expo": { ... } }.
Set web.output to "server" so API routes work.
`;
```

We call OpenAI first and fall back to Anthropic.
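A minimal sketch of that fallback, assuming illustrative names (`Provider`, `generateWithFallback`, and the env-var hint are mine, not the repo's actual code):

```typescript
// Hypothetical sketch of the provider fallback described above: try each
// configured provider in order, and surface a clear setup error when none
// has an API key. Names are illustrative, not the repository's.
type Provider = {
  name: string;
  enabled: boolean; // i.e. its API key is present
  generate: (prompt: string) => Promise<string>;
};

async function generateWithFallback(
  providers: Provider[],
  prompt: string
): Promise<{ source: string; output: string }> {
  const available = providers.filter((p) => p.enabled);
  if (available.length === 0) {
    // Mirrors the route's behavior: no keys configured -> explicit 500 with setup instructions.
    throw new Error('No LLM provider configured. Set OPENAI_API_KEY or ANTHROPIC_API_KEY.');
  }
  let lastError: unknown;
  for (const provider of available) {
    try {
      return { source: provider.name, output: await provider.generate(prompt) };
    } catch (err) {
      lastError = err; // remember the failure, fall through to the next provider
    }
  }
  throw lastError;
}
```

Tracking `source` alongside the output is what lets the client show which model actually answered, as the `payload.source` field above suggests.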
If neither key is present, the API returns a clear 500 with setup instructions.

## Validating untrusted output

LLM output is treated as untrusted input. Validation enforces the required shape and fills in safe defaults.

```ts
// src/utils/validate-config.ts
if (!parsed || typeof parsed !== 'object' || !parsed.expo) {
  throw new Error('Model output must include a top-level expo object.');
}

const slug = parsed.expo.slug ?? toSlug(parsed.expo.name);
const next: GeneratedAppJson = {
  expo: {
    ...parsed.expo,
    slug,
    orientation: parsed.expo.orientation ?? 'portrait',
    web: { bundler: 'metro', output: 'server', ...parsed.expo.web },
  },
};
```

This is where production readiness starts: generation quality is useful, but deterministic post-processing makes it dependable.

## Live preview with `use dom`

The Preview tab uses a `use dom` component for fast, expressive UI on web while staying inside Expo Router.

```tsx
// src/components/LivePreview.tsx
'use dom';
// ...renders {appName} and {slug} in a styled preview
```

This gives you a "design checkpoint" before you run prebuild or native builds.

## The Raw tab

The Raw tab is intentionally plain: it shows the exact generated JSON and provides a one-click copy action. Developer trust increases when people can inspect the source-of-truth output directly.

## Email delivery with Resend

The send route receives `{ to, config }`, renders a styled HTML email, and sends it through Resend.

```ts
// src/app/api/send+api.ts
await resend.emails.send({ from, to, subject, html: renderEmail(config) });
```

This makes the demo end with a concrete artifact in the inbox, which is strong for DevRel storytelling.

## Why PWA-first

For this specific goal (a hiring demo), PWA-first is faster to ship and easier to distribute: no provisioning or store pipeline, instant reviewer access via a URL, and it still demonstrates Expo architecture and platform portability. You can always follow up with iOS/Android runs to prove cross-platform continuity.
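Returning to the send route for a moment: for illustration, here is a minimal sketch of a `renderEmail`-style helper like the one that route calls. The markup and field names here are assumptions, not the repo's actual template, which is more styled.

```typescript
// Hypothetical renderEmail sketch: turn the generated config into a small
// HTML email body. Assumes the { expo: { name, slug } } shape enforced by
// the validation step described earlier.
type GeneratedConfig = { expo: { name?: string; slug?: string } };

function renderEmail(config: GeneratedConfig): string {
  const name = config.expo.name ?? 'Untitled app';
  // Escape "<" so raw JSON cannot break out of the <pre> block.
  const json = JSON.stringify(config, null, 2).replace(/</g, '&lt;');
  return [
    `<h1>${name}</h1>`,
    `<p>Your generated app.json (slug: <code>${config.expo.slug ?? 'n/a'}</code>)</p>`,
    `<pre>${json}</pre>`,
  ].join('\n');
}
```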
## What's next

- Strict JSON schema validation with richer error messaging
- Streaming progress UI ("Fetching docs", "Calling model", "Validating")
- Template presets for common app archetypes
- Config diff mode against a pasted `app.json`
- "Export to app.config.ts" support
- Telemetry dashboard for prompt quality and generation failures

## Takeaways

Expo Architect shows a practical pattern for AI + DX products:

1. Ground generation with source docs.
2. Validate aggressively.
3. Keep output transparent.
4. Close the loop with delivery.

That pattern is reusable for CLIs, starter generators, migration assistants, and docs copilots across the Expo ecosystem.

GitHub repo: https://github.com/harishkotra/expo-architect
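That ground / generate / validate / deliver loop can be sketched as a tiny, swappable pipeline. All names below are hypothetical, meant to show the shape of the pattern rather than the repo's code:

```typescript
// Hypothetical sketch of the reusable pattern: each stage is injected, so
// the same loop can back a web app, a CLI, or a docs copilot.
type Pipeline<T> = {
  ground: () => Promise<string>;                                // fetch source docs
  generate: (docs: string, prompt: string) => Promise<unknown>; // call the model
  validate: (raw: unknown) => T;                                // enforce shape, apply defaults
  deliver: (result: T) => Promise<void>;                        // email, file, stdout...
};

async function run<T>(p: Pipeline<T>, prompt: string): Promise<T> {
  const docs = await p.ground();
  const raw = await p.generate(docs, prompt);
  const result = p.validate(raw); // untrusted output never skips this step
  await p.deliver(result);
  return result;
}
```

Because every stage is a function, swapping Resend delivery for writing `app.json` to disk, or swapping the Expo docs for another `llms-full.txt`, changes one field instead of the whole flow.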