The last 10% nobody in AI design is solving
Figma is down 49% this year. Adobe is down 30%. Lovable crossed $200M ARR and raised a $330M Series B at a $6.6B valuation. AI app builder revenue hit $4.7B in 2026 and is projected to more than double by 2027. Surface reading: AI design tools won. The actual story: none of them have solved the last 10% of the work, and that's where the money lives.

Look at the last sixty days. On March 19, Google's Stitch 2.0 launched. FIG dropped 12% in two days. On March 24, Figma opened the canvas to agents with use_figma, generate_figma_design, a skills framework, and nine community skills at launch. One of them was called /sync-figma-token. They shipped the bridge everyone had been waiting for. The stock kept falling.

On April 14, Mike Krieger (Anthropic's CPO) resigned from Figma's board. On April 17, Anthropic launched Claude Design, a prompts-to-prototypes tool built on Claude Opus 4.7 that reads your codebase and Figma files to extract your design system and apply it to new work. FIG dropped another 7%.

Today is April 20. Everyone is reading this chart as "AI is eating Figma." Maybe. But if AI is eating Figma, what is AI replacing it with?

Every tool in that timeline, from Stitch to Claude Design to Lovable to v0 to Figma's bolted-on bridge, has the same shape. You describe what you want, the tool generates, the screen looks 90% right. Then the real work starts. Make the button bigger. Warm up the green. Move that 8 pixels left. No, the other direction. A little less. You see it, then you narrate it. That's the friction.

Visual work has always been visual. Figma grew because you drag. You don't describe the corner of a rectangle to your cursor and wait for the machine to move it. Prompt-to-UI reversed this. Every edit goes through English. Every adjustment is a round trip through words. It feels like progress, but structurally it's a regression.

The market isn't pricing "Figma doesn't have agents." Figma has agents now.
The market is pricing whether any of the current tools actually solve the part that eats hours. None of them do. They all generate, and none of them adjust.

So I tested the alternative myself. Not in some noble "the future is now" sense: I sat down at my desk, opened a blank Paper file I'd named "Happy castle" for some reason I don't remember, picked a tight brief of one upgrade card and one button, and walked it end to end through Paper's MCP.

Paper is a design canvas that was built to be read and written as structured data from the beginning. Every node, every style, every layer is queryable from the outside. You don't need a Dev Mode seat. You don't need a plugin. You select a frame, and the agent reads the exact computed background color, every padding value, every border radius. A Paper file isn't an image or even a design. It's a queryable object. I can ask it for the JSX of a node. I can ask it for every layer's typography in one batch. When I change the fill of a button, the new value is available a few milliseconds later to anything that knows where to look.

Here is what I built, in order:

1. A Paper design. Bone background, moss accent, serif price, sans-serif body. One card primitive, one button primitive, both sitting on a canvas as editable nodes.

2. A React + Tailwind project. src/tokens.ts pulls every color, font, radius, and shadow into one object. tailwind.config.ts imports it. Card.tsx and Button.tsx consume tokens through Tailwind classes like bg-surface, rounded-card, shadow-card. The screen composes them.

3. A .paper-sync/snapshot.json file. It maps every token and every primitive style field back to a specific Paper node ID and property. Each field declares whether it's bound to a token or a local one-off.

4. A /paper-sync skill. It re-reads every tracked node, diffs against the snapshot, updates the minimum amount of code needed to mirror the change, and writes a new baseline.

Total build time: one session. Total lines of new code, not counting config: under 300.
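To make the moving parts concrete, here is a minimal sketch of how those pieces could fit together. Only the token names and the two accent hex values come from this post; the flattened tokens record, the snapshot schema, the bone-background hex, and every identifier (paperSync, Binding, the read/write callbacks standing in for MCP calls) are hypothetical, not the actual bundle.

```typescript
// A sketch of the /paper-sync loop under assumed schemas.
type Binding = {
  nodeId: string;    // which Paper node this code field mirrors
  property: string;  // e.g. "fill", "borderRadius"
  token?: string;    // dot-path into tokens.ts, or undefined for a local one-off
  baseline: string;  // the value at the last sync
};

// src/tokens.ts, flattened to one record for brevity.
const tokens: Record<string, string> = {
  "colors.surface": "#EDE8DC", // bone background (hex is a stand-in)
  "colors.accent": "#4A6B36",  // the moss accent edited in Paper
};

// .paper-sync/snapshot.json: every tracked field with its binding.
const snapshot: Binding[] = [
  { nodeId: "desktop/button", property: "fill", token: "colors.accent", baseline: "#4A6B36" },
  { nodeId: "mobile/button", property: "fill", token: "colors.accent", baseline: "#4A6B36" },
  { nodeId: "desktop/card", property: "fill", token: "colors.surface", baseline: "#EDE8DC" },
];

// Re-read every tracked node, diff against the snapshot, update the
// token once, push the new value to every mirror node, and record the
// new baseline. Returns a plain-English change log.
function paperSync(
  snap: Binding[],
  readNode: (nodeId: string, property: string) => string,
  writeNode: (nodeId: string, property: string, value: string) => void,
): string[] {
  const log: string[] = [];
  for (const b of snap) {
    const current = readNode(b.nodeId, b.property);
    if (current === b.baseline) continue;
    if (b.token) {
      log.push(`${b.token} from ${tokens[b.token]} to ${current}`);
      tokens[b.token] = current; // one edit lands in tokens.ts, not one per component
      for (const mirror of snap) {
        if (mirror.token !== b.token) continue;
        writeNode(mirror.nodeId, mirror.property, current); // so artboards never drift
        mirror.baseline = current;
      }
    } else {
      b.baseline = current; // local one-off: update only this field
    }
  }
  return log;
}

// The edit from the post: the desktop button repainted to a brighter moss.
const live: Record<string, string> = {
  "desktop/button/fill": "#5D8B3F",
  "mobile/button/fill": "#4A6B36", // untouched, but bound to the same token
  "desktop/card/fill": "#EDE8DC",
};
const log = paperSync(
  snapshot,
  (id, p) => live[`${id}/${p}`],
  (id, p, v) => { live[`${id}/${p}`] = v; },
);
// one log line ("colors.accent from #4A6B36 to #5D8B3F"), and the
// mobile button now carries the new accent too
```

The design choice worth noting: the diff is against the baseline, not against the code, which is why one canvas edit produces exactly one token update plus a fan-out to mirrors instead of three separate "changes."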
The bundle is free on my Patreon if you want to drop it into your own project: https://www.patreon.com/posts/new-skill-paper-156143156. It's the skill file, an install walkthrough, a snapshot template, and the example project as reference.

Now if I want the accent warmer, I open Paper, click the button, pick a color. I run /paper-sync. Claude sees the diff, updates colors.accent in tokens.ts, tells me in plain English what moved ("accent from #4A6B36 to #5D8B3F, a brighter moss"), and rewrites the snapshot. I didn't describe the color. I picked it.

Watch the sync happening (60s Loom): https://www.loom.com/share/f19bbead049d4b52827664f4811f07a1

Three honest comparisons, since you're probably asking.

Paper versus Figma, as of April 20, 2026. Figma now has agent writes through use_figma, but they're bolted onto a canvas that was designed for a human designer first. The Skills framework assumes you're working inside Figma's component model. You need a Dev or Full seat plus a Claude Pro or Max plan. The MCP server runs locally and authenticates through the desktop app. It works. There's just a lot of surface area between you and the design file.

Paper is flatter. The design file IS the data surface. There's no translation into proprietary component conventions. get_computed_styles returns CSS-shaped values. write_html takes real HTML with inline styles and turns it into design nodes. get_jsx returns code-ready JSX. The semantics match how you'd talk about a web UI if you were writing one from scratch, because that's what Paper is: a visual editor whose primitives are the same primitives a web developer already uses. For a solo operator or a small team, that shape is the difference between "I hack this in an afternoon" and "I set up Dev Mode, map our components, wire Claude to the desktop app, then start."

Paper versus Claude Design, as of April 20, 2026. Claude Design launched three days ago, and it's the strongest of the prompt-first tools.
It has a canvas alongside the chat, not just a conversation. For refinements, Claude generates purpose-built sliders for each element (color, spacing, layout) that you drag to adjust. You can click any part of the canvas and drop an inline comment requesting a targeted change. The code connection is a Claude Code handoff bundle, a structured export containing the design spec, extracted brand tokens, and component structure, which Claude Code turns into production React in about four minutes.

Three limits. First, you can't grab elements on the canvas and drag them yet. Direct manipulation is on Anthropic's roadmap, roughly six months out. Second, the code connection is one-shot. You export the bundle, Claude Code generates, and that's it. If the design changes later, you export again. Third, Claude Design runs on a weekly allowance that sits on top of a Claude Pro, Max, Team, or Enterprise subscription. Every slider drag, every inline comment, every prompt is a Claude interaction that counts. Heavy visual iteration eats through the allowance, and extra usage is a purchase on top.

Paper and /paper-sync go the other way. You select elements directly on the canvas, change colors with a picker, resize with handles, drag where you want. The sync is ongoing, not a one-time export. Edit the color once in Paper's desktop artboard, run /paper-sync, and the token file updates. The same run walks the mobile artboard and forces every mirror of that token to match, so desktop and mobile never drift. And the canvas work itself doesn't touch your Claude quota. Picking a color in Paper is just you moving the cursor. Only the periodic /paper-sync calls are Claude interactions, and those are minutes apart, not seconds. Paper Pro is $20 a month for a million MCP calls a week, which is effectively unlimited for a solo operator.

Paper versus just using Claude Code alone. You could skip the canvas entirely.
Ask Claude Code to build the project, iterate in the terminal, look at the preview, describe changes in the chat. That's the default workflow most people use today.

Try it on a nuanced visual. The kind where you say "the card's a touch too loud, warm the shadow, tighten the radius by a hair." You end up writing four sentences to describe one drag. Or you take a screenshot, annotate it, paste it back, and hope the agent sees what you see. The feedback loop is slow because the input is text and the output is text-describing-visuals. The canvas is missing from the middle.

Paper is that missing canvas. You see the design while you edit it. The agent sees the exact computed styles of what you're pointing at. The round trip between intent and implementation gets shorter in both directions: you adjust visually, Claude syncs structurally. Neither of you is guessing what the other means.

Everyone thinks the problem with AI design tools is that they're not good enough at generating. The real problem is what happens after generating. Generation is a one-shot event. Iteration is what consumes the hours. Prompts can't adjust. They can only regenerate, and regenerating is lossy. The companies priced for AI-design-tool dominance won't be the ones with the best prompt interface. They will be the ones that figured out what happens in the twenty minutes between "looks 90% right" and "ready to ship." That gap is the market.

Look at the bodies on the road. CodeParrot tried to solve this from the Figma side and shut down in July 2025. The YC-accepted pitch was Figma designs converted to frontend code using AI. The generated code wasn't reliable enough for production; teams kept having to fix the output by hand. Builder.ai, once valued at $1.2 billion, filed for bankruptcy in May 2025 after promising anyone could build an app without writing code through its assistant Natasha. Series A-stage shutdowns jumped from 6% to 14% of all closures in 2025, more than doubling year over year.
The pattern is the same: GPT plus a prompt plus a nice UI. No moat on the adjustment loop. The moat is the adjustment loop.

I'm not going to predict who wins this race. Figma is moving, Claude Design is moving, others will. A GitHub repo called lifesized/figma-design-sync already does token-bound design-code sync on Figma's side. The shape is converging from multiple directions, which tells you it's real.

What I will say: the winner won't be whoever shipped the MCP first or built the best prompt. It'll be whoever treats the design file as native structured data, not as a canvas with an agent layer on top or a chat with design-system awareness. That's a different architecture question, and the incumbents carry the bigger backlog.

Paper doesn't have the users Figma has. It doesn't have the enterprise contracts. It has something else: a design file that was built to talk to agents from day one. That shape is cheaper to build a sync loop on top of than retrofitting years of proprietary canvas format or wrapping the generation step in a closed product. The right shape is cheap to build once you see it. I built a version of it in an afternoon.
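If "design file as native structured data" sounds abstract, here is the shape of the three Paper calls named earlier, sketched as TypeScript signatures with a toy in-memory stand-in. Only the call names (get_computed_styles, write_html, get_jsx) come from this post; every schema detail and the ToyFile behavior are assumptions, not Paper's actual API.

```typescript
// The three MCP calls, as an interface. Schemas are guesses at what
// "CSS-shaped values" and "code-ready JSX" might look like.
type ComputedStyles = {
  background: string;   // exact computed fill, e.g. "#4A6B36"
  padding: string;      // e.g. "12px 20px"
  borderRadius: string; // e.g. "12px"
};

interface DesignFile {
  getComputedStyles(nodeId: string): ComputedStyles; // read what you're pointing at
  writeHtml(html: string, parentId: string): string; // HTML in, editable node out
  getJsx(nodeId: string): string;                    // node in, code-ready JSX out
}

// A toy in-memory stand-in, just to show the round trip. Real Paper
// parses the HTML; the toy records one hypothetical node.
class ToyFile implements DesignFile {
  private nodes = new Map<string, ComputedStyles>();
  private nextId = 1;

  getComputedStyles(nodeId: string): ComputedStyles {
    const node = this.nodes.get(nodeId);
    if (!node) throw new Error(`no node ${nodeId}`);
    return node;
  }

  writeHtml(_html: string, _parentId: string): string {
    const id = `node-${this.nextId++}`;
    this.nodes.set(id, { background: "#4A6B36", padding: "12px 20px", borderRadius: "12px" });
    return id;
  }

  getJsx(nodeId: string): string {
    const s = this.getComputedStyles(nodeId);
    return `<button style={{ background: "${s.background}", borderRadius: "${s.borderRadius}" }} />`;
  }
}

// HTML goes in as a node, exact styles come back out, JSX on request.
const file = new ToyFile();
const buttonId = file.writeHtml('<button style="background:#4A6B36">Upgrade</button>', "root");
const styles = file.getComputedStyles(buttonId);
const jsx = file.getJsx(buttonId);
```

The point of the sketch is the symmetry: reads and writes share the same web-native vocabulary, which is what makes a sync loop cheap to build on top.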
