Inside ShopPilot’s AI Content Engine

DEV Community
Sonny Levine

Every time we showed a Shopify merchant a generic AI-generated post, they spotted it instantly. Not because AI can't write — it can. Because it doesn't know them. The post would be grammatically perfect, energetic, on-trend. And it would sound like everyone else. The merchant knew it. Their customers would know it. And the post would die quietly in a feed full of posts that sounded exactly the same.

That's the problem we set out to fix when we built ShopPilot's content engine. This post explains how it actually works.

The first version of our content generator worked the way most AI content tools work: take a product description, feed it to the model, get a post back. It was fast. It was coherent. And it was useless.

We ran a simple test: we generated 10 posts for 5 different brands — a candle shop, a fitness coach, a jewelry maker, a sustainable clothing brand, and a coffee roaster. Then we shuffled them. Three of the five merchants matched a post to the wrong brand within 30 seconds. The posts were good. They were just interchangeable.

The mistake was treating the generation problem as a writing problem. It's not. It's a context problem. The model has to know who this brand sounds like, who they're talking to, and what they're actually trying to say — before it generates the first word.

The fix was building a brand context layer that loads before every generation call. When a merchant sets up ShopPilot, they configure four things:

- Voice: Playful, Premium, No-Nonsense, Bold, or Friendly — not just a tone label, but a style pattern the model is instructed to match
- Audience: who they're writing for (age range, intent, familiarity with the brand)
- Product context: what they sell, what makes it different, what claims they can and can't make
- Platform: Twitter/X, Instagram, Facebook — each platform gets a different length constraint, hashtag density, and CTA pattern

These aren't prompts bolted onto the front of a generation call.
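To make that configuration concrete, here is a minimal sketch of what such a brand profile could look like as a data structure. The class, field names, and example values are our own assumptions for illustration, not ShopPilot's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BrandProfile:
    """Hypothetical brand context loaded before every generation call."""
    brand_name: str
    voice: str                # one of the five voice types, e.g. "Premium"
    voice_description: str    # the style pattern the model is told to match
    audience: str             # who the brand is writing for
    product_context: str      # what they sell and what makes it different
    rules: list[str] = field(default_factory=list)  # hard house-rule constraints

# Illustrative profile for one of the example brands mentioned in this post;
# the specific values here are invented.
luminary = BrandProfile(
    brand_name="Luminary Gems",
    voice="Premium",
    voice_description="Understated and confident; no exclamation points",
    audience="Repeat buyers, 30-55, already familiar with the brand",
    product_context="Handmade jewelry, positioned on craft rather than price",
    rules=["Never use the word 'luxury'", "Never say 'cheap'"],
)
```

A structure like this is what would get serialized into the front of every request, rather than re-typed as ad-hoc prompt text.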
They're encoded into a brand profile that's embedded into every request, so the model is reasoning about voice before it's reasoning about content. The result is that when you generate a post for Luminary Gems on Instagram vs. FitLife Coach on Twitter, you get two posts that sound like they came from two completely different brands — because the context that precedes the generation is completely different.

Here's what the system prompt structure looks like in simplified form:

```
You are writing social content for [brand_name].
Voice: [voice_type] — [voice_description]
Audience: [audience_description]
What they sell: [product_context]
Platform: [platform] — [platform_constraints]
Rules: [brand_rules]

Write a [platform]-native post for: [user_prompt]
```

The brand rules field is where things get interesting. A premium candle brand can't say "cheap." A fitness coach can't make specific weight-loss claims. A jewelry shop might have a house rule about never using the word "luxury" because it reads as try-hard to their audience. These rules are applied as hard constraints in the system prompt, not soft suggestions. The model is instructed to treat a rule violation as a generation failure.

Every post that comes out of ShopPilot's engine gets a confidence score before it reaches the merchant. This is probably the decision we've gotten the most questions about — most AI tools hide their uncertainty. We surface it explicitly. Here's why.

The score is a composite of three signals:

Off-brand risk: Does the post use language, claims, or tone that conflicts with the brand profile? A post for a "No-Nonsense" brand that starts with three exclamation points is off-brand, even if it reads well in isolation. The model scores its own output against the brand constraints and flags deviations.

Factual claim density: Posts with specific numbers ("increase sales by 40%"), health-adjacent claims ("heals dry skin"), or superlatives ("the best coffee in the city") carry higher risk.
We flag these because they're the claims most likely to get a merchant in trouble — either legally or just with their audience.

Platform fit: A 280-character post with three hashtags fits Twitter. A post with no emojis and a 400-word caption doesn't fit Instagram. Platform fit scores how well the structural format of the generated post matches the expected conventions of the target platform.

A post that hits 85%+ confidence on all three dimensions gets a green flag. Below 70%, it gets a yellow — it's not wrong, but there's something worth reviewing. Below 55%, we don't surface it at all and regenerate automatically.

We show the score because merchants who can see why a post was flagged edit it better than merchants who are just told "this needs work." A yellow flag with "off-brand risk: post uses casual language inconsistent with Premium voice profile" is actionable. A vague "review before posting" isn't.

We could have built full autopilot — generate, schedule, post, done. Tools like Buffer and Hootsuite give you scheduling. Some newer tools now offer auto-posting. We deliberately didn't go that route for the first version, and the reason is data.

Merchants who approve posts before they go live catch about 1 in 8 posts that they'd have wanted to edit. That's a 12.5% error rate on content that the model thought was high-confidence. In the early months of a brand building its social presence, a 12.5% error rate in public is noticeable. You can't un-post a product caption that accidentally made a claim your product doesn't deliver on.

The approval workflow in ShopPilot is designed to be low-friction. Posts are queued in a content calendar. One click approves. One click regenerates with feedback. The merchant sees the confidence score, the platform preview, and the generated content side by side. Average review time for a high-confidence post is under 10 seconds.

Full autopilot is on the roadmap — but gated behind 30 days of approved posts for a brand.
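That gate can be expressed as a simple eligibility check. Here is a sketch using the figures from this post (30 days of approved posts, a sub-3% correction rate); the function name, signature, and the exact combination of the two conditions are our assumptions:

```python
def autopilot_eligible(days_of_approved_posts: int, correction_rate: float) -> bool:
    """Hypothetical check for unlocking full autopilot for a brand.

    days_of_approved_posts: days the merchant has been reviewing and
        approving queued posts.
    correction_rate: fraction of recent high-confidence posts the
        merchant still chose to edit before approving.
    """
    return days_of_approved_posts >= 30 and correction_rate < 0.03
```

Under this sketch, a brand that is past the 30-day window but still editing 1 in 8 posts stays in review mode; both conditions have to hold before the human leaves the loop.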
If a merchant has approved 120 posts and the model has learned their correction patterns, the error rate drops below 3%. At that point, autopilot makes sense. Before that, we think requiring a human to stay in the loop is the right call — not because we don't trust the model, but because the model needs those 30 days of feedback to actually know the brand.

Three things on the near-term roadmap:

- Multilingual generation: A significant portion of our merchants sell across markets where English isn't the primary customer language. We're adding Spanish, French, and Portuguese as first-class voice targets — not translations of English posts, but brand-native generation in each language.
- A/B content variants: Instead of generating one post, generate three — same brief, different angles. The merchant picks the one that matches their read on the moment. Over time, we track which variants convert better and weight the model toward those patterns for that brand.
- Image generation: The words are working. The missing piece is pairing AI-generated copy with AI-generated product visuals that match the brand aesthetic. We're evaluating image model options for this — the bar is high because brand-consistent imagery is significantly harder than brand-consistent text.

The easiest way to understand what the content engine actually produces is to use it. The sample generator on our demo page is live and runs the real model — same brand-first prompting, same confidence scoring, no account required. Put in your store name, what you sell, pick a voice, and see what comes out.

If it sounds like your brand, sign up free. No credit card. Your first content calendar in under 5 minutes.