Unicorn Club

🦄 Staging said “yes”. User tickets said “no”...

February 4, 2026

Ship Better Interfaces

Build interfaces that stay clear when real users and real constraints show up.

Make better calls, faster, with curated reads distilled into the parts you actually need: the takeaway, why it matters, and what to adopt each week.

Less churn. Stronger shipping. No filler.

unicornclub.dev

Hey again 👋

Design review is getting weird again. We’re arguing about labels and “process”, while staging looks green and support tickets tell a different story.

This week is about making work legible: collapse a two-step choice into one set of radio buttons, write a one-page “direction brief” for agent-built tasks, and add a quick incentive check before you turn a metric into a target.

Dig in! 🦄 - Adam at Unicorn Club.

Sponsored by 20i

Peak Performance WordPress Hosting, No Compromises


Leave single-server hosting in 2015. Choose autoscaling Managed WordPress Hosting built for traffic surges, complex sites & demanding PHP workloads - without missing a beat.

Try 20i® now →

🏗️ Build

Make better interfaces.


If you’re struggling to write the content, you probably have an interaction problem

This bites when a checkout design review gets stuck on radio button labels for delivery versus collection: put all options in one radio group and the content suddenly writes itself.

  • Why it matters: The trap is polishing labels to paper over a clunky step, which bloats copy and still confuses people, so redesign the radio group to remove the extra decision.

  • Try this: In your checkout flow this week, collapse a two-step choice into one radio group.
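
If it helps to see the collapse concretely, here is a minimal sketch assuming a React checkout form; the option names, labels, and the FulfilmentChoice component are illustrative, not from the article:

    // Minimal sketch (assumed React/TSX): one radio group that answers
    // "how do you want to get your order?" as a single decision,
    // instead of a "delivery vs collection" step followed by a second choice.
    import * as React from "react";

    type Fulfilment = "home_delivery" | "collect_store" | "collect_locker";

    const OPTIONS: Array<{ value: Fulfilment; label: string }> = [
      { value: "home_delivery", label: "Deliver to my address" },
      { value: "collect_store", label: "Collect from a store" },
      { value: "collect_locker", label: "Collect from a locker" },
    ];

    export function FulfilmentChoice(props: {
      selected: Fulfilment;
      onSelect: (next: Fulfilment) => void;
    }) {
      return (
        <fieldset>
          <legend>How do you want to get your order?</legend>
          {OPTIONS.map((opt) => (
            <label key={opt.value}>
              <input
                type="radio"
                name="fulfilment"
                checked={props.selected === opt.value}
                onChange={() => props.onSelect(opt.value)}
              />
              {opt.label}
            </label>
          ))}
        </fieldset>
      );
    }

Once every outcome sits in one group, the labels can describe concrete results (“Deliver to my address”) rather than abstract steps, which is usually where the copy stops fighting you.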

🧩 Shape

Shared foundations across teams.


What the vibe engineering workflow tells us about the future of UX roles

‘Vibe engineering’ only stays safe if you break work into tiny tasks and write down constraints for errors and empty lists.

  • Why it matters: What catches teams out is treating a coding agent like a magic sprint, which leaves edge cases and navigation states unhandled, so write a task breakdown and review plan first.

  • Adopt this week: For one feature, write a one-page spec with constraints, edge states, and verification steps.
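
If you want somewhere to start, here is a bare-bones shape for that one-pager; the headings are a suggestion, not from the talk:

    Goal (one sentence):
    Constraints (performance, accessibility, browsers/devices):
    Edge states (empty, loading, error, long content):
    Out of scope:
    Verification (tests to run, manual checks, who reviews):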


AI Coding Summit (Online, Feb 26–27, 2026)

Online event (4 PM CET) covering AI-assisted software development and AI engineering, with talks and workshops on agentic workflows, code review, refactoring, and AI-assisted testing and QA.

Promo code: UNICORN (10% off tickets for AI Coding Summit).


The creator of Clawd: "I ship code I don't read"

When code lands this fast, what changes in “reviews” is that you start reviewing the prompt and the test run that covers loading and error states, not just the diff.

  • Why it matters: Teams often merge agent-written changes like normal commits, then pay with a verification bottleneck and brittle releases, so capture prompts and test evidence alongside the diff.

  • Adopt this week: Add a “Prompt + verification” block to your pull request template, and require prompt text plus the test command used before merge.

    Prompt used:
    What changed:
    How verified (tests/commands):
    Risk / rollback note:

P.S. This week’s sponsor is 20i 

WordPress hosting built to stay fast through traffic spikes.

Try it for $1 →

🚀 Ship

Release, measure, iterate.


How Product Discovery changes with AI

Worth it for reframing deployment approval: treat a production prototype of an onboarding screen as research, because desirability still needs humans even if feasibility is cheap.

  • Why it matters: Without a production prototype, teams trust staging feedback and pay with late pivots and support tickets, so validate desirability with real usage early.

  • First step: Ship one screen behind a flag in your release config. If activation stays flat or tickets rise over 7 days, turn the flag off and revert.

    Shipped:
    Learned:
    Risk / regression watch:
    Indicator (metric/signal):
    Decision ask: (not for scoring) Yes/No on ___
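
For the flag itself, here is a minimal sketch in TypeScript, assuming flags live in a release-config object you already deploy; the onboarding_v2 name and isEnabled helper are illustrative:

    // Illustrative release config: in practice this might be a JSON file or a flag service.
    const releaseConfig = {
      flags: {
        onboarding_v2: false, // flip to true to ship the prototype screen
      },
    };

    type FlagName = keyof typeof releaseConfig.flags;

    export function isEnabled(flag: FlagName): boolean {
      // Missing or false means "off", so reverting is flipping one value and redeploying.
      return releaseConfig.flags[flag] === true;
    }

    // Call site: gate the prototype, keep the current screen as the fallback.
    const onboardingScreen = isEnabled("onboarding_v2")
      ? "OnboardingV2"
      : "OnboardingCurrent";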

The Cobra Effect: When Good Incentives Go Bad

Goodhart’s Law is the warning to keep in mind at weekly dashboard review: once a signup-form conversion target becomes the goal, teams start optimising the interface for the metric, not for comprehension.

  • Why it matters: Metrics turn toxic when they become targets, costing you warped behaviour and worse interfaces, so pressure-test second-order consequences before you tie bonuses, roadmaps, or praise to a number.

  • Adopt this week: For one metric, add a second-order section to its analytics doc and link it in the experiment brief for dashboard review.

    What will people sacrifice to hit this metric?
    If they sacrifice that, what happens next?

Support the newsletter

If this was useful, here are two small ways to help it travel:

🚀  Forward to one person who builds product or ships UI

📢  Book a Sponsorship


Thanks for reading

Adam from Unicorn Club

Follow me on X or BlueSky

Connect on LinkedIn


© 2026 Unicorn Club

Curated by Adam Marsden