Fake Door Testing: How to Measure Demand Without a Product
TL;DR
Fake door testing displays real pricing or features before a product exists, then measures whether buyers take action. The click itself, down to which pricing tier or which feature, is the validation signal. Buffer showed real pricing tiers before writing any code. Superhuman collected $299 upfront payments before launch. You don't need to go that far: tracking pricing tier clicks is enough to measure willingness to pay.
Why Fake Door Testing Works
Surveys ask people what they would do. Fake door tests observe what they actually do.
The gap between survey intent and real behavior is well-documented. When asked “would you pay $9/month for this?”, most people say yes; it’s easier than saying no to a founder who’s excited about their own idea. When asked to actually click “Get Started” on a pricing page, far fewer do. That gap is the data.
Fake door testing removes the courtesy bias from validation. The person who clicks your $29/mo Pro tier is making a behavioral statement: at this price, I’d pay. That’s more reliable than any survey answer.
Real Examples
Buffer (2010)
Joel Gascoigne described the original Buffer fake door test in his 2011 blog post. He built a landing page describing Buffer’s concept, drove some Twitter traffic to it, and measured interest via email signups. Then he added a pricing page with two tiers at real prices, before writing any backend. When people clicked a tier and entered their email, he emailed them directly to say the product wasn’t ready.
After enough clicks confirmed demand at real prices, he built the product. The fake door test ran for a few weeks. The product took months to build. He built what people wanted, not what he guessed they’d want.
Superhuman (2017)
Superhuman collected $299 upfront from a waitlist of email power users before the product launched publicly. Each person who paid was added to an onboarding queue and interviewed before getting access. Unlike a typical fake door, which shows an interest form, the upfront charge was itself the fake door: it filtered casual interest from genuine buyers, funded early development, and produced the strongest possible signal of purchase intent rather than curiosity. It also gave Superhuman direct relationships with its most valuable early customers.
Robinhood (2013)
Robinhood launched a waitlist page before building anything. Their hook: commission-free stock trading. The waitlist gamified position (“invite friends to move up the list”). The founders publicly reported that the waitlist reached 1 million people before launch. Each signup was a fake door: someone expressing intent before anything existed. That list became their launch audience.
Setting Up a Fake Door in Practice
You need four components:
A pricing section on your landing page. Display real tiers with real prices. Each tier should have a name, a price, 3-5 bullet points of features, and a CTA button. The CTA text matters: “Get Started” and “Start Free Trial” outperform “Join Waitlist” because they feel more like real product CTAs.
Click tracking. When a visitor clicks a CTA, write a record to your database before redirecting: which tier, which page, timestamp, and optionally the session ID to correlate with later email capture. Don’t rely on Google Analytics for this — analytics tools aggregate and sample. You want a raw record of every click.
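The raw-record requirement can be sketched in a few lines. Everything here is illustrative: the `TierClick` shape, the `recordClick` name, and the in-memory array are assumptions standing in for whatever database insert you actually use. The point is one unsampled row per click, written before the redirect fires.

```typescript
// Hypothetical sketch of raw click recording; swap the array for a
// database insert in a real handler.
type TierClick = {
  tier: string;        // e.g. "pro"
  page: string;        // landing page path the click came from
  sessionId?: string;  // correlate with later email capture
  timestamp: number;
};

const clicks: TierClick[] = [];

function recordClick(tier: string, page: string, sessionId?: string): TierClick {
  const click: TierClick = { tier, page, sessionId, timestamp: Date.now() };
  clicks.push(click); // one raw record per click: no sampling, no aggregation
  return click;
}
```

Run this before issuing the redirect, so a visitor who bounces off the waitlist page still leaves a record of which tier they wanted.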
A waitlist redirect page. After recording the click, redirect to a page that acknowledges what they did and explains the situation. Make the redirect feel intentional, not broken. “You’re interested in the Pro plan — here’s where things stand” converts better than a generic “Coming Soon” page.
Email capture on the redirect. Offer a specific benefit for leaving their email: early access, a discount, a free trial extension. Generic “stay informed” copy converts poorly. Specific benefits convert at 2-3x the rate.
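If you log both tier clicks and email signups with a shared session ID, the click-to-email conversion rate used below falls out of a set intersection. A minimal sketch, assuming simple session-ID lists (the function name and inputs are illustrative, not a prescribed API):

```typescript
// Sketch: click-to-email conversion, assuming each tier click and each
// email signup was logged with the visitor's session ID.
function clickToEmailRate(clickSessions: string[], emailSessions: string[]): number {
  if (clickSessions.length === 0) return 0;
  const captured = new Set(emailSessions);
  const converted = clickSessions.filter((s) => captured.has(s)).length;
  return converted / clickSessions.length;
}

// 2 of 5 clickers left an email: 0.4, right at the "working" threshold
clickToEmailRate(["a", "b", "c", "d", "e"], ["b", "d", "z"]); // 0.4
```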
import InlineSignup from '@validation/ui/components/inline-signup.astro';
Interpreting Your Results
After 30-60 days, you have click data. Here’s how to read it:
| Pattern | What it means |
|---|---|
| No clicks on any tier | Pricing is too high, or the copy before pricing didn’t convince |
| Clicks on all tiers roughly equal | Buyers aren’t differentiating on features — pricing may be too close together |
| Clicks heavily concentrated on one tier | That’s your market’s preferred price point and feature set |
| High click-to-email conversion (>40%) | Waitlist page is working; the offer is compelling |
| Low click-to-email conversion (<15%) | Waitlist page isn’t working; the benefit isn’t specific enough |
The most actionable output is which tier attracted the most clicks. If 70% of clicks went to your mid-tier ($29/mo Pro), your market is telling you that’s the right price point. Build the product anchored around that tier.
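The "concentration on one tier" pattern can be checked mechanically. A hedged sketch, assuming your click records have been tallied into a tier-to-count map (names and the 50% threshold are my assumptions, tune to taste):

```typescript
// Sketch: find the tier holding a majority of clicks, if any.
// Returns null when no tier crosses the threshold, i.e. buyers
// aren't differentiating and the tiers may be priced too close.
function dominantTier(counts: Record<string, number>, threshold = 0.5): string | null {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  if (total === 0) return null;
  for (const [tier, n] of Object.entries(counts)) {
    if (n / total >= threshold) return tier;
  }
  return null;
}

// 70% of clicks on the $29/mo Pro tier: anchor the product there
dominantTier({ starter: 20, pro: 70, team: 10 }); // "pro"
```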
Kill Criteria for Fake Door Tests
Set these before you run the experiment:
Kill the idea if:
- Fewer than 1% of landing page visitors click any pricing tier after 100+ visitors
- Zero tier clicks after 30 days
Continue building if:
- 3-10% of landing page visitors click a pricing tier
- At least one tier has a clear concentration of clicks
Accelerate:
- Above 10% pricing click rate
- Multiple visitors follow up asking when they can use the product
import DefinitionBlock from '@validation/ui/seo/definition-block.astro';
import AnswerBlock from '@validation/ui/seo/answer-block.astro';
Q&A
What is a fake door test?
A fake door test presents a real call-to-action — a pricing page, a feature, a signup button — before the product or feature exists. When a visitor clicks, their action is recorded and they're redirected to a waitlist or 'coming soon' page. The click is the validation data: it reveals that someone wanted that thing enough to act on it, without you having to build it first.
Is fake door testing ethical?
Yes, with one condition: you must be transparent. The redirect page should clearly explain that the product isn't available yet and that they've joined a waitlist. What's not okay is taking payment for something you haven't built with no intention of building it. Collecting intent (clicks, email signups, waitlist entries) without taking money is standard practice across the industry.
How do you interpret fake door testing results?
High click rate + low email conversion: the CTA copy is working but the waitlist page isn't. Low click rate overall: pricing is too high, or the copy before the pricing section didn't convince. Clicks concentrated on one tier: that's your market's preferred price point. No clicks on any tier after 100+ visitors: the fundamental value proposition isn't landing.
What did Buffer do with fake door testing?
Joel Gascoigne launched Buffer's pricing page before writing any backend code. He displayed pricing tiers with real prices and drove traffic to the page; when visitors clicked a tier, they were shown a 'Thanks for your interest — we're not ready yet' message. After enough clicks confirmed willingness to pay at real prices, he built the product. Gascoigne described the experiment in his 2011 blog post.
Want to learn more?
Should I take payment in a fake door test?
How many pricing tier clicks do I need to validate demand?
What should my waitlist redirect page say?
Can I run a fake door test on a feature inside an existing product?
Keep reading
6 Best Idea Validation Tools for Solopreneurs
We compared 6 idea validation tools on speed to signal, cost, and how well they measure real demand — not just interest.
Best v0.dev Alternative for Validating SaaS Ideas
v0.dev generates beautiful UI but has no pSEO, no idea validation engine, and no deployment story. Validea is built for founders who need organic traffic and conversion signals, not just a good-looking component.
Landing Page Conversion Benchmarks for Idea Validation
What conversion rates to expect from a validation landing page. Covers cold vs warm traffic, email capture optimization, and when to kill vs continue.