Most failed startups didn't fail because of bad code. They failed because they built something nobody wanted.
Validation is the process of testing whether a problem is real, whether people will pay to solve it, and whether you're the right person to solve it — before you invest months building a product. Done right, validation takes 2–6 weeks and saves you from 6–18 months of wasted effort.
Here's the framework we use at IdeaRunway, and the one that underlies our AI scoring pipeline.
Vague ideas fail. "Better project management" is not a business — "project management specifically for construction subcontractors who need to track daily progress logs and weather delays for insurance claims" is.
Your problem statement should answer:
If you can't answer these questions specifically, you don't understand the problem well enough yet.
This is the step everyone skips. Don't.
You don't need a survey. You need 10 real conversations with people who have the problem you think you're solving. Ask:
What you're looking for: Do people describe the problem without you prompting them? Do they light up when you describe a solution? Do they ask "when can I use it?"
If 7 out of 10 people shrug and say it's "not really a problem," that's signal. If 8 out of 10 say "oh god, I've been trying to fix this for years," that's also signal — the right kind.
People often skip this because they're afraid of finding competitors. Don't be. Competition means there's demand.
How to audit:
What good competition looks like: Expensive tools with bad UX, enterprise software ignoring SMBs, outdated incumbents with no recent updates, tools that solve 70% of the problem but not the core job.
What bad competition looks like: Well-funded startups with strong product-market fit and a loyal user base. You can still compete, but you need a sharper wedge.
Before building, quantify the opportunity. IdeaRunway's AI does this automatically across 5 stages, but you can do a basic version manually:
| Signal | Score (1–10) |
|--------|--------------|
| Problem frequency (daily = 10, monthly = 3) | |
| Willingness to pay (enterprise = 10, consumer = 3) | |
| Market size (large = 10, niche = 4) | |
| Competition level (blue ocean = 10, saturated = 2) | |
| Your unfair advantage (deep domain expertise = 10, none = 2) | |
If you score below 30/50, think hard about whether this is the right idea.
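The manual rubric above is just addition, so it can be sketched as a small script. This is a hedged sketch, not IdeaRunway's actual pipeline: the signal names mirror the table, and the 30/50 cutoff comes from the text; everything else is illustrative.

```python
# Sketch of the manual scoring rubric: five signals, each scored 1-10,
# summed against the 30/50 threshold from the article.
SIGNALS = [
    "problem_frequency",    # daily = 10, monthly = 3
    "willingness_to_pay",   # enterprise = 10, consumer = 3
    "market_size",          # large = 10, niche = 4
    "competition_level",    # blue ocean = 10, saturated = 2
    "unfair_advantage",     # deep domain expertise = 10, none = 2
]

def validation_score(scores: dict) -> tuple:
    """Sum the five 1-10 signal scores and apply the 30/50 cutoff."""
    missing = [s for s in SIGNALS if s not in scores]
    if missing:
        raise ValueError(f"missing signals: {missing}")
    total = sum(scores[s] for s in SIGNALS)
    verdict = "worth building" if total >= 30 else "rethink the idea"
    return total, verdict

# Example: daily enterprise problem, niche market, crowded space, some expertise
total, verdict = validation_score({
    "problem_frequency": 10,
    "willingness_to_pay": 8,
    "market_size": 4,
    "competition_level": 3,
    "unfair_advantage": 6,
})
print(total, verdict)  # 31 worth building
```

The point of writing it down, even this crudely, is that the weights and cutoff are fixed before you score your idea, so you can't nudge the numbers afterward.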
This is the fastest signal. Before writing a line of code, test whether people will actually do something:
Options (choose one):
The goal isn't to build a perfect prototype — it's to answer "will anyone give me money for this?" before you spend months finding out.
Validation is only useful if you define what "validated" means before you run the test. Otherwise, you'll rationalize weak results.
Good success metrics:
Set your threshold before you start. If you hit it, build. If you don't, iterate the problem statement or move on.
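"Decide the bar before the test" can be made concrete with a pre-registered check. A minimal sketch, assuming a landing-page pre-order test; the metric name and 5% threshold are illustrative, not from the article:

```python
# Sketch of pre-registering a success metric before a smoke test.
# frozen=True makes the threshold immutable once declared, mirroring
# the rule that the bar is set before results come in.
from dataclasses import dataclass

@dataclass(frozen=True)
class SuccessMetric:
    name: str
    threshold: float  # decided BEFORE the test runs

    def verdict(self, observed: float) -> str:
        if observed >= self.threshold:
            return "build"
        return "iterate the problem statement or move on"

# Illustrative bar: at least 5% of landing-page visitors pre-order
metric = SuccessMetric(name="preorder_conversion", threshold=0.05)
print(metric.verdict(0.08))  # build
print(metric.verdict(0.01))  # iterate the problem statement or move on
```

Freezing the dataclass is the design choice that matters here: once the test is running, the only thing left to do is compare the observed number against the bar you already committed to.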
Our AI scoring pipeline runs every idea through:
When you browse ideas on IdeaRunway, you're looking at ideas that have already passed steps 1–5. The free tier shows you the problem, market, and verdict. Unlock full access to see the complete validation breakdown, competitor analysis, and ready-to-use MVP prompt.
If your tests don't hit your success metric:
Often, a failed validation points to a refinement opportunity, not a dead end. The problem is real but the customer segment, use case, or pricing model needs adjustment.
Browse validated SaaS ideas → — every idea on IdeaRunway has passed our AI validation pipeline before you see it.