Three in four new products fail within their first year on shelves. Not because companies skip product validation. Because they validate the wrong thing, at the wrong stage, with the wrong method.
The typical product validation process looks like this: extensive focus groups during idea generation, one concept test before development begins, a home-use test before launch. Each gate costs $50,000 to $500,000. By the time you have enough validation data to commit, you have spent millions without actually confirming market demand at scale.
The economics have changed.
- Mars Wrigley validated national demand for a new product concept on TikTok Shop for $5,000 to $10,000 before retail rollout.
- Unilever simulates thousands of formulations digitally for $5,000 to $10,000 each instead of $200,000 in physical lab testing.
- SharkNinja tests near-final products in 750 real homes and generates 200+ product changes before any manufacturing commitment.
These are not outliers. They reflect a systematic rethinking of when to validate, what to validate, and which product validation technique to use. This article gives you the framework to apply the same logic to your own product development process.
Why the traditional product validation process fails
The old model was built around one constraint: validation was expensive. Each test cost $50,000 to $500,000, so teams rationed it - one large study before each gate, and nothing in between. Every stage-gate decision rested on a single data point because a second one was unaffordable.

That constraint no longer exists. And the following shifts broke the old model (Exhibit 1).

Exhibit 1: The three shifts breaking the old model
The companies capturing market share are not eliminating their stage-gate processes. They are committing to validation at every stage. Traditional economics forced rationing - one expensive test before each gate. Cheaper validation enables testing earlier, more often, and with real users rather than controlled samples.
The result: 5 priorities instead of 5,000 signals. 95% of early ideas are killed at $10,000, not $1,000,000. Three validated builds instead of fifteen unvalidated bets.
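The arithmetic behind that shift is straightforward. Here is a minimal sketch using the article's $10,000 and $1,000,000 figures; the 100-idea funnel size is an illustrative assumption:

```python
# Illustrative funnel math: kill weak ideas cheaply at prioritization
# instead of expensively in the build stage. The per-idea costs are the
# article's figures; the 100-idea funnel size is an assumption.

IDEAS = 100
KILL_RATE = 0.95             # share of early ideas that should die
CHEAP_TEST = 10_000          # cost to kill an idea at prioritization ($)
EXPENSIVE_BUILD = 1_000_000  # cost to discover the same failure in build ($)

killed = int(IDEAS * KILL_RATE)

# Old model: weak ideas survive until an expensive build exposes them.
old_cost = killed * EXPENSIVE_BUILD

# New model: the same ideas die in a cheap validation test.
new_cost = killed * CHEAP_TEST

print(f"Killing {killed} ideas in build:          ${old_cost:,}")
print(f"Killing {killed} ideas at prioritization: ${new_cost:,}")
print(f"Savings: ${old_cost - new_cost:,}")
```

Killing the same 95 ideas costs $950,000 instead of $95,000,000 - which is why continuous cheap validation changes portfolio behavior, not just budgets.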
The product validation framework: match method to uncertainty
Most product development teams pick product validation techniques based on what they have always done, or what the budget allows. This is the wrong starting point.
The right question is: what is the single core assumption that could sink this initiative?
Every product concept has one killer uncertainty.
- Is there a genuine market need for this?
- Will users actually use it the way we expect?
- Which variant will win?
- Does this category have proven demand?
Name that assumption first. Then select the product validation method that de-risks it fastest and at the lowest cost.
The three-step framework below applies to any product development team working on physical products, digital tools, or services.
Step #1: Idea generation and opportunity classification
Before any product validation process begins, classify what type of opportunity you are pursuing. The type determines what kind of uncertainty dominates - and which validation method will actually reduce it.
Convergence opportunities
They combine existing behaviors in a new way. The killer uncertainty is whether the combination creates genuine customer value. Consumers cannot articulate demand for things they have not imagined.
This means user interviews and customer conversations during idea generation will not surface the insight.
An example: SharkNinja found this when observing 100 consumers vacuum their homes. Eight people grabbed scissors and cut hair off the brush roll. When asked if they wanted any changes, all eight said no. That behavioral gap - between what users do and what they say in customer feedback - became the self-cleaning brush roll.
Cyclicality opportunities
They refresh existing products with new formats for a target market. The killer uncertainty is which variant wins. Demand exists. The question is format and messaging.
An example: Mars Wrigley's Skittles Pop'd took a legacy candy into freeze-dried format and launched two variants simultaneously to let the market decide.
Acceleration opportunities
They optimize performance within known parameters. Unilever improving detergent formulations. The killer uncertainty is technical: which of thousands of formulations performs best?
This is where AI simulation produces more valuable insights per dollar than any other product validation technique.
Divergence opportunities
They target niche user segments with identity-driven products. The killer uncertainty is whether the niche is large enough and loyal enough to sustain a business.
Algorithmic platforms reveal this through actual purchase behavior from early adopters, not stated interest from focus groups.
Redirection opportunities
They reframe existing products for new use cases. The killer uncertainty is adoption: will current non-users switch?
Market validation - watching startups prove this first - eliminates the discovery risk entirely before you invest in product development.
Reduction opportunities
They solve friction hiding in existing usage patterns. P&G's Dawn EZ-Squeeze came from observing consumers flip bottles and bang them on counters. The killer uncertainty is invisible: potential users do not report normalized workarounds. Only behavioral observation surfaces what user interviews miss.
Classify your initial product idea into one of these six types before selecting any product validation technique. This is not a perfect system. It is a forcing function that makes you name what you actually do not know yet - and that knowledge determines everything that follows in your development process.
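The forcing function can be made literal with a simple lookup. The sketch below restates the six types and their killer uncertainties from this section; the method pairings follow the article where stated, and the function name is mine:

```python
# Opportunity types, killer uncertainties, and the validation method this
# article pairs with each. A lookup like this is a forcing function: you
# cannot select a method without first naming the uncertainty.

OPPORTUNITY_MAP = {
    "convergence":  ("Does the combination create genuine customer value?", "behavioral"),
    "cyclicality":  ("Which variant wins?",                                 "algorithmic"),
    "acceleration": ("Which of thousands of formulations performs best?",   "ai_simulation"),
    "divergence":   ("Is the niche large and loyal enough?",                "algorithmic"),
    "redirection":  ("Will current non-users switch?",                      "market"),
    "reduction":    ("What workarounds have users normalized?",             "behavioral"),
}

def recommend(opportunity_type: str) -> str:
    """Return the killer uncertainty and paired validation method."""
    uncertainty, method = OPPORTUNITY_MAP[opportunity_type]
    return f"Killer uncertainty: {uncertainty} -> validate via {method}"

print(recommend("reduction"))
```

The point is not the code. It is that a team cannot call `recommend` without first classifying the opportunity - the same discipline the framework demands.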
Step #2: Product validation testing - four methods, one decision rule
Once you have named your killer uncertainty, select the product validation method that addresses it most directly. There are four. Each operates at a different cost, timeline, and fidelity level.
Method #1: Behavioral validation - observe what real users actually do
What it tests
Usage patterns, the say-do gap, ergonomic friction, unconscious workarounds in your target market.
How it works
Send teams into real homes to observe actual product use. Mine customer feedback from reviews, social media, and service transcripts. Run in-home user testing with near-final products before manufacturing commitment. Deploy behavioral sensors and usage data collection across product lines to catch user pain points before the final product is locked.
SharkNinja validates with 275,000+ consumer interactions during discovery. Teams enter 100+ homes to observe what people do versus what they say in customer conversations. Near-final products go into 750 homes for four weeks.
That process generates 200+ iterations per product before launch - and it generated the self-cleaning brush roll. Eight consumers cut hair off the brush roll with scissors, said the product worked great, and revealed an unspoken user pain point that became a category-defining feature.
The say-do gap is where behavioral product validation testing earns its cost.
Consumers stop noticing what is broken. They stop reporting normalized friction in customer feedback. User interviews cannot surface what users have stopped thinking about. The only way to identify these pain points is to watch real users interact with the product in actual use contexts.
Cost and timing
$200,000 to $500,000 per concept. 8 to 12 weeks. Deploy for launches where a failed product costs $5,000,000 or more.
Do not use
When no existing usage behaviors exist to observe, when the budget is under $200,000 per concept, or when the product is digital and can iterate quickly post-launch based on real usage data.
Method #2: AI validation - simulate thousands of variants before you build one
What it tests
Formulation performance, component combinations, consumer response across target audience demographic segments. Best for acceleration opportunities requiring high iteration volume.
How it works
Build a simulation platform trained on 3 to 5 years of category performance data. Run thousands of digital variants at $5,000 to $10,000 each. Identify top performers.
Then, validate those with physical prototypes. The validation process shifts from testing everything physically to testing only the most promising candidates.
Unilever's Azure Quantum Elements simulates molecular interactions to identify which formulation spaces show promise. Instead of testing 100 physical compounds to find 5 viable candidates, simulate 10,000 digitally and test only the 10 most promising physically. Validation cost reduction reaches 95%.
Cost and timing
$5,000 to $10,000 per test after platform setup of $100,000 to $300,000. 4 to 6 weeks after deployment. Testing 50 concepts costs $250,000 to $500,000 digitally versus $2,500,000 to $10,000,000 through traditional validation testing.
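The cost comparison works out roughly as claimed. A sketch using this section's per-test figures; the midpoints of the quoted ranges are my assumption:

```python
# Check the digital-vs-physical cost comparison using this section's
# per-concept figures. Midpoints of the quoted ranges are assumptions,
# so the computed reduction lands near (not exactly at) the quoted 95%.

CONCEPTS = 50
DIGITAL_PER_CONCEPT = 7_500     # midpoint of $5,000-$10,000
PHYSICAL_PER_CONCEPT = 125_000  # midpoint of $50,000-$200,000 per concept
PLATFORM_SETUP = 200_000        # midpoint of $100,000-$300,000, one-time

digital_total = CONCEPTS * DIGITAL_PER_CONCEPT + PLATFORM_SETUP
physical_total = CONCEPTS * PHYSICAL_PER_CONCEPT

reduction = 1 - (CONCEPTS * DIGITAL_PER_CONCEPT) / physical_total
print(f"Digital (incl. setup): ${digital_total:,}")
print(f"Physical:              ${physical_total:,}")
print(f"Per-test cost reduction: {reduction:.0%}")
```

At the midpoints, 50 concepts cost $575,000 digitally (including setup) versus $6,250,000 physically - a per-test reduction of about 94%, consistent with the ranges above.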
Do not use
When testing fewer than 20 concepts annually (setup cost not justified), when insufficient historical data exists to train accurate models, or when the product depends on physical usage patterns that cannot be simulated.
Method #3: Algorithmic validation - let real purchases decide between variants
What it tests
Variant preference, messaging resonance, and demand signals from actual target customers in trend-sensitive categories. Best for cyclicality and divergence opportunities.
How it works
Launch product variants on algorithmic platforms with creator networks seeding different hooks simultaneously. The algorithm processes millions of consumer interactions in 2 to 4 weeks, selecting winners through actual purchase behavior rather than stated preference in focus groups.
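Conceptually, these platforms behave like a multi-armed bandit: traffic flows toward the variant that converts. The sketch below is a generic Thompson-sampling illustration of that mechanism - it is not any platform's actual algorithm, and the conversion rates and traffic volume are assumptions:

```python
import random

# Generic bandit sketch of variant selection by purchase behavior
# (Thompson sampling with Beta priors). This is NOT any platform's
# actual ranking algorithm; the conversion rates, impression count,
# and variant names are illustrative assumptions.

random.seed(42)

TRUE_CONVERSION = {"Original": 0.030, "Sour": 0.045}  # assumed rates
wins = {v: 1 for v in TRUE_CONVERSION}    # Beta prior: 1 success
losses = {v: 1 for v in TRUE_CONVERSION}  # Beta prior: 1 failure

IMPRESSIONS = 20_000
for _ in range(IMPRESSIONS):
    # Sample a plausible conversion rate per variant; show the best draw.
    shown = max(TRUE_CONVERSION,
                key=lambda v: random.betavariate(wins[v], losses[v]))
    if random.random() < TRUE_CONVERSION[shown]:
        wins[shown] += 1   # simulated purchase
    else:
        losses[shown] += 1

shares = {v: (wins[v] + losses[v] - 2) / IMPRESSIONS for v in TRUE_CONVERSION}
winner = max(shares, key=shares.get)
print(f"Traffic shares: {shares}, winner: {winner}")
```

The mechanism, not the math, is the takeaway: weaker variants automatically receive less traffic as purchase evidence accumulates, which is why a few weeks of real orders can replace months of stated-preference research.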
Mars Wrigley launched Skittles Pop'd exclusively on TikTok Shop in October 2024, testing Original and Sour variants simultaneously.
The algorithm revealed which variant performed better within weeks. Retail rollout followed in early 2025. National demand validation cost $5,000 to $10,000, versus $100,000 to $500,000 and 6 months for a traditional test market.
Cost and timing
$10,000 to $50,000 per concept. 2 to 4 weeks at national scale.
Do not use
When product value cannot be demonstrated in a 10 to 30 second video, when the product requires an extended trial to evaluate (skincare results, durability), or when you are testing whether demand exists rather than which variant wins.
Method #4: Market validation - let startups prove market demand first
What it tests
Category-level demand, product-market fit at scale, distribution economics. Best for redirection opportunities where startups have already entered the space.
How it works
Track 100+ emerging brands quarterly. Invest $2,000,000 to $3,000,000 for minority stakes at $20,000,000 revenue with buyout rights. Observe for 3 to 4 years while the market validates distribution, unit economics, and repeat purchase rates from real customers. Acquire at $300,000,000 to $1,200,000,000 once market validation is complete.
PepsiCo paid $1,200,000,000 for Siete Foods in January 2025 - a grain-free tortilla brand already doing $200,000,000 in revenue. Mondelēz bought Clif Bar for $2,900,000,000 in 2022.
Neither company spent a dollar discovering the grain-free or energy bar opportunities. Startups completed the discovery and prioritization validation stages. Market demand was proven before the acquisition.
Cost and timing
$2,000,000 to $3,000,000 minority stake. 3 to 4 year observation period. Run a portfolio of 5 to 8 bets expecting 1 to 2 acquisitions. Total portfolio exposure: $15,000,000 to $25,000,000.
Do not use
When you need validation results in under 2 years, when internal build costs are under $5,000,000 (acquisition premium not justified), or when your category requires proprietary manufacturing that startups cannot replicate.
Step #3: Integrating the product validation process into every development stage
The framework above gives you the right method. This step shows your product development team where to deploy it across the full development process.
The old product development process ran one expensive test between stages. Cheap validation changes the economics. You can now validate continuously throughout each stage - multiple times - for a fraction of the traditional cost.
In the discovery stage: validate the problem before the product concept
Use Google Trends and social listening to identify emerging market trends before writing a single product brief. Run customer interviews and customer conversations to understand context, motivations, and user pain points before committing to any direction. Conduct user research to map how potential users currently solve the problem. Gather feedback from early adopters in your target market before defining the product concept. Your goal here is not a validated product. It is a validated problem worth solving.
Avoid relying exclusively on customer feedback surveys during this stage. Surveys generate biased feedback because respondents answer hypothetically, not based on actual behavior. Customer conversations and user interviews produce more accurate insights when focused on behavior - what users currently do - rather than preferences - what users say they would want.
In the prioritization stage: kill weak ideas cheaply
Run fake door testing using a landing page with a call to action that measures actual click-through from your target audience rather than stated interest. Track user interest from potential customers before building anything. Run usability tests on mockups. Conduct beta tests with early adopters who match your target users. Use AI simulation to test 10 to 50 variants simultaneously.
Kill the bottom 95% of early ideas here at $10,000 each. Not at $1,000,000 each in the build stage. Use Google Forms or structured survey tools to collect feedback systematically from potential users - but always pair stated feedback with behavioral signals. What early adopters click, purchase, and use repeatedly matters more than what they say they prefer.
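A fake-door kill rule needs one guard: small samples are noisy, so an idea should be killed on evidence, not on luck. A sketch using a Wilson lower confidence bound; the 2% threshold and the idea data are illustrative assumptions:

```python
import math

# Fake-door kill rule sketch: compare each idea's click-through rate
# against a kill threshold using the 95% Wilson score lower bound, so
# ideas with little traffic are not killed on noise alone. The 2%
# threshold and the click/impression data are illustrative assumptions.

KILL_BELOW = 0.02  # minimum plausible CTR to survive (assumption)

def wilson_lower(clicks: int, impressions: int, z: float = 1.96) -> float:
    """95% Wilson score lower bound on a binomial proportion."""
    if impressions == 0:
        return 0.0
    p = clicks / impressions
    denom = 1 + z**2 / impressions
    centre = p + z**2 / (2 * impressions)
    margin = z * math.sqrt(p * (1 - p) / impressions
                           + z**2 / (4 * impressions**2))
    return (centre - margin) / denom

ideas = {"idea_a": (4, 2000), "idea_b": (90, 2000), "idea_c": (1, 50)}

for name, (clicks, impressions) in ideas.items():
    lb = wilson_lower(clicks, impressions)
    verdict = ("keep" if lb >= KILL_BELOW
               else "kill" if clicks / impressions < KILL_BELOW
               else "needs more traffic")
    print(f"{name}: CTR={clicks / impressions:.1%}, "
          f"lower bound={lb:.1%} -> {verdict}")
```

The three verdicts matter: a clear loser dies cheaply, a clear winner advances, and an ambiguous idea earns more traffic before any build money is spent.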
In the build stage: validate execution, not just demand
Deploy behavioral sensors and usage data collection from early versions of the actual product. Run user testing with real users on minimum viable product releases. Track how users interact with early iterations. Use customer feedback loops to identify usability issues before the final product is locked. Generate multiple iterations based on real usage data before committing to manufacturing at scale.
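The feedback loop described above reduces, at its core, to aggregating raw interaction events into per-feature adoption so weak features surface early. A minimal sketch; the event data, cohort size, and feature names are invented for illustration:

```python
from collections import Counter, defaultdict

# Usage-data feedback loop sketch for early builds: aggregate raw
# interaction events into per-feature adoption so low-engagement
# features surface before the final product is locked. All event data,
# the cohort size, and the 50% flag threshold are assumptions; a real
# pipeline would read from product telemetry.

events = [  # (user_id, feature) interaction events
    ("u1", "self_clean"), ("u1", "self_clean"), ("u2", "self_clean"),
    ("u3", "self_clean"), ("u1", "turbo_mode"), ("u2", "schedule"),
]
TOTAL_USERS = 4  # users in the beta cohort (assumption)

users_per_feature = defaultdict(set)
uses = Counter()
for user, feature in events:
    users_per_feature[feature].add(user)
    uses[feature] += 1

for feature in uses:
    adoption = len(users_per_feature[feature]) / TOTAL_USERS
    flag = "  <- low adoption, investigate" if adoption < 0.5 else ""
    print(f"{feature}: {uses[feature]} uses, {adoption:.0%} adoption{flag}")
```

Counting distinct users separately from raw uses is the key design choice: one enthusiastic tester hammering a feature should not mask the fact that nobody else touches it.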
P&G's AI Factory platform now operates across 80% of its global business, running product validation testing across thousands of formulations while connected sensors capture actual usage data from real users in real homes. The product development team arrives at launch with validation data from millions of actual interactions, not projections from controlled panels.
The key shift: Product validation becomes an ongoing process across the full development phase, not a gate event between stages. Product managers make resource decisions based on actual validation data from real users rather than single expensive research studies conducted before each gate.
Product validation techniques: what to do when your ideal method is not feasible
If the ideal product validation technique for your opportunity type is not within budget or timeline, adapt. Do not abandon validation.
Time-constrained
Algorithmic product validation testing at $10,000 to $50,000 provides demand signals in 2 to 4 weeks when behavioral validation would be ideal but would take 12 weeks.
Less perfect validation data beats no validation data. Build market validation capabilities - a corporate venture arm and acquisition playbooks - for future development cycles.
Risk-constrained
AI simulation reduces validation cost by 95% while maintaining analytical rigor. Build simulation platform infrastructure as a strategic capability if your product development team tests 20+ concepts annually.
Budget-constrained
Creator networks and platform relationships enable algorithmic validation at the lowest cost of any method. A $10,000 TikTok Shop test provides more purchase behavior data from real users than a $500,000 traditional test market.
Insight-constrained
Behavioral validation reveals user pain points that consumers cannot articulate. If your product development team consistently discovers usability issues post-launch, build an in-home testing infrastructure as a core capability rather than a project-by-project expense.
The product validation process should not start with what you can afford. It should start with what you need to know - and the cheapest method that generates valid, behavioral data from your actual target users.
How ITONICS supports the product validation process
Running continuous product validation across the full development process requires infrastructure that connects signals, methods, validation data, and decisions in one environment. Most product development teams manage this across spreadsheets, disconnected existing tools, and manual reporting - creating gaps between validation efforts and the portfolio decisions they should inform.
ITONICS provides the operating system for structured innovation and product validation.
Exhibit 2: Establish fast decision processes to approve feature scope, content releases, and roadmap changes
Product development teams use ITONICS to
- monitor consumer signals and market trends before idea generation begins,
- run evaluation pipelines that track validation data from early ideas through final product decisions,
- connect customer feedback and user research findings directly to portfolio prioritization, and
- give product managers and stakeholders real-time visibility into validation evidence at every stage-gate review.
The result: product development teams arrive at gate reviews with validation data from real users rather than opinions. Product managers make resource decisions based on actual market demand signals. Validation becomes a shared, structured ongoing process across the full development phase - not something that happens once before a launch.
FAQs on product validation
What is the most important step in any product validation process?
Naming the killer uncertainty before selecting a product validation technique. Every product concept has one core assumption that, if wrong, sinks the initiative.
Is there a genuine market need? Will real users actually use it as expected? Which variant wins with your target audience?
Define that assumption first. Then select the cheapest product validation method that tests it directly with actual behavior from real users rather than stated preferences from surveys.
When should you use user interviews versus behavioral observations?
Use user interviews and customer conversations to understand context, motivations, and user needs during early idea generation.
Use behavioral observation when you suspect a gap between what potential users say and what they do - particularly for daily-use physical products where users have normalized workarounds they no longer report in customer feedback.
If users would need to reconstruct behavior from memory to answer your question, observe actual behavior instead of asking.
What is fake door testing, and when does it work?
Fake door testing presents a product concept - typically via a landing page, email campaign, or in-app prompt - and measures actual user behavior such as clicks and sign-up rates rather than stated user interest.
It works when you need to validate genuine interest from potential customers before building anything. It does not work for products requiring physical experience to evaluate.
It is most useful in the prioritization stage to kill weak product ideas before development costs accumulate. Always pair fake door results with follow-up customer interviews to understand why users clicked or did not click.
How do you run a beta test effectively?
Define what specific user behavior or usage data would confirm or deny your core assumptions before recruiting any beta testers. Recruit from your actual target users - not colleagues, internal teams, or the most enthusiastic early adopters, who generate biased feedback.
Set a fixed time window of 2 to 4 weeks. Collect feedback systematically through structured user interviews, usage data tracking, and direct observation rather than open-ended surveys alone. Use Google Forms or similar tools to standardize feedback collection. Act on what users do with the product, not only what they say about it.
How do you avoid biased feedback in product validation testing?
Remove social pressure from every feedback touchpoint. Users in focus groups or user interviews with company representatives consistently soften negative feedback.
Run unmoderated usability tests where users interact with the product alone. Weight behavioral signals - purchase rates, session length, feature usage frequency - over satisfaction scores.
When collecting customer feedback directly, ask about actual behavior rather than opinions: "Walk me through the last time you used this" produces more accurate validation data than "What do you think of this feature?"
When should you define a minimum viable product?
After user research or behavioral observation has confirmed a specific user pain point worth solving, but before significant development investment.
The minimum viable product should be the smallest version that tests your core assumption about value delivery - not the smallest version you can build quickly.
Define the value proposition first. Then define the minimum feature set required to test whether that value proposition delivers a positive user experience for real users in real contexts during actual use.
How do you validate a pricing model before launch?
Run willingness-to-pay tests using real purchase behavior rather than stated preferences. Pricing surveys consistently overestimate price tolerance from potential customers. Fake door testing with different price points on a landing page reveals actual purchase intent at different price levels.
An algorithmic platform launches with different pricing tiers, generating real purchase data from your target audience within weeks. For subscription products, track conversion rates at different price points during a beta test with early adopters.
For physical products, analyze unit economics from comparable startups during a market validation observation period before committing to your own pricing model.
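The willingness-to-pay readout described above can be reduced to one number per tier: expected revenue per visitor, computed from real purchase behavior. A sketch; the price points and conversion counts are illustrative assumptions:

```python
# Willingness-to-pay readout sketch from a fake-door pricing test:
# expected revenue per visitor at each price tier, computed from actual
# purchase behavior rather than stated preference. The price points and
# purchase/visitor counts below are illustrative assumptions.

tiers = {  # price -> (purchases, visitors) observed on the landing page
    9.99: (120, 2000),
    14.99: (95, 2000),
    19.99: (60, 2000),
}

def revenue_per_visitor(price: float) -> float:
    """Expected revenue per visitor at a given price tier."""
    purchases, visitors = tiers[price]
    return price * purchases / visitors

best = max(tiers, key=revenue_per_visitor)
for price in sorted(tiers):
    print(f"${price}: {revenue_per_visitor(price):.3f} revenue per visitor")
print(f"Revenue-maximizing tier: ${best}")
```

Note what the sketch captures: the highest conversion rate and the highest price both lose here. Revenue per visitor is the metric that balances the two, which is exactly what a pricing survey cannot measure.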