How to define problems worth solving before you build anything, using synthetic research to validate faster than ever
The Single Most Expensive Mistake in Product Development
Most products fail. This is not news. What's genuinely odd is that we've known the primary cause for decades, and yet founders and product managers keep making the same mistake: they build things nobody asked for.
Y Combinator's famous motto, "Build something people want," exists precisely because this is so difficult to do. CB Insights data shows 42% of startups fail because there's no market need for their product. Not because of funding. Not because of competition. Not because of team issues. Because they built something nobody wanted.
The root cause? Poor problem framing. The product manager who asks "What features should we add?" before asking "What problem are we actually solving?" has already set the project up for failure. It's the equivalent of a doctor prescribing medication before diagnosing the illness.
This article is the first in a 12-part series on how to perform world-class product research using Claude Code and Ditto. We start here because problem framing is where everything else begins. Get this wrong, and nothing downstream will save you.
What Is Problem Framing?
Problem framing is the discipline of defining what problem you're solving, for whom, and why it matters now.
It sounds deceptively simple. In practice, it requires resisting the constant gravitational pull toward "solution speak." Stakeholders walk into meetings with features they want. Engineers suggest technical approaches. Executives have visions. Everyone has opinions about what to build. Very few people want to pause and ask: what's the actual problem?
The best problem framers I know operate like investigators. They remain curious when everyone else is certain. When someone says "We need a mobile app," they ask "Why?" When the answer is "Because competitors have one," they ask "What problem does the app solve that our current solution doesn't?" They keep asking until they reach something that sounds like human need rather than technical specification.
The Problem Statement Template
A well-framed problem follows this structure:
[Target user] needs a way to [user need/goal] because [insight about why current solutions fail].
For example:
"Middle managers need a way to track project status across teams because existing tools require manual updates that nobody has time for."
"First-time parents need a way to understand infant sleep patterns because contradictory advice creates anxiety and exhaustion."
"Small business owners need a way to forecast cash flow because spreadsheets can't connect to their banking and invoicing systems."
Notice what's absent from these statements: any mention of specific features, technologies, or solutions. The problem statement constrains the problem, not the solution.
Why Problem Framing Is Harder Than It Sounds
Three forces conspire against good problem framing:
1. Solution Attachment
Founders often start with a solution they're excited about. "We're building an AI-powered dashboard" is a solution. The problem might be "executives can't get timely information about their business" - but if you've already committed to "AI-powered dashboard," you've closed off alternative approaches that might solve the problem better, faster, or cheaper.
2. Proxy Problems
Sometimes the stated problem isn't the real problem. "We need better collaboration tools" might actually mean "We have unclear roles and responsibilities, so people duplicate work and step on each other's toes." Better collaboration tools won't fix an organisational design problem.
3. Assumed Knowledge
Product managers frequently assume they understand the problem because they've experienced something similar themselves. This is particularly dangerous because it feels like empathy but is actually projection. Your experience is not your customer's experience.
How Ditto Transforms Problem Framing
Here's where things get interesting.
Traditional problem framing requires either extensive customer interviews (expensive, slow) or dangerous assumptions (free, fast, wrong). Ditto offers a third path: synthetic research with statistically grounded AI personas that can validate your problem hypothesis in hours rather than weeks.
What Ditto Does
Ditto is a synthetic market research platform with 300,000+ personas built on census data and behavioural research. EY has validated a 95% correlation with traditional research methods. With Ditto, you can:
Create research groups with demographic filters (country, state, age, gender, parental status, education, employment)
Ask open-ended qualitative questions to 6-20 personas
Receive rich, narrative responses (not yes/no)
Get AI-generated insights: segments, divergences, shared mindsets
Share results with stakeholders
Complete studies in minutes, not weeks
Why Synthetic Research Works for Problem Framing
Problem framing requires answers to questions like:
Does this problem actually exist?
How do people describe this problem in their own words?
How severe is the problem (nuisance or crisis)?
How frequently does it occur?
What are people doing about it today?
Synthetic personas can answer all of these. Unlike surveys, they provide qualitative depth. Unlike interviews, they're available immediately. Unlike assumptions, they're grounded in actual behavioural data.
Case Study: CareQuarter's Problem Discovery
CareQuarter was exploring elder care coordination services. The founding hypothesis was straightforward: adult children managing ageing parents are overwhelmed by TIME - too many tasks, not enough hours.
Their first Ditto study recruited 12 synthetic personas: US adults aged 45-65, all managing healthcare for at least one ageing parent. Seven open-ended questions explored healthcare admin burden, moments of highest stress, and current coping mechanisms.
What the research revealed was unexpected.
The pain wasn't primarily about time. The dominant theme that emerged across nearly every persona was:
"I'm responsible without real authority in a system that's chopped into pieces."
This was not the expected finding. The research revealed that the pain is structural: these customers are the de facto care coordinators for a healthcare system that provides no formal role, no legal standing, and no unified record for the person doing the coordination.
Specific pain points that surfaced:
Portal fragmentation: Every provider, pharmacy, lab, and insurer uses a different system. The family caregiver becomes the "human API" connecting them.
Prior authorisation ping-pong: Insurer blames provider, provider blames insurer, pharmacy shrugs. The caregiver is the only one who follows through.
HIPAA purgatory: Legal authorisation signed but not visible in provider systems. Treated as a stranger despite being the primary decision-maker.
Friday 4pm fires: Hospital discharge calls that arrive at the worst possible moment, with nothing arranged - meds changed, home health not scheduled, pharmacy waiting on prior auth.
The 2am worry spiral: Persistent background anxiety about what's being missed, compounded by the knowledge that one error could cascade.
The strategic implication: CareQuarter didn't need to build a task management app or a reminder service. The research pointed toward a fundamentally different product: a human coordinator with real legal standing to act on behalf of the family - the role these customers were already performing, unpaid and unsupported.
That phrase - "responsible without authority" - came from participants, not assumptions. It became the foundation of their entire product positioning and informed every subsequent research phase.
How Claude Code Accelerates the Workflow
Claude Code serves as your research assistant throughout this process. Rather than manually crafting API calls, managing responses, and synthesising findings, Claude Code can:
1. Design Research Studies
Describe your problem hypothesis, and Claude Code will draft appropriate questions, suggest demographic filters, and structure the study for maximum insight.
2. Execute the Research
Claude Code interfaces directly with Ditto's API - creating research groups, running studies, polling for completion, and extracting results.
3. Synthesise Findings
Raw responses become actionable insights. Claude Code identifies patterns, highlights surprising findings, and formats results for stakeholder communication.
4. Iterate Quickly
When first results surface new questions (they always do), Claude Code can spin up follow-up studies immediately.
Step-by-Step Workflow: Validating a Problem Hypothesis
Let's walk through the complete workflow for problem framing using Claude Code and Ditto.
Step 1: Define Your Problem Hypothesis
Before running any research, articulate what you believe to be true. This forces clarity.
Write your hypothesis as a problem statement:
[Target user] needs a way to [user need/goal] because [insight about why current solutions fail].
Be specific about:
Who has this problem (not "everyone" - narrow it)
What they're trying to accomplish
Why current solutions aren't working
Example hypothesis: "Auto service shop owners need a way to source parts quickly because current catalogues have unreliable inventory data, causing lost time and unhappy customers."
Step 2: Design the Research Group
Create a Ditto research group matching your target user:
```json
{
  "name": "Problem Hypothesis Validation - Auto Service Shop Owners",
  "group_size": 10,
  "filters": {
    "country": "USA",
    "age_min": 30,
    "age_max": 55,
    "employment": "self_employed"
  }
}
```
Key principle: Start broad. You're testing whether the problem exists, not who has it most acutely (that comes later in User Segmentation).
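If you are scripting this with Claude Code, the group definition is just a payload to assemble. Here's a minimal sketch in Python; the field names mirror the JSON example above, but they are assumptions about the Ditto API, so check the actual docs before relying on them:

```python
def build_group_payload(name: str, group_size: int, **filters) -> dict:
    """Assemble a Ditto research-group payload.

    Field names mirror the JSON example above; the real Ditto API
    may use different names, so treat this as a sketch.
    """
    # Ditto studies ask questions of 6-20 personas per group.
    if not 6 <= group_size <= 20:
        raise ValueError("group_size should be between 6 and 20")
    return {
        "name": name,
        "group_size": group_size,
        "filters": filters,
    }

payload = build_group_payload(
    "Problem Hypothesis Validation - Auto Service Shop Owners",
    10,
    country="USA",
    age_min=30,
    age_max=55,
    employment="self_employed",
)
```

Keeping the filters as keyword arguments makes it easy to broaden or narrow the group between iterations without editing raw JSON.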
Step 3: Draft Non-Leading Questions
The goal is to discover whether your hypothesis is correct, not to confirm what you already believe. Avoid describing any solution. Let participants tell YOU what they want.
Here's the 7-question framework for problem validation:
| # | Purpose | Question |
|---|---|---|
| 1 | Establish context | "Walk me through a typical day when you need to [do task related to problem area]. What does that process look like?" |
| 2 | Surface problems | "What's the most frustrating part of [task/area]? Tell me about a time when it was particularly painful." |
| 3 | Quantify impact | "How much time per week do you spend dealing with [problem area]? What's the cost of that to you personally or professionally?" |
| 4 | Validate problem exists | "On a scale of 1-10, how big of a problem is [problem] for you? Why that number?" |
| 5 | Explore alternatives | "What do you currently do to deal with [problem]? What works? What doesn't?" |
| 6 | Test importance | "If you could fix ONE thing about [area], what would it be and why?" |
| 7 | Check active seeking | "Have you ever looked for solutions to [problem]? What happened?" |
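If you validate problems regularly, the bracketed placeholders can be filled in programmatically so every study uses the same non-leading wording. A small sketch, where the question text is lifted from the framework and the single `area` substitution is an illustrative simplification:

```python
def validation_questions(area: str) -> list[str]:
    """Instantiate the 7-question problem-validation framework
    for a given problem area (e.g. "parts sourcing")."""
    templates = [
        "Walk me through a typical day when you need to deal with {a}. What does that process look like?",
        "What's the most frustrating part of {a}? Tell me about a time when it was particularly painful.",
        "How much time per week do you spend dealing with {a}? What's the cost of that to you personally or professionally?",
        "On a scale of 1-10, how big of a problem is {a} for you? Why that number?",
        "What do you currently do to deal with {a}? What works? What doesn't?",
        "If you could fix ONE thing about {a}, what would it be and why?",
        "Have you ever looked for solutions to {a}? What happened?",
    ]
    return [t.format(a=area) for t in templates]

questions = validation_questions("parts sourcing")
```

Note what the function deliberately cannot do: accept a solution description. The only variable is the problem area, which keeps the questions honest.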
Step 4: Run the Study
Using Claude Code, execute the study:
```
Create a Ditto study to validate whether auto service shop owners have a
significant problem with parts sourcing. Use the 7-question problem
validation framework. Target 10 US-based self-employed adults aged 30-55.
```
Claude Code will:
Create the research group with appropriate filters
Create the study with your questions
Submit questions to the panel
Poll for completion (typically 5-15 minutes)
Extract and summarise responses
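Under the hood, this is a create-submit-poll loop against the Ditto API. The sketch below assumes hypothetical endpoint paths and response fields (`/groups`, `/studies`, a `status` value of `"complete"`); none of these are documented in this article, so consult the real API reference before using them:

```python
import json
import time
import urllib.request  # stdlib only; a real client might use `requests`

def should_keep_polling(status: str, elapsed_s: float, timeout_s: float = 900) -> bool:
    """Poll while the study is still running and we are under the
    ~15-minute ceiling typical for a Ditto study."""
    return status not in ("complete", "failed") and elapsed_s < timeout_s

def run_study(base_url: str, api_key: str,
              group_payload: dict, questions: list[str]) -> dict:
    """Create a group, create a study, then poll until results are ready.
    Endpoint paths and field names are assumptions, not the documented API."""
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}

    def post(path: str, body: dict) -> dict:
        req = urllib.request.Request(base_url + path,
                                     data=json.dumps(body).encode(),
                                     headers=headers)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def get(path: str) -> dict:
        req = urllib.request.Request(base_url + path, headers=headers)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    group = post("/groups", group_payload)
    study = post("/studies", {"group_id": group["id"], "questions": questions})

    start = time.time()
    status = "running"
    while should_keep_polling(status, time.time() - start):
        time.sleep(30)  # studies typically complete in 5-15 minutes
        status = get(f"/studies/{study['id']}").get("status", "running")
    return get(f"/studies/{study['id']}/results")
```

The value of letting Claude Code drive this loop is that you describe the study in plain English and it handles the plumbing; the code above is only what that plumbing roughly looks like.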
Step 5: Analyse Results
When the study completes, look for:
**Problem Existence.** Do participants actually experience this pain? How do they describe it? What language do they use?
**Problem Severity.** Is this a minor annoyance or a significant issue? Look for emotional intensity, quantified costs, and frequency mentions.
**Current Workarounds.** What are they doing today? Workarounds indicate real need - people don't create hacks for problems they don't care about.
**Active Seeking.** Have they looked for solutions? If yes, you have an engaged market. If no, you may need to question whether the problem is severe enough.
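These four signals can be roughed out mechanically before you read every transcript. A sketch, assuming each response arrives as a dict with a `text` field and an optional 1-10 `severity` rating; both field names and the keyword lists are illustrative, not part of Ditto's output format:

```python
def summarise_signals(responses: list[dict], pain_terms: list[str]) -> dict:
    """First-pass quantitative summary of qualitative responses:
    average severity, plus the fraction of participants mentioning
    the pain, a workaround, or an active search for solutions."""
    n = len(responses)
    severities = [r["severity"] for r in responses if "severity" in r]

    def frac_mentioning(terms: list[str]) -> float:
        hits = sum(1 for r in responses
                   if any(t.lower() in r["text"].lower() for t in terms))
        return hits / n if n else 0.0

    return {
        "avg_severity": sum(severities) / len(severities) if severities else None,
        "pct_pain": frac_mentioning(pain_terms),
        "pct_workaround": frac_mentioning(["workaround", "manually", "by hand"]),
        "pct_seeking": frac_mentioning(["looked for", "searched", "tried a tool"]),
    }

sample = [
    {"text": "Ghost inventory wastes my whole morning. I manually call suppliers.",
     "severity": 9},
    {"text": "I looked for a better catalogue last year.", "severity": 8},
]
signals = summarise_signals(sample, ["ghost inventory"])
```

Treat the numbers as a map of where to read closely, not a substitute for reading: the keyword match is crude, and the insight usually lives in the exact phrasing participants use.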
Step 6: Make a Go/No-Go Decision
Based on results, decide:
Strong signal - Problem exists, is severe, and people are actively seeking solutions. Proceed to Discovery Research (Stage 2).
Weak signal - Problem exists but isn't severe enough to drive action. Refine hypothesis or pivot.
No signal - Problem doesn't exist or isn't a problem. Stop. Find a different problem.
Real-World Example: MotorMinds
MotorMinds, an auto parts sourcing startup, ran exactly this workflow. Their hypothesis: auto service shop owners waste significant time on parts sourcing due to unreliable inventory data.
The study recruited 10 auto service professionals using the 7-question non-leading framework. No mention of MotorMinds' solution. No description of AI or automation. Just questions about how they currently source parts and what frustrates them.
What the Research Revealed:
The problem was real - and worse than expected:
Time sink: 10-20+ hours per week on parts sourcing
Quantified cost: "$400-$800/week in lost profit" (direct quote from a former parts counter worker)
#1 Pain: "Ghost inventory" - parts listed as available that aren't
Severity score: Average 8.3/10 frustration rating
Participant voices told the story:
James Neri, a maintenance technician: "Every day. Almost every job needs a part."
Michael Mclimans, an automotive tech: "If it says 10:30, it rolls in at 10:30. No mid-year misses."
Sonny Carrizales, a former auto parts salesman, quantified the cost precisely: "$400-$800 per week in lost profit" from sourcing delays and errors.
But here's where problem framing gets interesting.
The magic wand question ("If you could fix ONE thing...") revealed something unexpected. The consensus wasn't "faster search" or "better prices." It was:
"Real, guaranteed ETAs with tracking at quote time, VIN-locked."
Founders assumed the problem was speed - finding parts faster. Participants revealed the problem was accuracy - trusting that the part is actually available and will arrive when promised.
Why previous solutions failed:
"Fake inventory" - parts listed as available that weren't there
VIN split errors - wrong part for the vehicle
Clunky UX, apps that crash
No integration with existing shop management systems
This distinction - speed vs. accuracy - shaped the entire product direction. MotorMinds could have built the fastest parts search in the industry and still failed, because speed wasn't the problem. Trust was.
The Pattern: Founders Are Often Wrong About the Problem
These examples aren't anomalies. Across multiple problem framing studies, we've seen a consistent pattern: the problem founders assume is rarely the problem customers describe.
| Startup | Assumed Problem | Actual Problem (from research) |
|---|---|---|
| CareQuarter | Too many tasks, not enough time | "Responsible without authority" - structural, not time-based |
| MotorMinds | Need to find parts faster | Need to trust inventory accuracy |
| PatientCompanion | Patients can't reach staff | Staff arrive blind with no context ("call button is binary") |
| Sidian | Engineers need AI assistants | Engineers need automated takeoffs specifically |
In every case, the research reframed the problem in a way that changed the product direction. This is why problem framing isn't optional - it's the difference between building something people want and building something you assumed they wanted.
PatientCompanion is a particularly instructive example. The startup was building an AI communication system for elderly patients in care facilities. The assumed problem: patients struggle to communicate their needs.
Research with 20 elder care staff revealed a different framing entirely:
"The call button is broken. It's binary - 'help' with no context. Staff arrive blind and often need multiple trips."
The magic wand response was unanimous: "Context before arrival." Staff wanted to know what patients needed BEFORE walking in - "bathroom," "pain," "water," "lonely" - the basics. The product wasn't about helping patients communicate; it was about giving staff information to do their jobs efficiently.
Common Mistakes in Problem Framing
Mistake 1: Framing the Problem as the Absence of a Solution
"We need a mobile app" is not a problem statement. Neither is "We need better analytics" or "We need an AI assistant." These are solutions masquerading as problems.
Fix: Ask "Why?" until you reach human need. "We need a mobile app" → Why? → "To check inventory on the floor" → Why is that hard now? → "The desktop system is in the back office" → Why is that a problem? → "I lose 10 minutes per customer walking back and forth."
The problem is "Checking inventory interrupts customer service and wastes time." The solution might be a mobile app, or it might be moving a monitor to the floor, or it might be something else entirely.
Mistake 2: Framing Too Broadly
"Improve the user experience" is too broad to be useful. So is "help small businesses grow" or "make data more accessible."
Fix: Narrow to specific user, specific context, specific task. "Help first-time Shopify merchants understand which products to promote during their first Black Friday."
Mistake 3: Framing Too Narrowly
"Add a button to export data as CSV" is too narrow. You've jumped to implementation before understanding need.
Fix: Back up. Why do they want CSV export? What do they do with the data? Maybe they need CSV because they're doing analysis in Excel - but maybe a built-in analysis feature would serve them better.
Mistake 4: Assuming You Know the Problem
The most dangerous assumption is that you understand the problem because you've experienced something similar. Your experience as a PM is not the same as your customer's experience as a small business owner.
Fix: Treat every hypothesis as unproven until research confirms it. Even if you're 90% confident, run the validation study. You'll be surprised how often "obvious" problems aren't.
Mistake 5: Skipping Validation Because You "Don't Have Time"
The irony: teams skip problem validation to save time, then spend 6-12 months building something nobody wants. With Ditto and Claude Code, a validation study takes hours, not weeks.
Fix: Build problem validation into your standard workflow. It's not optional.
What Good Output Looks Like
At the end of problem framing, you should have:
✅ **Validated problem statement in customer language.** Not what you assumed - what participants actually said. Use their exact words.
✅ **Evidence of problem severity.** Time cost, money cost, emotional intensity, frequency. Quantified wherever possible.
✅ **Initial understanding of current workarounds.** What people do today (and why it's not good enough).
✅ **Go/no-go signal.** Clear decision about whether to proceed.
✅ **Unexpected insight.** What did you learn that you didn't expect? This often becomes your differentiation.
Example 1: MotorMinds Output
**Problem Statement (Validated):** Auto service shop owners need reliable, real-time parts availability data with guaranteed ETAs because current catalogue systems show "ghost inventory" that wastes 10-20 hours per week and costs $400-800 in lost profit.

**Evidence:** 8/10 participants rated parts sourcing as 8+ out of 10 in terms of frustration. "Ghost inventory" mentioned by 7/10 participants unprompted. Direct quote: "If it says 10:30, it rolls in at 10:30. No mid-year misses."

**Current Workarounds:** Phone calls to multiple suppliers (time-consuming), maintaining relationships with specific reps who know actual stock (not scalable), ordering from multiple sources "just in case" (expensive).

**Unexpected Insight:** The problem isn't speed - it's trust. Participants don't want faster search; they want accurate inventory. This reframes the entire value proposition.

**Decision:** PROCEED - Problem is real, severe, and costly. Proceed to Discovery Research to understand the full competitive landscape and specific requirements.
Example 2: CareQuarter Output
**Problem Statement (Validated):** Adult children coordinating elder care need legitimate authority within the healthcare system because they are "responsible without real authority in a system that's chopped into pieces."

**Evidence:** Universal theme across 12 personas. Specific pain points: portal fragmentation, prior auth ping-pong, HIPAA purgatory, Friday 4pm discharge calls.

**Current Workarounds:** Becoming the "human API" between systems, maintaining duplicate records, following up on every provider interaction personally, accepting the role of unpaid case manager.

**Unexpected Insight:** The problem isn't time management - it's structural authority. The product should be a person (coordinator with legal standing), not a platform (task management app).

**Decision:** PROCEED - Problem is real and underserved. Proceed to Phase 2 to understand trust requirements and authority preferences.
How This Feeds Into the Next Stage
Problem framing establishes that a problem worth solving exists. It doesn't tell you:
How deep the problem goes (Discovery Research)
Who has it most acutely (User Segmentation)
What solutions exist today (Competitive Analysis)
What solution would work (Concept Testing)
The next stage, Discovery Research, takes your validated problem and explores it deeply. You'll understand not just that the problem exists, but how people experience it, what triggers it, and what they've tried before.
Quick Reference: Problem Framing Checklist
Before Research:
[ ] Problem hypothesis written as "[User] needs [goal] because [current solutions fail]"
[ ] Target user defined specifically (not "everyone")
[ ] No solutions embedded in the problem statement
Research Design:
[ ] Research group filters match target user
[ ] 7 non-leading questions drafted
[ ] No solution described in questions
After Research:
[ ] Problem existence confirmed/disconfirmed
[ ] Severity quantified (time, money, emotion)
[ ] Language captured (how do they describe it?)
[ ] Current workarounds documented
[ ] Go/no-go decision made
Further Reading
Explore the Research Yourself
All the studies referenced in this article are publicly available. You can read the actual questions asked, see the full persona responses, and explore the AI-generated insights:
CareQuarter Phase 1: Pain Discovery - 12 personas on elder care coordination
MotorMinds: Auto Parts Sourcing - 10 auto service professionals
PatientCompanion: Elder Care Communication - 20 healthcare staff
These studies demonstrate what's possible when problem framing is done rigorously - and how synthetic research can surface insights that change product direction before a single line of code is written.
This is article 1 of 12 in the Product Manager's Research Toolkit series. Next: Discovery Research - understanding the full depth and context of validated problems.
Want to run your own problem validation study? Ditto lets you test hypotheses with synthetic personas in hours, not weeks. Learn more at askditto.io