Stage 2: Discovery Research with FishDog

Using synthetic research to move beyond surface validation into the rich territory where real product insights live

The Difference Between Knowing and Understanding

Problem framing tells you that a problem exists. Discovery research tells you what it actually feels like to have that problem.

This distinction matters more than most teams realise. You can validate that "busy professionals struggle to eat healthy" in fifteen minutes. But that validation doesn't tell you when the struggle peaks, what triggers it, what they've already tried, why those attempts failed, or what would actually make them change their behaviour.

Discovery research is the difference between a doctor who confirms you're sick and one who understands exactly what's wrong.

Most teams skip this stage. They validate a problem exists, get excited, and start building. Six months later, they have a product that technically solves the problem but misses something fundamental about how people actually experience it.

That "something fundamental" is what discovery research finds.

What Discovery Research Actually Means

Discovery is about depth. You're not asking whether the problem is real. You already know it's real. You're asking:

  • How often does this problem occur?

  • What triggers it?

  • How do people currently cope?

  • What have they tried before?

  • Why did those solutions fail?

  • What would a solution need to do?

  • What would prevent them from adopting one?

These questions can't be answered with surveys or rating scales. They require stories. Discovery research is fundamentally qualitative. It invites people to tell you about their experience in their own words, then listens carefully for patterns.

The 7-Question Framework

The non-leading question framework, introduced in Stage 1, becomes the backbone of discovery research. Each question serves a specific purpose:

| # | Purpose | Question |
|---|---------|----------|
| 1 | Establish relevance | "How often do you encounter [problem]? Walk me through a recent example." |
| 2 | Surface pain | "What's the most frustrating part of dealing with [problem]?" |
| 3 | Quantify impact | "How much time or money does this cost you per week? Per month?" |
| 4 | Map current solutions | "What tools or workarounds do you currently use? What works? What doesn't?" |
| 5 | Explore past attempts | "Have you tried other solutions? What happened? Why did you stop?" |
| 6 | Reveal priorities | "If you could wave a magic wand and fix ONE thing about this, what would it be?" |
| 7 | Identify barriers | "What would make you hesitant to try a new solution, even if it promised to help?" |

Notice what this framework does: it moves from broad context (Q1) through emotional reality (Q2-3), current behaviour (Q4-5), ideal state (Q6), and adoption constraints (Q7). By the end, you understand not just that the problem exists, but how it shows up in daily life.

How to Run Discovery Research in FishDog

FishDog transforms discovery research from a weeks-long interview process into something you can complete in an afternoon. Here's the practical workflow for running discovery studies.

Step 1: Define Your Target Persona

Before creating a study, get specific about who experiences this problem. Vague targeting produces vague insights.

Weak targeting: "People who cook at home"

Strong targeting: "Working parents in urban areas, aged 30-45, who want to cook healthy meals but struggle with time constraints"

In FishDog, you'll translate this into demographic filters when creating your research group:

| Filter | Example Value |
|--------|---------------|
| Age range | 30-45 |
| Location | United States, urban areas |
| Occupation keywords | "professional", "manager", "working parent" |
| Income bracket | Middle to upper-middle |

The more precise your filters, the more relevant the responses. Ten highly-targeted personas beat fifty loosely-matched ones.

Step 2: Create Your Research Group

In FishDog, a research group is your panel of synthetic personas. For discovery research, 10-15 personas typically provide enough diversity to surface patterns without overwhelming analysis.

When recruiting your group, consider:

  • Demographic diversity: Include variation in age, location, and context

  • Experience levels: Mix people new to the problem with those who've struggled for years

  • Solution exposure: Include both people who've tried alternatives and those who haven't

The group becomes your "interview panel." Each persona will respond to your questions from their unique perspective, drawing on statistically-grounded characteristics.
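As a sketch, Steps 1 and 2 amount to turning your targeting into a structured group definition before anything is submitted. The helper below and its field names (`persona_count`, `filters`) are illustrative assumptions, not FishDog's actual API:

```python
# Hypothetical research-group payload builder. Field names are assumptions
# for illustration, not FishDog's real schema.

def build_research_group(name: str, size: int, filters: dict) -> dict:
    """Assemble a research-group definition from Step 1's targeting."""
    if not filters:
        # Vague targeting produces vague insights, so require filters.
        raise ValueError("add at least one demographic filter")
    return {"name": name, "persona_count": size, "filters": filters}

group = build_research_group(
    name="Working parents, urban US",
    size=12,
    filters={
        "age_range": [30, 45],
        "location": "United States, urban areas",
        "occupation_keywords": ["professional", "manager", "working parent"],
        "income_bracket": "middle to upper-middle",
    },
)
```

Keeping the filters as explicit structured data also makes it easy to clone a group and vary one dimension at a time when you later split segments.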

Step 3: Write Discovery-Focused Questions

Discovery questions differ from validation questions. You're not testing hypotheses. You're exploring territory.

Validation question: "Would you pay $50/month for a meal planning service?"

Discovery question: "Walk me through what happens when you try to plan healthy meals for your family. What makes it difficult?"

The 7-question framework maps directly to FishDog study questions:

```
Q1: "How often do you deal with [problem]? Describe a recent situation."
Q2: "What's the most frustrating part of this experience?"
Q3: "How much time or money does this cost you weekly?"
Q4: "What do you currently do to handle this? What tools or workarounds?"
Q5: "Have you tried other solutions? What happened with them?"
Q6: "If you could fix ONE thing about this, what would it be?"
Q7: "What would make you hesitant to try something new, even if it promised to help?"
```

Each question should invite narrative responses. Avoid yes/no formats. You want stories, not data points.
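Because only the problem domain changes between studies, the framework is easy to template. A minimal sketch (the helper function is hypothetical, but the wording mirrors the seven questions above):

```python
# Instantiate the 7-question discovery framework for a problem domain.
# The helper is illustrative, not part of any FishDog SDK.

DISCOVERY_TEMPLATE = [
    "How often do you deal with {p}? Describe a recent situation.",
    "What's the most frustrating part of this experience?",
    "How much time or money does this cost you weekly?",
    "What do you currently do to handle this? What tools or workarounds?",
    "Have you tried other solutions? What happened with them?",
    "If you could fix ONE thing about this, what would it be?",
    "What would make you hesitant to try something new, even if it promised to help?",
]

def discovery_questions(problem: str) -> list:
    """Fill the problem domain into each question that references it."""
    return [q.format(p=problem) for q in DISCOVERY_TEMPLATE]

questions = discovery_questions("planning healthy meals for your family")
```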

Step 4: Run the Study and Collect Responses

Once your questions are set, FishDog's synthetic personas respond asynchronously. Each persona answers from their established perspective, providing the kind of detailed, personal responses you'd get from actual interviews.

As responses come in, look for:

  • Repeated phrases: When multiple personas use similar language, you've found something real

  • Emotional intensity: Strong reactions ("I HATE when..." or "This drives me crazy...") signal important pain points

  • Specific examples: Concrete stories reveal actionable details

  • Contradictions: When personas disagree, you may have discovered a segment boundary
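The first two checks can be roughed out in a few lines while responses are still arriving. A stdlib-only sketch; the phrase counting and emotion markers are illustrative heuristics, not FishDog's analysis pipeline:

```python
# Rough heuristics for spotting repeated phrases and emotional intensity
# in raw responses. Illustrative only; a real pass would use FishDog's
# completion analysis.
from collections import Counter

EMOTION_MARKERS = ("hate", "nightmare", "drives me crazy", "can't stand")

def repeated_phrases(responses, n=2, min_count=2):
    """Count word n-grams that appear in at least `min_count` responses."""
    counts = Counter()
    for text in responses:
        words = text.lower().split()
        # A set per response, so one long rant can't inflate a phrase's count.
        grams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        counts.update(grams)
    return {" ".join(g): c for g, c in counts.items() if c >= min_count}

def emotionally_charged(responses):
    """Return responses containing strong emotional markers."""
    return [r for r in responses if any(m in r.lower() for m in EMOTION_MARKERS)]

responses = [
    "I hate when a part shows available but isn't. Ghost inventory again.",
    "Ghost inventory drives me crazy. I quote the customer, then I wait.",
    "Mostly it works, just slow sometimes.",
]
# repeated_phrases(responses) -> {'ghost inventory': 2}
# len(emotionally_charged(responses)) -> 2
```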

Step 5: Use FishDog's Completion Analysis

After all responses are collected, FishDog's AI analyses the full dataset and produces structured insights. This is where patterns become visible.

The completion analysis surfaces:

  • Common themes: Pain points that appear across multiple personas

  • Quantified impacts: Aggregated time/money costs from Q3 responses

  • Priority consensus: What the "magic wand" answers reveal about core needs

  • Adoption barriers: Consolidated concerns from Q7

This analysis does the initial synthesis work, identifying patterns that would take hours to extract manually from interview transcripts.

Step 6: Build Your Pain Taxonomy

The goal of discovery research is a structured understanding of the problem space. Take FishDog's analysis and organise it into a pain taxonomy:

```
PAIN TAXONOMY: [Problem Domain]

1. [Pain Category Name]
   - Description: [What this pain involves]
   - Frequency: [How often it occurs]
   - Quote: "[Representative persona quote]"

2. [Pain Category Name]
   - Description: [What this pain involves]
   - Frequency: [How often it occurs]
   - Quote: "[Representative persona quote]"

[Continue for each distinct pain category...]

QUANTIFIED IMPACT:
- Time cost: [X hours/week]
- Money cost: [$X/week or month]
- Emotional toll: [Frustration rating or description]

MAGIC WAND PRIORITY: "[The single thing most personas would fix]"

KEY INSIGHT: [The non-obvious understanding that emerged]
```

This taxonomy becomes your product design foundation. Every feature should address one of these named pain points.
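One way to make that rule enforceable is to hold the taxonomy as a small data structure and check each feature proposal against it. The classes below are an illustrative sketch, not part of any FishDog SDK:

```python
# A pain taxonomy as a data structure, so feature proposals can be
# checked against named pain points. Illustrative sketch only.
from dataclasses import dataclass, field

@dataclass
class PainCategory:
    name: str
    description: str
    frequency: str
    quote: str

@dataclass
class PainTaxonomy:
    domain: str
    categories: list = field(default_factory=list)
    magic_wand: str = ""
    key_insight: str = ""

    def addresses(self, pain_name: str) -> bool:
        """Check whether a proposed feature maps to a named pain category."""
        return any(c.name.lower() == pain_name.lower() for c in self.categories)

taxonomy = PainTaxonomy(domain="Auto parts sourcing",
                        magic_wand="Real, guaranteed ETAs at quote time")
taxonomy.categories.append(PainCategory(
    name="Ghost inventory",
    description="Parts listed as available that aren't in stock",
    frequency="Several times per week",
    quote="I place the order, then get a call saying they don't have it.",
))
```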

Using Claude Code to Orchestrate Discovery Research

Claude Code can manage the entire discovery research workflow, from study design through analysis. Here's how the integration works.

Automated Study Creation

Describe your target persona and problem domain, and Claude Code will:

  1. Design appropriate demographic filters for the research group

  2. Craft the 7-question discovery framework tailored to your domain

  3. Create the study in FishDog via API

  4. Monitor for response completion
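The four steps reduce to a create-then-poll loop. In this sketch the client methods (`create_group`, `create_study`, `get_study`) are assumed names standing in for the real FishDog API; a stub client is included so the control flow can be exercised without a live endpoint:

```python
# Create-then-poll orchestration loop. Method names on `client` are
# assumptions for illustration, not FishDog's documented API.
import time

def run_discovery_study(client, group_filters, questions, poll_seconds=30):
    """Create a research group and study, then poll until responses arrive."""
    group = client.create_group(filters=group_filters)    # design filters
    study = client.create_study(group_id=group["id"],     # craft + create study
                                questions=questions)
    while True:                                           # monitor completion
        status = client.get_study(study["id"])
        if status["state"] == "complete":
            return status["responses"]
        time.sleep(poll_seconds)

# A stub client so the loop can run without a live API.
class StubClient:
    def __init__(self):
        self.polls = 0
    def create_group(self, filters):
        return {"id": "group-1"}
    def create_study(self, group_id, questions):
        return {"id": "study-1"}
    def get_study(self, study_id):
        self.polls += 1
        done = self.polls >= 2
        return {"state": "complete" if done else "running",
                "responses": ["persona responses..."] if done else []}

responses = run_discovery_study(StubClient(), {"age_range": [30, 45]},
                                ["Q1: ..."], poll_seconds=0)
```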

Real-Time Analysis

As personas respond, Claude Code can:

  • Extract emerging themes before the study completes

  • Flag particularly insightful quotes for later use

  • Identify contradictions that suggest segment boundaries

  • Track quantified impacts (time, money, frequency)

Structured Output Generation

Once the study completes, Claude Code synthesises everything into:

  • A formatted pain taxonomy with supporting quotes

  • Quantified impact summaries

  • Competitive landscape based on Q4-Q5 responses

  • Adoption barrier checklist from Q7

  • Recommended next steps for product development

Example Prompt for Claude Code

```
Run discovery research on: "Small business owners struggling to manage
customer relationships without enterprise CRM software"

Create a 12-persona research group targeting:
- Small business owners (1-50 employees)
- B2B service businesses
- Currently using spreadsheets or basic tools
- US-based

Use the 7-question discovery framework. Generate a pain taxonomy when complete.
```

Claude Code handles the API calls, monitors progress, and delivers structured insights ready for product decisions.

Case Study: CareQuarter's Discovery Journey

CareQuarter had already validated their problem in Phase 1: adult children managing care for ageing parents experienced real pain. The hypothesis was confirmed.

But what did that pain actually look like? The discovery questions in the same Phase 1 study went deeper.

The Pain Taxonomy That Emerged

When 12 synthetic personas described their experience as family caregivers, five distinct pain categories emerged:

1. Portal Fragmentation

Every provider uses a different system. The pharmacy has one portal. The primary care doctor has another. The specialist has a third. The insurance company has a fourth. There's no integration. The family caregiver becomes what one participant called "the human API."

"I'm the one copying information between systems. I'm the one who remembers what the cardiologist said and tells the primary care doctor."

2. Prior Authorisation Ping-Pong

Insurance requires prior authorisation. The provider submits it. Insurance says they never received it. Provider says they sent it. Meanwhile, the treatment is delayed, and the caregiver is the only one following up.

"I've spent entire afternoons on hold, being transferred between the insurance company and the doctor's office, each blaming the other."

3. HIPAA Purgatory

The legal authorisation exists. It was signed, notarised, and submitted. But it's not visible in the provider's system. The caregiver shows up to an appointment with their parent and is treated as a stranger.

"I have power of attorney. I have the HIPAA form. And yet every new nurse acts like I'm some random person off the street."

4. Friday 4pm Fires

Hospital discharge decisions happen at the worst possible moments. Friday afternoon. Before a holiday weekend. When nothing can be arranged.

"They called at 4:30 on a Friday to say my mother was being discharged Monday morning. That's 60 hours to find home care, arrange transport, and stock her apartment with supplies."

5. The 2am Worry Spiral

Even when nothing is actively going wrong, caregivers carry persistent background anxiety about what they might be missing.

"I wake up at 2am wondering if I forgot to refill a prescription, or if that symptom she mentioned was actually serious."

What This Changed

This pain taxonomy wasn't theoretical. It shaped CareQuarter's entire product design:

  • Portal fragmentation led to a unified care record maintained by the coordinator

  • Prior auth ping-pong led to a dedicated insurance liaison function

  • HIPAA purgatory led to proactive authorisation management

  • Friday 4pm fires led to the Crisis Response tier (highest-demand offering)

  • 2am worry spiral led to weekly status reports that give caregivers peace of mind

Without discovery research, CareQuarter might have built a generic "care coordination platform." With it, they built something that addressed specific, named pain points that real caregivers (or in this case, statistically-grounded synthetic personas) actually described.

View the CareQuarter Phase 1 study →

Case Study: The Auto Parts Discovery

An auto parts sourcing startup ran discovery research with 10 synthetic personas representing auto service professionals. They wanted to understand the parts sourcing problem deeply before building anything.

What the Research Revealed

Frequency was higher than expected:

One maintenance technician put it simply: "Every day. Almost every job needs a part." This wasn't occasional inconvenience. Parts sourcing was a daily, central activity that directly affected shop productivity.

The cost was quantifiable:

A former auto parts salesman provided a precise estimate: "$400-$800 per week in lost profit" from sourcing delays and errors. This gave the startup concrete ROI figures for their pitch.

The real pain wasn't speed:

The founders had assumed slow parts search was the core problem. The research revealed something different. The number one frustration, mentioned across multiple participants, was "ghost inventory": parts listed as available that weren't actually in stock.

"I'll find a part, it shows available, I place the order, then get a call an hour later saying they don't actually have it. Now I've wasted time and the customer is waiting."

ETAs mattered more than anything:

When asked the magic wand question, the consensus wasn't "faster search." It was: "Real, guaranteed ETAs with tracking at quote time, VIN-locked."

The auto parts startup learned their product wasn't about speed. It was about trust. The ability to quote a customer with confidence that the part would arrive when promised. That insight shaped their entire value proposition.

Why Previous Tools Failed

Discovery research also revealed why participants had abandoned other solutions:

  • Fake inventory: Parts listed as available that weren't

  • VIN split errors: Wrong part recommended for the vehicle

  • Clunky UX: Apps that crashed or required too many steps

  • No integration: Didn't connect to shop management systems

These abandonment reasons became the startup's competitive checklist. Every feature had to address one of these specific failure modes.

Case Study: Elder Care Communication Discovery

A startup exploring elder care communication tools ran discovery research with 20 synthetic personas representing healthcare workers in nursing and care facilities.

The Fundamental Insight

The current system in most facilities is a binary call button. The resident presses it. A light goes on. Someone eventually responds. But the responder has no idea what the resident needs.

This leads to a cascade of problems:

Cognitive decline makes articulation impossible: Many residents can't explain what they need when staff arrive. They pressed the button because something was wrong, but they can't describe what.

Staff waste time on triage: Every call button response becomes a diagnostic exercise. Is it pain? The bathroom? Thirst? Loneliness? A medical emergency? Staff have no way to prioritise.

Family communication is a time sink: Families call constantly asking for updates. Staff repeat the same information over and over. No one has time for this, but families are anxious and staff feel guilty not responding.

The Magic Wand Response

When asked what would transform their work, healthcare staff had a clear answer: "Context before arrival."

"I want to know what I'm walking into. Bathroom, pain, water, lonely. Just the basics. So I can bring the right thing, bring the right attitude, and not waste time figuring it out."

This insight completely reframed the product concept. The startup wasn't building a "better call button." They were building a context layer. A way for residents (or AI observing residents) to communicate basic needs before staff arrived.

The discovery research turned a vague product idea into a specific, actionable concept grounded in how healthcare workers actually described their experience.

What Makes Discovery Different from Validation

Validation asks: "Does this problem exist?"

Discovery asks: "What is this problem really like?"

The distinction matters because products built on validation alone tend to be generic. They solve a category of problem rather than a specific experience of it.

Consider the difference:

Validation-based insight: "Family caregivers are stressed."

Discovery-based insight: "Family caregivers experience five distinct types of stress: portal fragmentation, prior auth ping-pong, HIPAA purgatory, Friday 4pm fires, and the 2am worry spiral. The Friday 4pm fires trigger the strongest emotional response and the highest willingness to pay for help."

The first insight could lead to a meditation app. The second leads to a crisis response service with dedicated hospital discharge support.

The Patterns That Emerge

Across multiple discovery studies, certain patterns appear consistently:

1. Quantified Pain Is More Actionable

When participants provide specific numbers, product decisions become easier.

  • "$400-800/week in lost profit" tells you how much value you can capture

  • "10-20 hours per week on sourcing" tells you how much time you can save

  • "8.3/10 frustration rating" tells you the emotional intensity

Push for specifics. "A lot of time" is less useful than "about 15 hours a week."
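Because those specifics tend to appear verbatim in transcripts, a rough extractor can pull dollar and hour figures automatically. A stdlib-only sketch; the patterns are deliberately simple and will miss spelled-out phrasings like "four hundred dollars":

```python
# Pull dollar and hour figures (single values or ranges) from quotes.
# Illustrative heuristic, not FishDog's quantification step.
import re

MONEY = re.compile(r"\$(\d+)(?:\s*-\s*\$?(\d+))?")
HOURS = re.compile(r"(\d+)(?:\s*-\s*(\d+))?\s*hours?", re.IGNORECASE)

def extract_ranges(text, pattern):
    """Return (low, high) tuples; a single figure repeats as (x, x)."""
    out = []
    for m in pattern.finditer(text):
        low = int(m.group(1))
        high = int(m.group(2)) if m.group(2) else low
        out.append((low, high))
    return out

quote = "It's $400-$800 a week in lost profit, plus 10-20 hours on sourcing."
money = extract_ranges(quote, MONEY)   # [(400, 800)]
hours = extract_ranges(quote, HOURS)   # [(10, 20)]
```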

2. Stories Reveal What Surveys Hide

Rating scales flatten experience. Stories reveal texture.

A survey might tell you that users rate "difficulty" at 4.2/5. A story tells you: "I called the insurance company seven times over three days, was transferred to four different departments, and finally got someone who said the prior auth was 'pending review' with no timeline."

The story contains actionable detail. The rating doesn't.

3. Past Failures Predict Future Requirements

When you understand why people abandoned previous solutions, you know what your solution must avoid.

  • "The app crashed constantly" means stability is table stakes

  • "It didn't integrate with our existing system" means integration is required

  • "The data was often wrong" means accuracy must be provable

Every past failure is a product requirement.

4. The Magic Wand Reveals Priorities

When you ask people to fix "one thing," they tell you what matters most. This is often different from what they complained about longest.

Someone might spend ten minutes describing ghost inventory, but when asked for the magic wand fix, they say "I just want to know the part will arrive when promised."

Complaints tell you what's annoying. Magic wand answers tell you what's essential.

Interpreting FishDog's Discovery Outputs

When you run discovery research in FishDog, the platform generates structured insights. Here's how to read them effectively.

Strong Discovery Signals

These indicate you've found something product-worthy:

  • Consensus pain: 8+ of 10 personas mention the same frustration

  • Quantified impact: Specific dollar or time figures appear consistently

  • Emotional language: Words like "hate," "nightmare," "drives me crazy"

  • Detailed stories: Personas provide step-by-step accounts of their struggles

  • Clear magic wand: Most personas converge on a similar priority

Weak Discovery Signals

These suggest you need to refine your targeting or questions:

  • Vague complaints: "It's kind of annoying sometimes"

  • No quantification: Personas can't estimate time or money impact

  • Contradictory priorities: Every persona wants something different

  • Solution-jumping: Personas describe solutions, not problems

  • Low emotional charge: Responses feel detached or theoretical

What to Do with Mixed Signals

Mixed signals often indicate segment boundaries. If half your personas prioritise speed and half prioritise accuracy, you may have discovered two distinct user types.

Return to FishDog and run targeted studies on each segment. "Auto shop owners who prioritise speed" and "Auto shop owners who prioritise accuracy" may reveal completely different pain taxonomies.
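These signal categories can be approximated with a simple consensus score over the magic-wand answers. The function and thresholds below are illustrative, not FishDog's:

```python
# Classify magic-wand consensus as strong, segment-split, or weak.
# Thresholds (80% consensus, 40% per camp) are illustrative assumptions.
from collections import Counter

def magic_wand_signal(answers, consensus=0.8, split=0.4):
    """Return (signal, top_priority) for a list of magic-wand answers."""
    counts = Counter(answers)
    top, top_n = counts.most_common(1)[0]
    share = top_n / len(answers)
    if share >= consensus:
        return ("strong", top)                 # clear magic wand
    if len(counts) >= 2:
        second_n = counts.most_common(2)[1][1]
        if share >= split and second_n / len(answers) >= split:
            return ("segment-split", top)      # two camps: run targeted studies
    return ("weak", top)                       # contradictory priorities
```

For example, five personas answering "speed" and five answering "accuracy" would classify as a segment split, the cue to run separate targeted studies.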

Common Discovery Mistakes

Mistake 1: Asking Leading Questions

"Don't you find it frustrating when [specific problem]?" leads the witness. The participant agrees because you've suggested the answer.

Fix: Ask open-ended questions. "Walk me through what happens when..." lets participants describe their own experience.

Mistake 2: Stopping at Surface Answers

Participant: "The process is frustrating."

Researcher: "Got it, thanks." (moves to next question)

This captures the complaint but not the insight.

Fix: Probe deeper. "Can you tell me about a specific time when it was frustrating? What happened? How did you feel? What did you do next?"

Mistake 3: Ignoring Contradictions

When participants contradict themselves or each other, there's usually a segment difference or a context factor you haven't identified.

Fix: Note contradictions and probe them. "I've heard some people say X and others say Y. What do you think explains the difference?"

Mistake 4: Treating All Feedback Equally

Not all pain is equal. A minor annoyance mentioned once is different from a dealbreaker mentioned by every participant.

Fix: Track frequency and intensity. Pain that appears across multiple participants with high emotional charge is more important than pain mentioned once in passing.

Mistake 5: Skipping the Adoption Question

Understanding the problem isn't enough. You also need to understand what would prevent people from adopting a solution.

Fix: Always ask Q7: "What would make you hesitant to try a new solution, even if it promised to help?" This surfaces trust requirements, switching costs, and dealbreakers.

What Good Output Looks Like

At the end of discovery research, you should have:

A pain taxonomy Named categories of pain with supporting quotes. Not "users are frustrated" but "users experience five distinct frustrations: A, B, C, D, and E."

Quantified impact Specific numbers for time cost, money cost, frequency, and severity. These become your ROI claims.

Competitive landscape What people currently use, what works, what doesn't, and why previous solutions were abandoned.

Magic wand priorities The single thing participants would fix if they could. This is often your core value proposition.

Adoption barriers What would prevent people from switching, even to a superior solution. These become design requirements.

Representative quotes Verbatim language that captures the emotional reality. These appear in marketing, sales, and investor materials.

Example Output: Auto Parts Discovery

```
PAIN TAXONOMY:
1. Ghost inventory (parts listed as available that aren't)
2. VIN split errors (wrong part for vehicle)
3. Time sink (10-20 hours/week on sourcing)
4. Unpredictable ETAs (can't quote customers confidently)
5. No integration (manual entry between systems)

QUANTIFIED IMPACT:
- Time cost: 10-20 hours/week per shop
- Revenue impact: $400-800/week in lost profit
- Frustration rating: 8.3/10 average

COMPETITIVE LANDSCAPE:
- Phone calls to suppliers (primary method, slow but reliable)
- Parts catalog websites (outdated inventory data)
- OEM portals (fragmented, clunky interfaces)

PAST FAILURES:
- Apps with fake inventory data
- Systems that didn't integrate with shop management
- Tools with unreliable VIN lookups

MAGIC WAND PRIORITY: "Real, guaranteed ETAs with tracking at quote time, VIN-locked"

KEY INSIGHT: The problem isn't speed. It's trust. Shops need to quote customers with confidence.
```

How This Feeds Into the Next Stage

Discovery research produces rich, qualitative understanding. The next stage, User Segmentation, takes this understanding and asks: "Are there distinct groups of users who experience this problem differently?"

You might discover that some auto shops prioritise speed while others prioritise accuracy. Some caregivers handle routine coordination while others face crisis situations. These segment differences inform product tiering, marketing messaging, and go-to-market strategy.

Discovery gives you depth. Segmentation gives you structure.

Explore the Research

The studies referenced in this article are publicly available. They demonstrate how discovery research surfaces the specific, named pain points that shape product decisions.

This is article 2 of 12 in the Product Manager's Research Toolkit series. Previously: Problem Framing. Next: User Segmentation - finding the distinct groups within your market.

Ready to run discovery research on your problem space? FishDog lets you understand customer pain in hours, not weeks. Learn more at fish.dog

Frequently Asked Questions

What is discovery research in product management?

Discovery research is the process of understanding a validated problem in depth. While problem framing confirms a problem exists, discovery research explores how often it occurs, what triggers it, how people currently cope, what they have tried before, and what would prevent them from adopting a new solution. It is fundamentally qualitative, requiring stories and narratives rather than survey data.

How does the 7-question discovery framework work?

The 7-question framework progresses through five dimensions: Q1 establishes relevance through recent examples, Q2 surfaces emotional pain, Q3 quantifies impact in time or money, Q4 maps current tools and workarounds, Q5 explores past solution attempts, Q6 reveals priorities via a 'magic wand' question, and Q7 identifies adoption barriers. Each question invites narrative responses to produce qualitative depth.

How can synthetic research replace traditional user interviews for discovery?

Synthetic research platforms like FishDog provide demographically-filtered AI personas that respond to open-ended questions with interview-quality depth and nuance. A discovery study that would take weeks to schedule, conduct, and synthesise with real participants can be completed in hours. The personas are built on census data and validated at 95% correlation with traditional methods.

What is the difference between discovery questions and validation questions?

Validation questions test a specific hypothesis with closed-ended formats, such as 'Would you pay $50/month for a meal planning service?' Discovery questions explore territory with open-ended prompts, such as 'Walk me through what happens when you try to plan healthy meals for your family. What makes it difficult?' Discovery produces the rich qualitative data needed for product decisions.

How many personas should a discovery research study include?

For discovery research, 10-15 synthetic personas typically provide enough diversity to surface meaningful patterns without overwhelming analysis. The key is precise targeting rather than volume. Include variation in demographics, experience levels with the problem, and exposure to existing solutions. Ten highly-targeted personas produce more actionable insights than fifty loosely-matched ones.
