Using Design Sprints, structured brainstorming, and synthetic research to move from "what's the problem" to "what could we build" without guessing
The Thirty-Seven Idea Problem
The average product team generates thirty-seven ideas per brainstorming session and builds two of them. The selection process, in most organisations, bears an uncomfortable resemblance to guessing.
This is not a minor inefficiency. It is the moment where months of careful research collide with organisational politics, personal enthusiasm, and the loudest voice in the room. You have validated a problem (Stage 1). You understand it deeply (Stage 2). You know who has it (Stage 3). You have mapped the competitive landscape (Stage 4). And now, armed with all of that insight, you gather in a conference room and someone says "What if we built..."
What follows is usually a mixture of genuinely promising concepts, technically impressive but commercially irrelevant ideas, things the CEO saw at a conference, and at least one suggestion that solves an entirely different problem. The team generates dozens of possibilities. Then, lacking any rigorous way to evaluate them, they narrow the list through a process best described as "informed vibes."
The problem is not a shortage of ideas. Product teams are very good at generating ideas. The problem is that evaluating ideas traditionally requires building them, or at least building enough of them to learn whether they work. This makes ideation expensive and slow, which in turn makes teams conservative. They default to incremental improvements because the cost of being wrong about a bold idea is too high.
What if you could test ideas before you built anything at all?
That question is what this article is about. Solution ideation is Stage 5 of the product research lifecycle, the bridge between understanding and building. Done well, it transforms brainstorming from a creative free-for-all into a disciplined process where ideas are generated, evaluated, and validated against real user expectations in days rather than months.
What Solution Ideation Actually Is
Solution ideation is the process of generating potential solutions to a validated problem and narrowing them to the most promising candidates for further development.
Note the word "validated." Ideation without prior research is just brainstorming. It produces ideas disconnected from user needs, shaped by internal assumptions rather than external evidence. The entire point of Stages 1 through 4 is to arm you with enough understanding that your ideas are grounded in reality.
At this stage, you know several things:
- **The problem is real.** Research has confirmed it exists, it matters, and people are actively seeking solutions.
- **The problem has depth.** You understand triggers, coping mechanisms, emotional weight, and the specific moments where pain is sharpest.
- **The audience is defined.** You know which segments experience the problem most acutely and what their distinct priorities are.
- **The landscape is mapped.** You know what competitors offer, where they fall short, and what gaps exist.
Solution ideation takes all of this and asks: given everything we know, what could we build?
The emphasis on "could" is deliberate. This is a divergent stage. The goal is to expand the possibility space before narrowing it. You want many ideas before you want good ideas. Premature convergence, settling too quickly on an approach, is the most common failure mode.
The Double Diamond
The British Design Council's Double Diamond framework describes two phases of divergent and convergent thinking. The first diamond (Stages 1-4) is about understanding the problem: diverge to explore, converge to define. The second diamond (Stages 5-8) is about creating the solution: diverge to ideate, converge to deliver.
Solution ideation sits at the widest point of the second diamond. It is deliberately expansive. Everything that follows, concept testing, prioritisation, validation, is about narrowing down. But you cannot narrow effectively if you have not first explored broadly.
The Design Sprint: Five Days That Changed Ideation
In 2010, Jake Knapp was running design projects at Google that took months and frequently went nowhere. He began experimenting with compressed timelines. By the time he moved to Google Ventures, he had distilled his approach into a five-day process that has since become the most widely adopted ideation framework in product development.
The Design Sprint, as documented in Knapp's book Sprint, structures ideation into five phases:
**Day 1: Understand.** Map the problem. Interview experts within the company. Set a long-term goal and identify the most dangerous assumptions. This is where prior research (Stages 1-4) becomes invaluable. Teams that have done the work arrive on Day 1 with real data rather than opinions.
**Day 2: Diverge (Sketch).** Individual ideation. Every participant sketches solutions independently. This is critical: group brainstorming produces conformity. Individual sketching produces diversity. The Crazy 8s technique (described below) often features here, forcing rapid concept generation.
**Day 3: Decide.** Review all sketched solutions. Vote on the strongest concepts. Create a storyboard for the winning idea. The facilitator's job is to prevent the HiPPO effect (Highest Paid Person's Opinion) from overriding genuine quality.
**Day 4: Prototype.** Build a realistic facade of the solution. Not a working product, but something convincing enough to test with real users. The prototype should feel real even though it is not.
**Day 5: Validate.** Test the prototype with five target users. Observe their reactions. Collect feedback. Make a go/no-go decision.
The Design Sprint compressed what used to take months into a single week. It was revolutionary. It was also, as anyone who has run one can attest, exhausting, expensive, and logistically demanding. Getting five target users into a room on a Friday requires recruitment, scheduling, incentives, and considerable luck.
This is where the framework meets its practical limits, and where synthetic research offers something genuinely new.
Brainstorming Techniques That Actually Work
Before discussing how to evaluate ideas, it is worth examining how to generate them. Not all brainstorming is created equal. The classic format, a group of people shouting ideas while someone writes on a whiteboard, is among the least effective methods ever studied. Research by Diehl and Stroebe (1987) demonstrated that individuals brainstorming alone produced both more ideas and better ideas than groups working together. Production blocking, evaluation apprehension, and social loafing all degrade group performance.
The techniques that work share a common feature: they separate idea generation from idea evaluation, and they protect individual thinking from group pressure.
Brainwriting
In brainwriting, participants write ideas silently before sharing them. The standard format gives each person three minutes to write three ideas on a sheet of paper. The sheet is then passed to the next person, who reads the existing ideas and adds three more, either building on what is there or going in a new direction. After several rounds, the group has dozens of written concepts to review.
Why it works: it eliminates production blocking (you cannot be interrupted when writing), reduces evaluation apprehension (ideas are anonymous during generation), and prevents anchoring to the first idea voiced.
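The pass-the-sheet mechanic is easy to get wrong in practice (sheets returning to their authors, or the same pairings every round). A minimal sketch of the rotation logic, with placeholder names and placeholder "ideas" standing in for real participants:

```python
# A minimal sketch of the brainwriting rotation: each round, every
# participant adds ideas to the sheet they currently hold, then sheets
# shift one seat so nobody writes on the same sheet twice in a row.

def brainwrite(participants, rounds, ideas_per_round=3):
    """Simulate sheet rotation; the ideas come from people, not code."""
    sheets = {p: [] for p in participants}  # one sheet per seat
    order = list(participants)
    for r in range(rounds):
        for seat, person in enumerate(order):
            # Round r: each person writes on the sheet r seats away.
            sheet = sheets[order[(seat + r) % len(order)]]
            for i in range(ideas_per_round):
                sheet.append(f"{person}: idea {r + 1}.{i + 1}")
    return sheets

sheets = brainwrite(["Ana", "Ben", "Caro"], rounds=3)
# Every sheet ends the session with rounds * ideas_per_round entries,
# contributed by a different participant each round.
```

With three participants and three rounds, each sheet collects nine ideas, one batch from every person at the table.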
How Might We (HMW)
Developed at Procter & Gamble in the 1970s and popularised by IDEO, HMW reframes problems as opportunities. The format is simple: take a problem insight and rephrase it as "How might we [opportunity]?"
For example:
- Problem insight: "Users abandon the checkout when they see shipping costs."
- HMW: "How might we make shipping costs feel fair rather than surprising?"
- HMW: "How might we eliminate shipping as a separate line item?"
- HMW: "How might we make users excited about the shipping experience?"
The power of HMW is scope control. Too broad ("How might we fix e-commerce?") produces vague ideas. Too narrow ("How might we add free shipping?") constrains thinking. The art is finding the level that opens creative space while maintaining direction.
Each discovery research insight can generate multiple HMW statements, which in turn generate multiple solution concepts. This is how research feeds ideation: not by dictating the answer, but by shaping the question.
Crazy 8s
A Design Sprint staple. Each participant folds a sheet of paper into eight panels and has eight minutes to sketch eight distinct solution concepts, one per panel, one per minute. The time pressure is the point. It prevents perfectionism and forces quantity over quality.
Crazy 8s works because most people's first idea is obvious. Their second is a variation. Their third starts to stretch. By idea five or six, they are into genuinely novel territory. The constraint produces creativity.
SCAMPER
SCAMPER is a structured checklist for transforming existing ideas: Substitute, Combine, Adapt, Modify, Put to other use, Eliminate, Reverse. It works particularly well when you have a baseline concept and want to explore variations systematically.
Applied to a food delivery app, for instance: What if we substituted restaurant food with home-cooked meals? What if we combined delivery with meal planning? What if we eliminated the menu entirely and let the chef decide? Each prompt generates a distinct concept branch.
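Because SCAMPER is a fixed checklist, it lends itself to systematic prompt generation. A small sketch (the prompt wording here is illustrative, not canonical):

```python
# SCAMPER as a prompt generator: apply each lens to a baseline concept
# to produce a distinct question for the team to answer with sketches.
SCAMPER = {
    "Substitute":       "What could we swap out of {c}?",
    "Combine":          "What could we merge {c} with?",
    "Adapt":            "What existing solution could {c} borrow from?",
    "Modify":           "What could we exaggerate or shrink in {c}?",
    "Put to other use": "Who else could use {c}, and for what?",
    "Eliminate":        "What could we remove from {c} entirely?",
    "Reverse":          "What happens if we invert how {c} works?",
}

def scamper_prompts(concept):
    """Return one ideation prompt per SCAMPER lens."""
    return [f"{lens}: {q.format(c=concept)}" for lens, q in SCAMPER.items()]

for prompt in scamper_prompts("a food delivery app"):
    print(prompt)
```

Seven lenses, seven prompts, seven distinct concept branches per baseline idea.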
The Common Failure
All of these techniques share a limitation. They generate ideas. They do not evaluate them. The wall of Post-it notes looks impressive, but the question remains: which of these ideas would users actually want?
Traditionally, the answer required building prototypes and testing them, an expensive process that limits how many concepts you can explore. Most teams test one or two ideas and hope they chose correctly. The rest of the Post-it notes go in the bin.
The Fundamental Problem With Ideation
Here is the uncomfortable truth about solution ideation: the generation phase is the easy part. Any competent team, given enough coffee and enough Post-it notes, can produce dozens of interesting ideas. The hard part is deciding which ideas to pursue.
The standard approach is some form of prioritisation matrix. Impact versus effort. Reach, Impact, Confidence, Effort (RICE). Value versus complexity. These frameworks provide structure, but they share a critical flaw: the inputs are estimates. When a product manager rates an idea as "high impact," they are making an educated guess. When an engineer rates it as "medium effort," they are making a slightly more informed guess. The entire prioritisation exercise is built on assumptions about what users will value, and those assumptions are untested.
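The RICE formula itself is trivial: score = (Reach × Impact × Confidence) / Effort. A worked example (with invented numbers) makes the critique concrete:

```python
# RICE score = (Reach * Impact * Confidence) / Effort.
# Every input below is an estimate, which is exactly the problem:
# the ranking is only as trustworthy as the guesses feeding it.

def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

ideas = {
    "Concept A": rice(reach=5000, impact=2.0, confidence=0.8, effort=4),
    "Concept B": rice(reach=8000, impact=1.0, confidence=0.5, effort=2),
}
print(ideas)  # both score 2000.0
# Identical scores from very different assumptions: the matrix gives
# structure, but it cannot tell you which estimate is wrong.
```

Two concepts with very different risk profiles can land on the same score, and the framework has no way to flag which underlying guess deserves scrutiny.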
This is why so many prioritised roadmaps produce mediocre results. The ideas were filtered through a lens of internal opinion, not external evidence. The team built what they thought users would want rather than what users actually wanted.
Consider what happens in practice. A team generates forty ideas. They cluster them into themes. They vote, usually with dot stickers. The top five go into a prioritisation matrix. Two make the roadmap. The selection criteria? A mixture of strategic alignment (reasonable), technical feasibility (relevant), stakeholder enthusiasm (dangerous), and gut feeling (pervasive).
At no point in this process did anyone ask a user what they thought.
The Design Sprint's Day 5 validation addresses this, but only for one concept and only after four days of work. What if you could test five concepts against user expectations in an afternoon?
How FishDog Changes Solution Ideation
FishDog transforms ideation by making concept evaluation cheap, fast, and repeatable. Instead of selecting ideas through internal debate and then validating a single winner with real users weeks later, you can test multiple solution concepts against synthetic personas immediately.
This changes the economics of ideation fundamentally. When testing is expensive, you test rarely and conservatively. When testing costs nothing and takes minutes, you test everything.
FishDog as the "Friday" of the Design Sprint
In Knapp's original framework, Friday is validation day. Five real users interact with the prototype and provide feedback. It is the moment of truth, and it is also the bottleneck. Recruiting five qualified users, scheduling them, running hour-long sessions, synthesising notes: this takes a week of preparation for a single day of insight.
FishDog compresses Friday into minutes. Create a study with ten synthetic personas matching your target user. Describe the solution concept in plain language. Ask seven structured questions. Receive qualitative feedback from ten distinct perspectives within the hour.
This does not replace real-user testing entirely. Synthetic research is directional, not definitive. But it is extraordinarily useful for screening: testing five concepts to find the two worth prototyping, rather than prototyping two concepts and hoping you chose correctly.
The practical implication is that you can run a "Friday" after every concept sketch. Diverge on Tuesday, test on Wednesday, refine on Thursday, test again on Friday. The Sprint becomes iterative rather than linear.
The 7-Question Concept Evaluation Study
When testing a solution concept in FishDog, the following question framework extracts maximum signal:
| # | Purpose | Question |
|---|---------|----------|
| 1 | Concept comprehension | "Based on this description, what do you understand the product to do? Explain it back in your own words." |
| 2 | Solution appeal | "How appealing is this solution to you, and why? What specifically draws you to it or puts you off?" |
| 3 | Comparison to current workaround | "How does this compare to what you currently do to handle this problem? Would it be better, worse, or about the same?" |
| 4 | Willingness to adopt | "How likely would you be to try this? What would need to be true for you to switch from your current approach?" |
| 5 | Feature priority | "Which aspects of this solution matter most to you? Which could you live without?" |
| 6 | Potential friction | "What concerns or hesitations would you have about using this? What might stop you?" |
| 7 | Ideal solution description | "If you could design the perfect solution to this problem, what would it look like? What's missing from this concept?" |
Question 1 is the most revealing. If personas cannot explain what the product does after reading your description, the concept has a communication problem. This surfaces immediately in synthetic research. In real life, it surfaces after you have built the thing and nobody understands it.
Question 7 is the most generative. It captures what users actually want, which may differ significantly from what you have described. The gap between your concept and their ideal solution is your design brief.
Testing Multiple Concepts
The real power emerges when you test several concepts against the same research group. Run three studies with the same persona profile but different concept descriptions:
- Concept A: Automated solution with minimal user input
- Concept B: Guided solution with user in the loop
- Concept C: Self-service toolkit with templates
Compare the responses. Which concept do personas understand most clearly? Which generates the most enthusiasm? Which raises the fewest concerns? Where the responses converge, you have signal. Where they diverge, you have segments.
This comparative approach is nearly impossible with traditional research. Testing three concepts with real users means three times the recruitment, three times the sessions, three times the cost. With FishDog, it means three API calls.
The Claude Code Workflow for Concept Testing
Claude Code integrates directly with FishDog's API, making it straightforward to test solution concepts as part of the ideation process. Here is the practical workflow.
Step 1: Define Your Concepts
Write a clear, jargon-free description of each solution concept. Keep it under 200 words. Include:
- What the product does
- How the user interacts with it
- What outcome it delivers
- How it differs from current alternatives
Avoid technical implementation details. Users do not care whether your backend runs on Kubernetes. They care whether the thing solves their problem.
Example concept description:
"A mobile app for pet owners that monitors your pet's health by tracking daily activity, eating patterns, and sleep. You take a 30-second video of your pet each morning, and the app uses AI to detect changes in movement, posture, or behaviour that might indicate health issues. It sends you a weekly health summary and alerts you immediately if it spots anything concerning. Think of it as a daily health check that catches problems before they become emergencies."
Step 2: Create the Research Group
Build a research group matching your target user from Stages 2 and 3:
```json
{
  "name": "Pet Owner Health Monitoring - Concept Test",
  "group_size": 10,
  "filters": {
    "country": "USA",
    "age_min": 25,
    "age_max": 55
  }
}
```
Step 3: Run the Study
Create a study with the 7-question concept evaluation framework. Include the concept description in the study objective so personas have context:
```
Create a FishDog study to test the pet health monitoring app concept.
Use the 7-question concept evaluation framework. Target 10 US adults
aged 25-55 who are likely pet owners. Include this concept description
in the objective: [paste description].
```
Claude Code will handle the API orchestration: creating the group, creating the study, submitting questions, polling for completion, and extracting results.
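For intuition, here is a sketch of that orchestration flow. The endpoint paths and payload fields below are illustrative assumptions, not the documented FishDog API, and `StubClient` stands in for the real HTTP layer so the flow is runnable:

```python
# Sketch of the create-group -> create-study -> poll lifecycle.
# Endpoint names and fields are assumptions for illustration only.
import time

class StubClient:
    """Pretends to be the API so the flow below executes locally."""
    def post(self, path, payload):
        return {"id": f"{path.strip('/')}-1", **payload}
    def get(self, path):
        return {"status": "complete",
                "responses": [{"persona": i, "answers": {}} for i in range(10)]}

def run_concept_test(client, concept_description, questions):
    group = client.post("/groups", {
        "name": "Concept test group", "group_size": 10,
        "filters": {"country": "USA", "age_min": 25, "age_max": 55},
    })
    study = client.post("/studies", {
        "group_id": group["id"],
        "objective": concept_description,
        "questions": questions,
    })
    while True:  # poll until the study reports completion
        result = client.get(f"/studies/{study['id']}")
        if result["status"] == "complete":
            return result["responses"]
        time.sleep(5)

responses = run_concept_test(StubClient(),
                             "Pet health monitoring app concept",
                             ["Q1", "Q2"])
```

The point of the sketch is the shape of the work, not the specifics: one group, one study per concept, and a polling loop that Claude Code runs for you.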
Step 4: Analyse Responses
When the study completes, evaluate each concept against five criteria:
**Comprehension Score.** Did personas accurately describe the product back? If 8/10 got it right, your concept is clear. If 4/10 were confused, you have a messaging problem, and possibly a product complexity problem.
**Appeal Distribution.** How many personas found the concept appealing versus unappealing? More importantly, why? The reasons behind appeal matter more than the count.
**Switching Likelihood.** Would they actually switch from their current approach? "Interesting idea" and "I would use this" are very different statements. Look for concrete switching triggers, not polite enthusiasm.
**Friction Points.** What would stop adoption? Common friction points include: price sensitivity, privacy concerns, learning curve, integration with existing habits, trust in accuracy. These become your design constraints.
**Gap Analysis.** What is the delta between your concept and the "ideal solution" personas described in Question 7? This gap tells you what to add, remove, or change.
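Once you have coded each response (by hand, or with a model's help), the criteria reduce to simple tallies. A sketch with invented tags, illustrating how the numbers in a comparison matrix can be derived:

```python
from collections import Counter

# Hand-coded tags per persona response (illustrative data): did they
# explain the concept back correctly, how appealing did they find it,
# and what friction did they raise?
responses = [
    {"understood": True,  "appeal": "high",   "friction": "privacy"},
    {"understood": True,  "appeal": "high",   "friction": "price"},
    {"understood": False, "appeal": "medium", "friction": "privacy"},
    {"understood": True,  "appeal": "low",    "friction": "accuracy"},
    {"understood": True,  "appeal": "high",   "friction": "privacy"},
]

comprehension = sum(r["understood"] for r in responses)
appeal = Counter(r["appeal"] for r in responses)
top_friction = Counter(r["friction"] for r in responses).most_common(1)[0]

print(f"Comprehension: {comprehension}/{len(responses)} clear")
print(f"Appeal distribution: {dict(appeal)}")
print(f"Top friction: {top_friction[0]} ({top_friction[1]} mentions)")
```

The tagging is where the judgment lives; the arithmetic is the easy part. Keep the raw quotes alongside the tallies so the "why" survives the summarisation.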
Step 5: Compare and Decide
If you tested multiple concepts, create a comparison matrix:
| Criterion | Concept A | Concept B | Concept C |
|---|---|---|---|
| Comprehension | 9/10 clear | 6/10 clear | 8/10 clear |
| Appeal | High (7/10) | Medium (5/10) | High (8/10) |
| Switch intent | Strong | Weak | Moderate |
| Top friction | Privacy | Complexity | Cost |
| Gap to ideal | Small | Large | Medium |
This matrix, built on actual persona responses rather than team estimates, provides a defensible basis for concept selection. When the VP of Engineering asks "Why are we building Concept C instead of Concept A?" you can point to specific user feedback rather than shrugging about prioritisation scores.
When Ideation Goes Wrong
Solution ideation fails in predictable ways. Knowing the failure modes helps you avoid them.
The HiPPO Effect
The Highest Paid Person's Opinion dominates concept selection. The CEO loves Concept A, so the team builds Concept A, regardless of what the research indicates. This is organisational dysfunction masquerading as decisiveness.
FishDog provides a counterweight. When you can present actual user responses alongside executive preferences, the conversation shifts from authority to evidence. It does not eliminate the HiPPO entirely (nothing does), but it makes ignoring evidence a conscious choice rather than a default.
Anchoring to the First Idea
The first concept articulated in a brainstorming session tends to dominate subsequent thinking. All later ideas are evaluated relative to the anchor, and the group converges too quickly. Brainwriting and individual sketching (Crazy 8s) mitigate this during generation. Testing multiple concepts with FishDog mitigates it during evaluation.
Solution Attachment
Teams fall in love with a particular approach before testing it. This is especially common when a technically gifted team member proposes something elegant. Elegance of implementation has no correlation with user value, but it is powerfully persuasive in internal discussions.
The antidote is to test concepts blind. Present them to personas without revealing which one the team favours. Let the data speak.
Skipping the Problem
The most dangerous failure is ideating solutions to the wrong problem. This happens when teams skip Stages 1-4 or pay them lip service. A brilliant solution to the wrong problem is still a failed product.
If your ideation session keeps producing concepts that do not map back to validated user needs, stop. Return to discovery research. You have not understood the problem well enough to solve it.
Convergence Without Divergence
Some teams generate three ideas, pick the obvious one, and call it ideation. This is not ideation; it is rationalisation. Genuine divergence is uncomfortable. It produces ideas that feel wrong, impractical, or absurd. Some of those ideas, refined and tested, turn out to be the best concepts in the room.
Force breadth before depth. Crazy 8s works precisely because it demands eight ideas when you have two.
From Ideation to Concept Testing
Solution ideation is a divergent activity. It expands the possibility space. The next stage, Concept Testing, is convergent. It narrows the field to the concepts worth prototyping.
The distinction matters because the outputs are different. Ideation produces a ranked shortlist of concepts with initial validation signals. Concept testing produces detailed feedback on specific solutions: what works, what does not, what needs to change, and for whom.
At the end of Stage 5, you should have:
A shortlist of 2-3 validated concepts. Not forty ideas on Post-it notes. Not one idea the CEO likes. Two or three concepts that synthetic users understood, found appealing, and expressed willingness to adopt.
Comprehension evidence for each concept. Can users explain it back? If not, the concept needs simplification before testing.
A friction map. What would stop people from using each concept? These friction points become design requirements.
A gap analysis. What does the ideal solution look like from the user's perspective? Where do your concepts fall short?
A basis for prioritisation. Not internal estimates of impact and effort, but external evidence of user appeal and adoption likelihood.
This output feeds directly into Stage 6 (Concept Testing), where the shortlisted concepts are explored in greater depth, and into Stage 7 (Feature Prioritisation), where specific capabilities are ranked by user value.
Quick Reference: Solution Ideation Checklist
Before Ideation:
- [ ] Problem validated (Stage 1)
- [ ] Discovery research complete (Stage 2)
- [ ] User segments defined (Stage 3)
- [ ] Competitive landscape mapped (Stage 4)
- [ ] All prior research findings summarised and accessible to the team
During Ideation:
- [ ] Divergent techniques used (Brainwriting, HMW, Crazy 8s, SCAMPER)
- [ ] Individual ideation before group discussion
- [ ] Minimum 15 distinct concepts generated before any filtering
- [ ] Concepts described in user-facing language, not technical specifications
Concept Evaluation:
- [ ] Research group created matching target user
- [ ] 7-question evaluation framework applied to top concepts
- [ ] Multiple concepts tested comparatively
- [ ] Comprehension, appeal, switching intent, friction, and gap analysed
After Ideation:
- [ ] 2-3 concepts shortlisted with evidence
- [ ] Friction points documented as design constraints
- [ ] Gap analysis captured for each concept
- [ ] Go/no-go decision for each concept, supported by data
Further Reading
This is article 5 of 9 in the Product Stage series. Previous: [Stage 4: Competitive Analysis](stage_04_competitive_analysis.md). Next: Stage 6: Concept Testing.
Want to test solution concepts before you build them? FishDog lets you evaluate ideas with synthetic personas in minutes, not months. Stop guessing which idea to build.


