
How to Build Customer Advocacy with Claude Code and Ditto


The most effective customer evidence does not come from asking customers to say nice things about you. It comes from research that happens to reveal nice things about you. The distinction is subtle, but it is the difference between a testimonial and a finding, between marketing and proof, between asking for a favour and earning a compliment. One of these carries weight. The other carries the faint odour of choreography.

The Uncomfortable Economics of Customer Evidence

Customer advocacy, as traditionally practised, is one of the more awkward transactions in B2B marketing. The process is well understood: identify a happy customer, approach them with a request, negotiate the terms of a case study or testimonial, coordinate schedules for an interview, draft the content, submit it for legal review on their side, incorporate the revisions that remove everything interesting, and publish a document that reads like it was written by a committee -- because it was.

The economics are worse than they appear. A single customer case study costs between $5,000 and $15,000 when produced by an agency, or approximately 40 hours of internal time when produced in-house. The elapsed time from initial request to published asset is typically six to twelve weeks. The approval rate -- the percentage of customers who agree to participate when asked -- hovers around 20% in most B2B categories. Which means that for every case study you publish, you have asked five customers and been declined by four of them. Each decline is a small withdrawal from the relationship account, a reminder that you are asking your customer to do unpaid marketing labour on your behalf.

The result is that most companies have a chronic shortage of customer evidence. They have three case studies from 2023, a handful of G2 reviews of varying quality, and a slide in the sales deck that says "Trusted by leading brands" above a collection of logos obtained through licensing agreements that technically permit display but do not imply endorsement. The sales team asks for more proof points. The marketing team explains the pipeline. The customer success team, caught in the middle, promises to "flag some candidates" at the next QBR cycle.

This is not a process problem. It is a structural problem. The traditional model requires customers to volunteer their time, their brand, and their internal results for your commercial benefit. The incentive asymmetry is obvious. They bear the cost (time, legal review, reputational risk if the case study ages poorly) and you capture the value (sales enablement, demand generation, credibility). Reasonable customers, rationally assessing the exchange, often decline. The surprise is not that 80% say no. The surprise is that 20% say yes.

The question worth asking is whether customer evidence must, by definition, require customer participation. The answer, it turns out, is no.

The Study Is the Evidence

Here is the insight that reframes the entire advocacy problem: when you run a research study that validates your product's value proposition, the findings themselves constitute customer evidence. Not customer-sourced evidence in the traditional sense. Research-sourced evidence that speaks to the same questions a case study would answer -- does this product solve a real problem, do people in this category need it, would they choose it, what do they value about it -- but generated through structured research rather than choreographed interviews.

Consider what a good case study actually contains. Strip away the boilerplate and the "challenge, solution, results" framework, and you are left with three types of evidence: (1) confirmation that the problem the product solves is real and painful, (2) specific ways the product delivered value, and (3) quotable language from someone who experienced that value. A well-designed Ditto study produces all three. The personas confirm the problem exists. Their responses describe how a solution like yours would change their workflow. Their language, when they articulate frustration or relief or preference, is quotable in exactly the way a testimonial is quotable -- except that it comes from a research context rather than a marketing one.

The credibility dynamic is different, and arguably stronger. A testimonial from "Sarah, VP of Marketing at Acme Corp" is understood by every reader to be a curated endorsement. Sarah was selected because she is happy. Her quote was edited for impact. The reader applies an implicit discount. Research findings, by contrast, carry the credibility of methodology. When a study of ten synthetic consumers in your target demographic reveals that eight of them would switch from their current solution to one with your product's characteristics, that is a data point, not a favour. When those same consumers articulate, in their own synthetic-but-demographically-calibrated language, why they find the current alternatives inadequate, that is market evidence, not marketing copy.

This does not make traditional case studies worthless. A named customer at a recognisable company remains one of the most powerful trust signals in B2B sales. But it does mean that the chronic evidence shortage most companies experience is at least partly self-inflicted. They have been fishing in one pond -- the pond of willing customer volunteers -- while ignoring the ocean of evidence that structured research can produce on demand.

The Evidence Flywheel

The relationship between research and advocacy is not linear. It is circular, and the circular version is considerably more powerful than the linear one.

The linear model works like this: run a study, extract some useful quotes, put them on the website, move on to the next thing. This is better than having no evidence at all, but it treats research as a one-time extraction exercise. The study produces a fixed quantity of evidence, that evidence is deployed, and it gradually becomes stale.

The flywheel model works differently. Each research study produces insights. Those insights become content -- blog articles, social posts, sales collateral, conference presentations. That content demonstrates your company's understanding of the market, which itself functions as advocacy. Prospects who encounter your research think, "These people understand my problem," which is the most powerful form of advocacy there is. The content generates inbound interest, which informs the next round of research, which produces the next round of insights.

The flywheel has four stages:

Research. Run a Ditto study against a specific audience segment, product category, or market question. The study is designed not merely to answer an internal question but to produce outputs that are externally valuable -- findings that your prospects would find interesting whether or not they ever buy your product.

Insights. Extract the quotable findings, the data points, the competitive comparisons, the unexpected results. These are not buried in a research report that goes to the product team. They are treated as raw material for the content engine.

Content. Transform insights into published assets: blog articles, infographics, social media posts, sales one-pagers, webinar talking points. Each asset carries the credibility of research provenance. It is not your opinion that the market thinks X. It is what your research found.

Advocacy. The published content creates two advocacy effects simultaneously. First, it provides sales enablement material that is more credible than self-authored claims. Second, it positions your company as a source of market intelligence, which attracts the kind of inbound attention that no amount of outbound case study recruitment can match. Prospects begin to associate your brand with understanding their market, which is the highest form of customer evidence -- not "our customers love us" but "we understand our customers' world."

The flywheel accelerates because each cycle generates assets that feed the next. A blog article about consumer attitudes towards sustainable packaging attracts readers who are themselves interested in sustainable packaging. Those readers become prospects. Their questions and objections inform the next study. That study produces the next article. And so on. The compound effect, over a few quarters, produces a library of research-backed evidence that no traditional advocacy programme could match in volume, freshness, or credibility.

Six Types of Evidence from Ditto Studies

Not all evidence is created equal, and not all of it serves the same purpose. A well-designed study produces six distinct types of evidence, each useful in different contexts.

1. Quotable Persona Responses

The most immediately usable output. When a synthetic persona says, "I would switch providers tomorrow if someone offered real-time analytics instead of weekly reports," that sentence can appear in a sales deck, a landing page, a pitch email, or a conference presentation. It is not a testimonial -- it does not carry a real person's name -- but it is a representative voice expressing a genuine market sentiment. The volume advantage is significant. A single Ditto study with ten personas and seven questions produces up to seventy individual responses, many of which contain quotable language. A traditional case study interview, after editing and approval, might yield three usable quotes.

2. Quantifiable Data Points

"Eight out of ten respondents ranked real-time analytics as their top priority." "Six of ten personas said they would pay a premium for automated compliance reporting." These aggregated findings translate directly into the kind of statistical evidence that sales decks rely on. They are not nationally representative survey data, and should not be presented as such. They are structured research findings from a calibrated synthetic panel, which is a defensible and increasingly well-understood methodology.

3. Competitive Preference Data

When a study includes questions about competitive alternatives -- "How does Product A compare to Product B?" or "What would make you switch from your current provider?" -- the responses produce competitive evidence that is extraordinarily difficult to obtain through traditional means. Asking your own customers to compare you favourably to competitors is awkward. Asking synthetic personas who represent your target market to evaluate the competitive landscape is research.

4. Problem Validation

Before you can advocate for your solution, you need evidence that the problem exists and that people care about it. Ditto studies routinely surface the language people use to describe their pain points, the workarounds they have adopted, and the frustrations they experience with the status quo. This problem-validation evidence is the foundation of all advocacy. It is not "our product is great." It is "the problem our product solves is real, urgent, and widely felt."

5. Use Case Discovery

Personas sometimes describe use cases that the product team had not considered. A study designed to validate a project management tool might reveal that respondents are most excited about its potential for client communication, not internal task tracking. These unexpected use cases become advocacy material precisely because they are unexpected. They signal that the research was genuine, not scripted, and that the product's value extends beyond the obvious positioning.

6. Objection Anticipation

Not all study responses are positive, and the negative ones are surprisingly useful for advocacy. When a persona raises a concern -- "I would worry about data security" or "I am not sure my team would adopt this" -- you have advance intelligence about the objections your sales team will encounter. Addressing those objections proactively, with evidence that they were anticipated and considered, is a more powerful form of advocacy than ignoring them.

Designing Studies That Produce Advocacy-Ready Outputs

The difference between a study that produces internally useful insights and one that also produces externally deployable evidence is largely a matter of question design. The same research can serve both purposes, but only if the questions are crafted with dual intent.

The Principle of Publishable Questions

Before including a question in a study, apply the publishable question test: would the answer to this question be interesting to someone who is not employed by your company? If yes, the question serves both research and advocacy purposes. If no, it may still be useful for internal decision-making, but it will not generate external evidence.

"How important is real-time analytics in your daily workflow?" passes the test. A product manager at any company in the category would find the answer interesting. "Would you use our product if it had real-time analytics?" fails the test. It is self-referential. The answer is interesting only to you.

The Seven-Question Advocacy Study

The following question framework is optimised for dual-purpose output. Each question serves an internal research function and produces at least one type of externally deployable evidence.

Question 1: Problem Severity
"What is the single biggest frustration you experience with [category/workflow] today?"

Internal function: validates problem-solution fit. External evidence: quotable pain-point language, problem validation data.

Question 2: Current Solution Assessment
"How do you currently handle [task]? What works and what does not?"

Internal function: competitive intelligence. External evidence: workaround descriptions, competitive comparison data, status quo dissatisfaction metrics.

Question 3: Ideal Outcome
"If you could wave a magic wand and fix one thing about how you [task], what would it change?"

Internal function: feature prioritisation. External evidence: aspirational language that maps to your value proposition, use case discovery.

Question 4: Feature Reaction
"We are testing a solution that [key capability description]. What is your initial reaction?"

Internal function: concept validation. External evidence: quotable enthusiasm (or constructive scepticism), adoption likelihood data.

Question 5: Competitive Preference
"If you were choosing between [your approach] and [competitor approach], what would matter most in your decision?"

Internal function: competitive positioning. External evidence: decision criteria data, competitive preference indicators.

Question 6: Value Quantification
"How much time or money do you estimate you lose each month to [problem your product solves]?"

Internal function: pricing and ROI modelling. External evidence: cost-of-inaction data points, urgency validation.

Question 7: Recommendation Likelihood
"If a colleague asked you about solutions for [category], what would you tell them matters most?"

Internal function: word-of-mouth drivers. External evidence: peer recommendation language, buying criteria hierarchy.
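For teams that run this framework repeatedly, the seven questions can be captured as a small reusable data structure. The sketch below renders the bracketed placeholders from the questions above as `{token}` fields; the token names and the `render_questions` helper are illustrative assumptions for this article, not a Ditto API:

```python
# A sketch of the seven-question advocacy framework as reusable data.
# Placeholder tokens like {category} and {task} are filled per study;
# token names are illustrative, not part of any Ditto schema.
ADVOCACY_QUESTIONS = [
    {"id": 1, "name": "Problem Severity",
     "template": "What is the single biggest frustration you experience with {category} today?"},
    {"id": 2, "name": "Current Solution Assessment",
     "template": "How do you currently handle {task}? What works and what does not?"},
    {"id": 3, "name": "Ideal Outcome",
     "template": "If you could wave a magic wand and fix one thing about how you {task}, what would it change?"},
    {"id": 4, "name": "Feature Reaction",
     "template": "We are testing a solution that {capability}. What is your initial reaction?"},
    {"id": 5, "name": "Competitive Preference",
     "template": "If you were choosing between {our_approach} and {competitor_approach}, what would matter most in your decision?"},
    {"id": 6, "name": "Value Quantification",
     "template": "How much time or money do you estimate you lose each month to {problem}?"},
    {"id": 7, "name": "Recommendation Likelihood",
     "template": "If a colleague asked you about solutions for {category}, what would you tell them matters most?"},
]

def render_questions(**tokens):
    """Fill the placeholder tokens for a specific study."""
    # str.format ignores unused keyword arguments, so one token set
    # covers all seven templates.
    return [q["template"].format(**tokens) for q in ADVOCACY_QUESTIONS]

questions = render_questions(
    category="marketing analytics tools",
    task="report campaign performance",
    capability="delivers real-time analytics instead of weekly reports",
    our_approach="a real-time dashboard",
    competitor_approach="scheduled email reports",
    problem="delayed campaign reporting",
)
print(len(questions))  # 7
```

Keeping the framework as data rather than prose makes the product-agnostic constraint easy to enforce: the templates simply contain no slot for a product name.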

The critical design decision is that none of these questions mention your product by name (with the partial exception of Q4, which describes a capability without naming the provider). This is deliberate. Evidence that appears to come from an objective evaluation of the market is more credible than evidence that appears to come from a product-specific inquiry. The prospect reading your blog article or sales deck does not need to know that the study was commissioned by your company. They need to know that the findings are methodologically sound and relevant to their situation.

Extracting Testimonial-Grade Quotes

Raw study responses are research data. Testimonial-grade quotes are a specific subset of that data, selected and presented for maximum advocacy impact. The extraction process is straightforward but benefits from a systematic approach.

Step 1: Identify High-Signal Responses

Not every response is quotable. The best advocacy quotes share three characteristics: they are specific (they describe a concrete experience or preference, not a vague sentiment), they are emotionally resonant (they convey frustration, enthusiasm, or surprise rather than neutral assessment), and they are self-contained (they make sense without requiring the reader to know the question that prompted them).

From a typical ten-persona study, expect to find between eight and fifteen responses that meet all three criteria. This is, it bears noting, roughly five times the yield of a traditional customer interview, which after editing and approval typically produces two to four usable quotes.

Step 2: Categorise by Use Case

Sort the quotable responses into categories that map to your content and sales needs. Common categories include: problem validation quotes (useful for early-stage awareness content), solution preference quotes (useful for consideration-stage content), competitive comparison quotes (useful for sales battlecards), and value articulation quotes (useful for ROI conversations and pricing pages).

Step 3: Attribute Appropriately

Synthetic persona quotes should be attributed to the persona's demographic profile, not to a fictional individual. "Marketing director, mid-market SaaS, 8 years' experience" is an honest attribution. "Jane Smith, VP Marketing at TechCo" is fabrication. The distinction matters for credibility. Research findings attributed to representative profiles carry methodological weight. Findings attributed to invented individuals carry none.

Step 4: Contextualise with Data

A quote is more powerful when accompanied by a data point. "I would switch providers tomorrow for real-time analytics" is good. "Seven of ten respondents prioritised real-time analytics, with one noting: 'I would switch providers tomorrow for this capability'" is better. The combination of quantitative evidence and qualitative colour is precisely what makes research-based advocacy more credible than traditional testimonials.
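The four steps above can be sketched as a simple filtering pipeline. The heuristics below (word-count bounds, emotion-marker keywords, category keyword lists) are illustrative stand-ins for editorial judgment -- assumptions made for this sketch, not a production-grade filter or a Ditto feature:

```python
# A sketch of Steps 1-3: filter responses for quotability, categorise them,
# and attribute each quote to a demographic profile (never a fabricated name).
# All keyword lists and thresholds here are illustrative assumptions.
EMOTION_MARKERS = {"frustrat", "love", "hate", "worry", "excit", "switch", "waste"}
CATEGORY_KEYWORDS = {
    "problem_validation": {"frustrat", "pain", "waste", "struggle"},
    "solution_preference": {"switch", "prefer", "choose", "adopt"},
    "competitive": {"alternative", "competitor", "instead", "current provider"},
    "value": {"save", "cost", "hours", "pay"},
}

def is_quotable(text):
    words = text.split()
    specific = 8 <= len(words) <= 40          # concrete, not a fragment or an essay
    resonant = any(m in text.lower() for m in EMOTION_MARKERS)
    self_contained = not text.lower().startswith(("yes", "no,", "it depends"))
    return specific and resonant and self_contained

def categorise(text):
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return category
    return "general"

def build_evidence(responses):
    """responses: list of dicts with 'text' and 'profile' keys."""
    return [
        {"quote": r["text"],
         "attribution": r["profile"],   # demographic profile, not an invented person
         "category": categorise(r["text"])}
        for r in responses if is_quotable(r["text"])
    ]

sample = [
    {"text": "I would switch providers tomorrow if someone offered real-time analytics instead of weekly reports.",
     "profile": "Marketing director, mid-market SaaS, 8 years' experience"},
    {"text": "Yes.", "profile": "Operations lead, retail"},
]
library = build_evidence(sample)
print(len(library))  # only the first response survives the filter
```

In practice a human should make the final selection (Step 1's three criteria are judgment calls), but a first-pass filter like this turns seventy raw responses into a short list worth reading closely, and Step 4's quote-plus-data pairing can then be done by hand.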

Publishing Research as Advocacy Content

The evidence extracted from Ditto studies has a natural publishing pathway that transforms internal research into external advocacy. The pathway follows the pattern established by the research-study content workflow: study findings become blog articles, which become social media content, which become sales collateral, which together build the evidence library that your sales and marketing teams draw from continuously.

Blog Articles as Evidence Repositories

Each research study can produce a 1,000- to 2,500-word blog article that presents the findings as market intelligence rather than product marketing. The article format matters. A post titled "Why 80% of Marketing Directors Want Real-Time Analytics" is market intelligence. A post titled "Why Our Product's Real-Time Analytics Are the Best" is marketing. The former attracts organic traffic, earns social shares, and builds the credibility that underpins all advocacy. The latter does none of these things.

The article structure that works best for advocacy purposes is: hook (the most surprising finding), methodology note (brief, establishing credibility), detailed findings (with quotable data and persona responses), implications (what this means for the reader's decisions), and a connection to your product (brief, positioned as one possible response to the findings rather than the inevitable conclusion). This structure respects the reader's intelligence while still serving commercial objectives.

Social Media as Evidence Distribution

The most quotable findings from each study are natural social media content. A data point with context -- "We asked 10 marketing directors about their analytics tools. 8 of them described their current reporting as 'retrospective at best.'" -- performs well on LinkedIn because it offers the reader genuine insight rather than a promotional claim. Over time, a consistent stream of research-backed social posts builds an audience that associates your brand with market understanding.

Sales Collateral as Evidence Application

The sales team's need for customer evidence is immediate and specific. They need proof points for particular objections, particular use cases, particular industries. The evidence library built from Ditto studies can be organised by these dimensions: evidence by objection (data and quotes that address each common sales objection), evidence by use case (findings that validate each primary use case), and evidence by persona (research relevant to each buyer type). This taxonomy turns the evidence flywheel into a practical sales tool.
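The three-way taxonomy can be sketched as a simple index over the evidence library. The field names (`objection`, `use_case`, `persona`) are assumed for illustration, not a Ditto schema:

```python
# A sketch of the three-way sales evidence index described above.
# Evidence items are plain dicts; the field names are illustrative assumptions.
from collections import defaultdict

def index_evidence(items):
    """Index evidence items by each sales-facing dimension."""
    index = {"by_objection": defaultdict(list),
             "by_use_case": defaultdict(list),
             "by_persona": defaultdict(list)}
    for item in items:
        # An item may carry any subset of the three dimensions.
        if item.get("objection"):
            index["by_objection"][item["objection"]].append(item)
        if item.get("use_case"):
            index["by_use_case"][item["use_case"]].append(item)
        if item.get("persona"):
            index["by_persona"][item["persona"]].append(item)
    return index

evidence = [
    {"quote": "I would worry about data security.",
     "objection": "data security", "persona": "IT director"},
    {"quote": "Eight of ten respondents prioritised real-time analytics.",
     "use_case": "reporting", "persona": "Marketing director"},
]
library = index_evidence(evidence)
```

A rep facing a data-security objection then looks up `library["by_objection"]["data security"]` and gets every relevant quote and data point in one place, which is what turns the flywheel into a practical sales tool.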

The Claude Code Workflow for Evidence Extraction

For teams using Claude Code, the evidence extraction process can be executed as a structured workflow that takes a completed Ditto study and produces a categorised evidence library.

Step 1: Run the advocacy-optimised study. Using the seven-question framework described above, create a Ditto study through Claude Code. Specify the target audience demographic, the competitive alternatives for Q5, and the capability description for Q4. The study runs in under an hour.

Step 2: Extract and categorise evidence. Once the study completes, instruct Claude Code to review all persona responses and extract evidence in six categories: quotable responses, data points, competitive preferences, problem validation, use cases, and objections. Each piece of evidence is tagged with its source question and persona profile.

Step 3: Generate advocacy assets. From the categorised evidence, Claude Code can produce multiple output formats in a single session: a blog article draft following the market intelligence structure, a set of social media posts highlighting key findings, a sales one-pager with the strongest proof points, and an updated evidence library document that adds the new findings to the existing collection.

Step 4: Publish and distribute. The blog article is published to Contentful with full SEO and GEO optimisation. The social posts are scheduled. The sales one-pager is shared with the revenue team. The evidence library is updated and made accessible to everyone who creates customer-facing content.

The entire workflow, from study design to published assets, takes approximately two hours. In that time, you have produced more deployable customer evidence than most companies generate in a quarter of traditional advocacy programme work. The evidence is fresh, it is categorised, and it is backed by research methodology rather than the goodwill of a customer who agreed to do you a favour.

When Evidence Compounds

The first study produces a useful set of proof points. The fifth study produces a library. The twentieth study produces something qualitatively different: a body of market research that positions your company as the definitive source of intelligence in your category.

This is the long-term advocacy play, and it is the one that traditional customer evidence programmes cannot replicate. You can ask a hundred customers for case studies. You cannot ask a hundred customers to participate in a longitudinal research programme that tracks how market sentiment evolves over time. But you can run a hundred Ditto studies, each building on the last, each adding data points and quotable insights to a growing evidence base.

The compound effect manifests in three ways.

First, credibility accumulation. Each published research finding adds to your company's perceived authority. Analysts begin to cite your research. Journalists reference your data points. Prospects arrive at the sales conversation already familiar with your market perspective. This is advocacy in its most potent form: third parties using your evidence to support their own arguments.

Second, longitudinal evidence. A single study shows a snapshot. A series of studies shows a trend. "In Q1, 60% of respondents prioritised automation. By Q3, that number had risen to 78%." Trend data is inherently more interesting than point-in-time data, and it is impossible to obtain from traditional case studies, which are, by their nature, single-moment narratives.

Third, category ownership. The company that consistently publishes the best research in a category eventually becomes synonymous with understanding that category. This is not brand awareness. It is something deeper: brand authority. When a prospect thinks about your category, they think about your research. When they need evidence for an internal business case, they cite your findings. When they recommend solutions to peers, they reference your content. You have become the source, not merely a vendor.

None of this requires a single customer to agree to a case study. None of it requires legal review cycles, brand approval processes, or the awkward phone call where you ask a busy executive to spend an afternoon being interviewed about their purchasing decisions. It requires only that you run good research, consistently, and treat the output as the advocacy asset it already is.

Getting Started

A customer advocacy programme built on research evidence requires three things: a clear picture of your target audience (so the synthetic panel matches the people you are trying to convince), a set of questions that pass the publishable question test (so the findings are externally interesting), and a commitment to publishing the results rather than filing them in an internal wiki where they will never be seen again.

The seven-question framework in this article is designed to be immediately usable through Ditto and Claude Code. Run the study, extract the evidence, publish the findings. Then do it again. And again. The flywheel does not spin itself, but it does gain momentum with each turn.

The companies that will win the evidence game over the next few years are not the ones with the largest customer advocacy budgets or the most persuasive customer success teams. They are the ones that recognised, earlier than their competitors, that the study is the evidence.

Phillip Gales is a co-founder of [Ditto](https://askditto.io) and therefore has a financial interest in the product discussed here, which readers should weigh accordingly.

Series: Product Marketing with Claude Code and Ditto

This article is part of a series exploring how AI agents are transforming product marketing workflows. Each article is paired with a hands-on Claude Code guide for implementation.

Frequently Asked Questions

What is research-sourced customer advocacy?

Research-sourced customer advocacy uses structured research studies to produce the same types of evidence as traditional case studies -- problem validation, value demonstration, and quotable language -- without requiring customer participation. By running synthetic research that validates your product's value proposition, the findings themselves constitute customer evidence with the credibility of methodology rather than curated endorsement.

How much do customer case studies cost?

A single customer case study costs between $5,000 and $15,000 when produced by an agency, or approximately 40 hours of internal time when produced in-house. The elapsed time from request to publication is typically 6-12 weeks, and the customer approval rate hovers around 20% in most B2B categories, meaning five customers must be asked for every one case study published.

What is the evidence flywheel in customer advocacy?

The evidence flywheel is a four-stage circular model: Research produces insights, insights become content (blog articles, social posts, sales collateral), content demonstrates market understanding which functions as advocacy, and advocacy generates inbound interest that informs the next round of research. Each cycle compounds the next, transforming advocacy from a scarce one-time extraction into a systematic capability.

Can synthetic research replace traditional customer testimonials?

Synthetic research does not replace traditional case studies entirely. A named customer at a recognisable company remains one of the most powerful trust signals in B2B sales. However, research-sourced evidence addresses the chronic evidence shortage most companies experience by producing proof on demand without customer participation, complementing the limited supply of traditional testimonials.

What questions should a customer advocacy study ask?

An effective customer advocacy study asks seven product-agnostic questions: problem severity (the biggest frustration with the category today), current solution assessment (what buyers use and where it falls short), ideal outcome (the magic-wand fix), feature reaction (response to the key capability, described without naming the provider), competitive preference (what would decide a choice between approaches), value quantification (time or money lost each month to the problem), and recommendation likelihood (what buyers would tell a colleague matters most).

