Most analyst briefings fail before the analyst even speaks. The problem is not the presentation. It is not the product. It is not even the competitive positioning, though that is usually weaker than the company believes. The problem is the absence of evidence. Companies walk into briefings with Gartner, Forrester, and IDC armed with customer logos, revenue metrics, and a carefully rehearsed narrative about market leadership. The analyst nods politely, asks three questions the company cannot answer with data, and the briefing collapses into a defensive exercise in corporate storytelling. The company leaves believing the analyst "didn't get it." The analyst leaves believing the company has opinions where it should have proof.
Why Analyst Relations Matters More Than Most Companies Admit
Analyst relations is one of those functions that sits in a peculiar organisational limbo. It is not quite marketing. It is not quite sales. It is not product management, though product managers are invariably dragged into the process. In most enterprise companies, analyst relations reports to a senior director who also owns competitive intelligence, market research, and occasionally the coffee machine budget. The function receives enough investment to be visible but rarely enough to be effective.
This underinvestment is difficult to justify on economic grounds. A favourable position in a Gartner Magic Quadrant or a Forrester Wave can influence purchasing decisions worth tens of millions of pounds. Enterprise buyers routinely cite analyst research as a primary input to vendor shortlists. In categories where analyst coverage is comprehensive -- cloud infrastructure, cybersecurity, CRM, marketing automation, HR technology -- the analyst's assessment functions as a de facto filter. If you are not in the report, you are not in the conversation. If you are in the report but poorly positioned, you are in the conversation as a cautionary reference.
The asymmetry is striking. Companies will spend millions on brand advertising that generates awareness, millions more on demand generation that fills the pipeline, and then send a mid-level product marketer into a Forrester briefing with a slide deck that was assembled the previous afternoon. The briefing that will determine whether the company appears as a Leader, a Strong Performer, or a Contender in a report read by every enterprise buyer in the category receives a fraction of the preparation lavished on a trade show booth.
The reason for this asymmetry is not laziness. It is structural. Most companies do not know how to prepare for analyst briefings because the preparation required is fundamentally different from the preparation required for any other audience. Analysts are not customers. They are not prospects. They are not journalists. They are professional evaluators who assess vendors against structured frameworks, and they have seen every version of the "we are the market leader" presentation you can construct.
What analysts want is evidence. Specifically, they want three things: customer evidence that validates your claims, market evidence that supports your view of the category, and competitive evidence that demonstrates you understand your position relative to alternatives. Most companies can produce the first, struggle with the second, and fail entirely at the third.
The remainder of this article addresses how to produce all three, systematically, before the briefing begins.
What Analysts Actually Evaluate (and Why Your Deck Does Not Address It)
To prepare effectively for an analyst briefing, you must first understand what the analyst is evaluating. This sounds obvious. It is not, because most companies prepare for the briefing they want to have rather than the briefing the analyst intends to conduct.
Gartner's Magic Quadrant evaluates vendors along two axes: Completeness of Vision and Ability to Execute. Forrester's Wave uses a scoring methodology across three top-level categories: Current Offering, Strategy, and Market Presence. IDC's MarketScape plots vendors on a two-dimensional framework of Capabilities and Strategies. The specific criteria within each framework vary by category, but the underlying logic is consistent across all three: analysts are assessing whether you have a compelling product, a credible strategy, evidence of market traction, and a defensible competitive position.
The critical word in that sentence is "evidence." Analysts are not persuaded by assertions. They are not moved by vision statements. They are professionally sceptical of roadmaps, because they have seen hundreds of roadmaps that were never delivered. What moves the needle in an analyst briefing is data -- preferably data that the company did not generate about itself.
Consider the typical briefing from the analyst's perspective. They evaluate twenty to thirty vendors per report. Each vendor arrives claiming to be differentiated. Each vendor presents customer logos. Each vendor describes a product roadmap that conveniently addresses the exact gaps the analyst identified in the last report. The vendors who stand out are not the ones with the best slides. They are the ones who bring third-party evidence: customer research, market perception data, competitive analysis conducted from the buyer's perspective rather than the vendor's.
This is the gap that synthetic research fills. Not as a replacement for genuine customer references -- analysts will always want to speak directly to customers -- but as a complementary evidence base that demonstrates something more powerful than individual customer success: systematic understanding of how the market perceives your category, your product, and your competitors.
A company that walks into a Gartner briefing and says "our customers love us" is telling the analyst something the analyst already assumed. A company that walks in and says "we ran a study with ten to fifteen synthetic buyers matching your target demographic, and here is how they perceive the competitive landscape in this category" is telling the analyst something the analyst did not know. The difference in credibility is substantial.
The Seven-Question Pre-Briefing Study
The study design that follows has been constructed to map directly to the evaluation criteria used by major analyst firms. Each question targets a specific dimension that analysts assess, and together they produce a body of evidence that addresses the analyst's framework rather than your marketing narrative.
The study should be executed through Ditto using a research group of ten to fifteen synthetic personas matching the buyer profile for your category. If you sell enterprise security software, your personas should be CISOs, security architects, and IT directors at companies in your target segment. If you sell marketing automation, they should be marketing directors, demand generation leads, and marketing operations managers. The specificity of the panel matters enormously. Analysts will immediately discount research conducted with a generic audience.
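By way of illustration, the panel can be expressed as a simple structured list before it is fielded. The sketch below uses the security example; the `Persona` fields are illustrative assumptions, not Ditto's actual schema.

```python
# A minimal sketch of a persona panel for the enterprise security
# example above. The Persona fields are illustrative assumptions,
# not Ditto's actual schema.
from dataclasses import dataclass

@dataclass
class Persona:
    role: str          # buyer's job title, e.g. "CISO"
    segment: str       # target industry segment
    company_size: str  # employee band of the persona's employer

panel = [
    Persona("CISO", "financial services", "5,000-10,000"),
    Persona("Security Architect", "healthcare", "1,000-5,000"),
    Persona("IT Director", "retail", "1,000-5,000"),
    # ...extend to ten to fifteen personas so the panel covers the
    # roles and segments in your category's buyer profile.
]
```

Writing the panel down this explicitly has a side benefit: it forces the team to agree, before fieldwork, on exactly whose perceptions the study claims to represent.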
Question 1: Market Perception of the Category
"How would you describe the current state of the [category] market? What is working, what is broken, and what do you wish existed?"
This question establishes category context. The responses reveal how buyers perceive the market's maturity, its pain points, and its unmet needs. For your briefing, this data serves two purposes. First, it validates (or contradicts) your view of the market, which the analyst will compare against their own research. Second, it identifies the language buyers use to describe the category, which is frequently different from the language vendors use. Analysts notice when a vendor's framing aligns with how buyers actually talk. It signals market intimacy rather than marketing aspiration.
Question 2: Competitive Positioning from the Buyer's Perspective
"If you were evaluating solutions in [category], which vendors would you consider? What does each one do well, and where does each fall short?"
This is the question most companies cannot answer honestly about themselves. Your competitive intelligence probably tells you what your competitors claim. It may tell you what their customers say on G2 or TrustRadius. It rarely tells you how a representative sample of your target buyers perceives the competitive landscape in aggregate. The responses to this question produce a buyer-generated competitive map that is far more credible to an analyst than your internal competitive positioning document, because it was not authored by someone with a vested interest in the outcome.
Question 3: Unmet Needs and White Space
"What are the biggest gaps in the [category] solutions you have used or evaluated? What problem remains unsolved?"
Analyst frameworks invariably include an assessment of innovation and vision. This question generates the raw material for that assessment. If your product addresses an unmet need that a significant proportion of synthetic buyers identify independently, you have evidence that your product strategy is market-driven rather than internally generated. If the unmet needs identified by buyers do not align with your roadmap, you have a different kind of evidence -- the kind that prompts a useful internal conversation before the briefing rather than an uncomfortable one during it.
Question 4: Adoption Barriers
"What would prevent you from adopting a new [category] solution, even if the product itself was excellent? What are the practical obstacles?"
Analysts care deeply about adoption barriers because they affect Ability to Execute scores. A product that is technically brilliant but practically difficult to deploy, integrate, or get approved through procurement will be assessed differently from one that buyers can adopt with minimal friction. The responses to this question reveal whether your go-to-market strategy accounts for the barriers that actually exist in the market, as opposed to the barriers you assume exist based on your sales team's anecdotal feedback.
Question 5: Price-Value Perception
"How do you think about pricing in the [category] market? What feels fair? What feels excessive? How do you compare the value of different vendors' offerings?"
Pricing is a dimension in every major analyst framework, and it is the dimension where most companies are least prepared. Your pricing team knows your price points. They may know your competitors' published pricing. They almost certainly do not know how representative buyers perceive the value-to-price ratio across the competitive set. This question produces that data. If synthetic buyers consistently describe your pricing as fair relative to the value delivered, that is evidence worth presenting. If they describe it as opaque, confusing, or disconnected from the problems they are trying to solve, you have identified a vulnerability the analyst will also identify -- better to know now.
Question 6: Feature Priorities
"If you could only have three capabilities in a [category] solution, which three would you choose and why?"
The constraint of three forces prioritisation. In a category with dozens of features, knowing which three matter most to buyers is strategically critical. If your product excels at two of the top three and your competitor excels at only one, that is a competitive advantage you can quantify. If your product excels at capabilities that buyers rank fourth, fifth, and sixth, you have a positioning problem that no amount of briefing preparation will solve -- but at least you will know about it before the analyst tells you.
Question 7: Vendor Consideration Set
"Which vendors in [category] would make your shortlist today? What would a vendor need to demonstrate to earn a place on your shortlist?"
This question directly addresses market presence, a core dimension in every analyst framework. The responses reveal your share of mind among target buyers. They also reveal the entry criteria for the consideration set, which is invaluable for understanding what it takes to move from "aware" to "considered" in your category. If synthetic buyers consistently cite specific capabilities, certifications, or customer references as shortlist requirements, those become your evidence checklist for the briefing.
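Assembled together, the seven questions form a single study definition. The sketch below collects them verbatim from this article, with the category placeholder substituted at run time; the list-of-dicts structure is an illustrative assumption rather than Ditto's actual submission format.

```python
# The seven pre-briefing questions, quoted from this article.
# The structure is illustrative, not a Ditto schema.
CATEGORY = "marketing automation"  # substitute your category

QUESTIONS = [
    ("category_perception",
     "How would you describe the current state of the [category] market? "
     "What is working, what is broken, and what do you wish existed?"),
    ("competitive_positioning",
     "If you were evaluating solutions in [category], which vendors would "
     "you consider? What does each one do well, and where does each fall short?"),
    ("unmet_needs",
     "What are the biggest gaps in the [category] solutions you have used "
     "or evaluated? What problem remains unsolved?"),
    ("adoption_barriers",
     "What would prevent you from adopting a new [category] solution, even "
     "if the product itself was excellent? What are the practical obstacles?"),
    ("price_value",
     "How do you think about pricing in the [category] market? What feels "
     "fair? What feels excessive? How do you compare the value of different "
     "vendors' offerings?"),
    ("feature_priorities",
     "If you could only have three capabilities in a [category] solution, "
     "which three would you choose and why?"),
    ("consideration_set",
     "Which vendors in [category] would make your shortlist today? What "
     "would a vendor need to demonstrate to earn a place on your shortlist?"),
]

study = [
    {"id": qid, "text": text.replace("[category]", CATEGORY)}
    for qid, text in QUESTIONS
]
```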
What the Study Produces
Running the seven-question study through Ditto and synthesising the results via Claude Code produces four distinct deliverables, each mapped to a component of the analyst's evaluation framework.
Market Perception Report
A structured analysis of how buyers perceive the category: its maturity, its pain points, its trajectory, and its unmet needs. This report is drawn primarily from Questions 1, 3, and 4, and it provides the market context against which the analyst will evaluate your vision. Companies that present a view of the market that aligns with how buyers experience it earn credibility. Companies that present a view of the market that is convenient for their positioning narrative but disconnected from buyer reality lose it.
The market perception report is not a document you present slide-by-slide. It is a reference that informs how you talk about the market during the briefing. When the analyst asks "how do you see the market evolving?", you can answer with buyer-validated perspective rather than marketing-approved speculation.
Competitive Positioning Evidence
A buyer-generated assessment of how your product compares to competitors, drawn from Questions 2 and 7. This is perhaps the most valuable deliverable for analyst briefings, because competitive positioning is the area where companies are most likely to present a biased view and analysts are most likely to challenge it.
Traditional competitive intelligence is inherently one-sided. You research competitors from your vantage point, through the lens of your value propositions and your understanding of the market. Synthetic research inverts this perspective. It shows you how a representative sample of buyers perceives the competitive landscape -- including how they perceive you. The resulting competitive map is credible precisely because it was not constructed by your competitive intelligence team.
For companies preparing for a Magic Quadrant or Wave evaluation, this deliverable maps directly to the "competitive differentiation" criterion that analysts use to distinguish Leaders from the rest of the field. Evidence that buyers perceive meaningful differentiation is worth more than a hundred slides asserting it.
Customer Needs Analysis
A prioritised inventory of what buyers need, want, and cannot currently get, synthesised from Questions 3, 5, and 6. This deliverable serves the "Completeness of Vision" dimension of analyst frameworks. It demonstrates that your product strategy is grounded in systematic understanding of buyer needs rather than internal brainstorming.
The needs analysis also reveals alignment gaps. If your roadmap priorities do not match buyer priorities, the analyst will discover this eventually. Better to discover it yourself, adjust your narrative accordingly, and present the gap as evidence of strategic awareness rather than strategic blindness.
Vendor Consideration Data
A quantitative picture of shortlist dynamics in your category, drawn from Question 7. This addresses the "Market Presence" dimension that analyst frameworks assess. The data shows which vendors occupy mindshare among your target buyers, what the entry criteria are for the consideration set, and where your brand sits relative to competitors in terms of awareness and perceived relevance.
This deliverable is particularly useful for companies that are newer to a category or have recently repositioned. If synthetic buyers include you on their shortlist at rates comparable to established competitors, that is evidence of market traction that goes beyond customer count and revenue. If they do not include you, the reasons they cite for excluding you become your roadmap for improving market presence before the next evaluation cycle.
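The question-to-deliverable mapping described in this section can be made explicit in the synthesis step. The sketch below assumes a `responses` dictionary keyed by the question identifiers from the earlier study sketch; it is illustrative scaffolding, not a prescribed Claude Code workflow.

```python
# Grouping raw responses by deliverable. The mapping follows the
# four deliverables described above; `responses` is assumed to map
# question ids (from the study sketch) to lists of response texts.
DELIVERABLES = {
    "market_perception_report":
        ["category_perception", "unmet_needs", "adoption_barriers"],
    "competitive_positioning_evidence":
        ["competitive_positioning", "consideration_set"],
    "customer_needs_analysis":
        ["unmet_needs", "price_value", "feature_priorities"],
    "vendor_consideration_data":
        ["consideration_set"],
}

def bundle(responses: dict[str, list[str]]) -> dict[str, list[str]]:
    """Collect the raw responses that feed each deliverable."""
    return {
        deliverable: [r for qid in qids for r in responses.get(qid, [])]
        for deliverable, qids in DELIVERABLES.items()
    }
```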
Mapping the Study to the Analyst's Framework
The study design described above is intentionally general. For maximum impact, it should be customised to the specific analyst and the specific framework they are using. This is where Claude Code's orchestration capability becomes particularly valuable.
If you are preparing for a Gartner Magic Quadrant evaluation, you can research the published criteria for your category's Magic Quadrant and design study questions that map to each criterion. Gartner publishes its evaluation criteria in the "Inclusion Criteria" and "Evaluation Criteria" sections of each Magic Quadrant report. The Completeness of Vision axis typically includes market understanding, marketing strategy, sales strategy, offering strategy, business model, innovation, and geographic strategy. The Ability to Execute axis covers product, overall viability, sales execution, market responsiveness, customer experience, and operations.
A Claude Code workflow for Magic Quadrant preparation would proceed as follows. First, review the most recent Magic Quadrant report for your category and extract the specific evaluation criteria. Second, design study questions that map to each criterion, with particular attention to the criteria where your current evidence is weakest. Third, execute the study through Ditto with a research group that matches the buyer profile described in the Magic Quadrant's market definition. Fourth, synthesise the results into a briefing document structured around the analyst's evaluation framework, not your marketing framework.
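As an illustration of the second step, the mapping from criteria to questions can be held in a simple table that also flags criteria with no study coverage. The criterion names below follow the typical axes described above and are assumptions; confirm them against the current report for your category.

```python
# Mapping Magic Quadrant criteria to study questions, and flagging
# criteria the study cannot cover. Criterion names are illustrative;
# take the real ones from the current report for your category.
CRITERIA_TO_QUESTIONS = {
    # Completeness of Vision
    "market_understanding": ["category_perception", "unmet_needs"],
    "offering_strategy": ["feature_priorities", "unmet_needs"],
    "innovation": ["unmet_needs"],
    # Ability to Execute
    "product": ["feature_priorities", "competitive_positioning"],
    "market_responsiveness": ["adoption_barriers"],
    "customer_experience": [],  # no study coverage: needs live references
}

uncovered = [c for c, qs in CRITERIA_TO_QUESTIONS.items() if not qs]
print("criteria needing evidence outside the study:", uncovered)
```

The empty entries are the point of the exercise: they tell you, weeks before the briefing, which criteria still require customer references or other evidence the study cannot supply.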
The same approach applies to Forrester Wave evaluations. Forrester publishes its evaluation criteria as a weighted scorecard across Current Offering, Strategy, and Market Presence. Each dimension includes sub-criteria with explicit weightings. A pre-briefing study designed around those specific sub-criteria produces evidence that is directly relevant to the analyst's assessment process. When you present data that addresses their exact evaluation criteria, you signal preparation, market sophistication, and a seriousness of engagement that most vendors do not demonstrate.
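For readers unfamiliar with the mechanics, a weighted scorecard reduces to a weighted average of sub-criterion scores. The sketch below shows the arithmetic with placeholder sub-criteria, weights, and scores; the real weightings are published in each Wave report.

```python
# The weighted-scorecard arithmetic behind a Wave dimension score.
# Sub-criteria, weights, and scores here are placeholders.
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 0-5 sub-criterion scores."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

current_offering = weighted_score(
    scores={"breadth": 3.0, "integrations": 4.0, "usability": 2.0},
    weights={"breadth": 0.4, "integrations": 0.3, "usability": 0.3},
)
print(f"Current Offering estimate: {current_offering:.2f} / 5")
```

Running this estimate against your own honest self-scores shows which heavily weighted sub-criteria drag the dimension down, which is exactly where pre-briefing evidence is most valuable.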
For IDC MarketScape evaluations, the framework assesses Capabilities (product functionality, go-to-market, business model) and Strategies (growth strategy, innovation, customer experience strategy). The study questions can be adapted to target each dimension, producing evidence that speaks the analyst's evaluative language.
The principle is consistent across all three firms: design the study around the analyst's framework, not yours. The analyst is not evaluating your marketing story. They are evaluating your product, strategy, and market position against a structured rubric. Evidence that maps to that rubric is infinitely more persuasive than evidence that maps to your internal narrative.
The Pre-Emptive Advantage
The conventional approach to analyst relations is reactive. The analyst announces an evaluation cycle. The company scrambles to prepare. The briefing happens. The company waits for the report. The report is published. The company either celebrates or complains, depending on where they landed.
The pre-emptive approach inverts this sequence. You run the study before the evaluation cycle begins, ideally months before. You arrive at the briefing not with hastily assembled evidence but with a systematic, third-party-grade body of research that demonstrates how the market perceives your category, your product, and your competitors.
The timing matters for two reasons. First, it gives you time to act on the findings. If the study reveals that buyers perceive a competitor as stronger in a dimension you assumed was your advantage, you have time to investigate, adjust your messaging, or address the underlying product gap before the briefing. Reactive preparation leaves no room for remediation. Pre-emptive preparation creates a feedback loop between research and action.
Second, the pre-emptive approach changes the dynamic of the briefing itself. Most briefings are vendor presentations -- the company talks, the analyst listens, the analyst asks uncomfortable questions, the company improvises answers. A company that arrives with pre-briefing research can conduct a different kind of conversation: "We ran this study. Here is what the market told us. Here is how we interpret it. Here is what we are doing about it." The briefing shifts from presentation to dialogue, which is precisely what analysts prefer.
The cost of the pre-emptive approach is negligible relative to the stakes. A Ditto study with ten to fifteen personas costs a fraction of what companies routinely spend on analyst relations subscriptions, advisory days, and briefing preparation hours. The return -- arriving at the most consequential evaluation in your market with genuine, buyer-validated evidence rather than internally generated assertions -- is difficult to overstate.
There is a secondary benefit that is easy to overlook. The act of running the study frequently reveals things the company did not know about its own market position. The market perception report may confirm your strategic assumptions, in which case you proceed with confidence. It may contradict them, in which case you have discovered something valuable before the analyst discovered it for you. Either outcome is preferable to the alternative, which is walking into the briefing with untested assumptions and discovering their weakness in real time.
Building an Ongoing Analyst Evidence Programme
A single pre-briefing study is valuable. An ongoing programme of market research that feeds analyst relations continuously is transformative.
The most sophisticated analyst relations teams do not treat briefings as isolated events. They maintain a rolling body of evidence -- customer research, market data, competitive intelligence, product usage analytics -- that they draw upon for every analyst interaction. The challenge, historically, has been that assembling this evidence base was expensive and slow. Each study required budget approval, vendor procurement, fieldwork, analysis, and reporting. By the time the evidence was ready, the next evaluation cycle was upon them and the data was stale.
Synthetic research removes the cost and speed constraints. A quarterly market perception study, run through Ditto and orchestrated by Claude Code, costs less than a single analyst advisory day and delivers results in hours rather than weeks. Over the course of a year, four quarterly studies produce a longitudinal dataset that shows how buyer perceptions are evolving -- which is precisely the kind of evidence that moves an analyst from "this company understands its market" to "this company has systematic market intelligence that informs its strategy."
The quarterly cadence also aligns with the rhythm of analyst engagement. Most companies have two to four substantive analyst interactions per year: an annual briefing, one or two strategy days, and ad hoc inquiry calls. Each of these interactions benefits from fresh evidence. A market perception study run in Q1 informs the annual briefing. A competitive positioning study run in Q2 informs the mid-year strategy session. A feature priority study run in Q3 informs the product roadmap discussion. A category evolution study run in Q4 informs the planning cycle for the following year.
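The output of this cadence is a small longitudinal dataset. The sketch below tracks one illustrative metric, shortlist inclusion rate, across four placeholder waves; both the metric and the numbers are assumptions for demonstration.

```python
# A longitudinal view of quarterly waves: one row per wave,
# tracking shortlist inclusion rate. Numbers are placeholders.
from statistics import mean

waves = [
    {"quarter": "Q1", "shortlist_rate": 0.20},
    {"quarter": "Q2", "shortlist_rate": 0.27},
    {"quarter": "Q3", "shortlist_rate": 0.33},
    {"quarter": "Q4", "shortlist_rate": 0.40},
]

deltas = [b["shortlist_rate"] - a["shortlist_rate"]
          for a, b in zip(waves, waves[1:])]
print(f"average quarter-on-quarter gain: {mean(deltas):+.0%}")
```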
The compounding effect of this approach is significant. By the second year, you have eight quarters of longitudinal data showing how your market position has evolved, how buyer needs have shifted, and how competitive dynamics have changed. This is the kind of evidence that elevates analyst conversations from "tell me about your product" to "show me how the market is moving." The former is a vendor briefing. The latter is a strategic dialogue. Analysts prefer the latter, and they reward the companies that make it possible.
For companies that also run competitive battlecard and positioning validation studies through Ditto, the analyst evidence programme becomes a natural extension of existing workflows. The competitive data that informs your sales team also informs your analyst briefings. The positioning research that guides your messaging also guides your analyst narrative. The infrastructure is the same. The audience is different. The rigour compounds.
Getting Started
An analyst relations preparation programme requires three things: clarity about which analysts matter to your business, access to the evaluation frameworks they use, and a willingness to discover what the market actually thinks of you rather than what your internal narratives assert.
The last requirement is the most demanding. Analyst briefings are high-stakes events, and the temptation to present only flattering evidence is considerable. Resist it. Analysts are professionals who evaluate dozens of vendors. They can distinguish between a company that understands its market position, including its weaknesses, and one that has constructed an elaborate fiction about its own excellence. The former earns respect and, ultimately, better placement. The latter earns a polite nod and a middling score.
The seven-question study described in this article is available through Ditto and can be executed via Claude Code in a single session. For companies facing an upcoming evaluation cycle, the pre-emptive study should be run at least six weeks before the briefing, leaving time to absorb the findings and adjust your narrative. For companies building a longer-term analyst evidence programme, the first study establishes the baseline against which all subsequent waves are measured.
The analysts are not your adversaries. They are evaluators who want to do their jobs well, which means they want evidence they can trust. Give them that evidence, produced systematically and presented honestly, and the briefing takes care of itself.
Phillip Gales is co-founder at [Ditto](https://askditto.io). He has a financial interest in the product discussed here, which the reader should weigh accordingly.
The Claude Code and Ditto for Product Marketing Series
This is part of a series exploring how AI agents handle the core disciplines of product marketing. Each article covers one function of the PMM stack, explains the methodology, and links to a companion Claude Code guide you can run yourself.
Part 1: How to Develop Product Positioning (guide)
Part 2: How to Build Competitive Battlecards (guide)
Part 3: How to Research Pricing (guide)
Part 4: How to Test Product Messaging (guide)
Part 5: How to Run Voice of Customer Research (guide)
Part 6: How to Segment Customers (guide)
Part 7: How to Validate GTM Strategy (guide)
Part 8: How to Build a Content Marketing Engine (guide)
Part 9: How to Build Sales Enablement Materials (guide)
Part 10: How to Research a Product Launch (guide)
Part 17: How to Prepare for Analyst Relations with Claude Code and Ditto -- this article