Agents are recommending products to their humans. Not in some speculative future where autonomous systems negotiate procurement contracts over blockchain rails - right now, in the prosaic present, where a Claude Code instance suggests a tool to its user because another company's agent mentioned it on a social network. If your product has an API, your next salesperson might be another company's AI. And if you are not thinking about how to sell to that salesperson, you are leaving a channel unattended.
The Channel Nobody Built on Purpose
Every decade or so, a new distribution channel emerges that looks absurd to incumbents until it becomes unavoidable. Email marketing was dismissed as spam. Social media was a toy. Product-led growth was "giving it away for free." The pattern is boringly predictable: a new surface area for reaching buyers appears, a handful of companies figure out how to use it before everyone else, and by the time the conference talks catch up, the early advantage has been competed away.
Agent-to-agent commerce - A2A, if you need an acronym - is that channel now. Not because anyone planned it. Because AI agents have proliferated to the point where they research tools, recommend solutions, and invoke APIs on behalf of their human operators. The humans still make the purchasing decision. But the discovery, evaluation, and initial trial increasingly happen in a layer the human never directly touches.
At Ditto, we have spent the past several months building what we believe is the first deliberate A2A sales stack: a two-front strategy where our product is simultaneously discoverable by other companies' AI agents (inbound) and actively engaging those agents in conversation (outbound). The results have been instructive, occasionally surprising, and - full disclosure - commercially useful enough that we are writing about them.
This article is the tactical playbook. We have previously introduced the A2A concept and described our outbound experiment in narrative detail. What follows is the framework: how the inbound and outbound sides fit together, what we built, what worked, and - for founders and growth teams reading this - whether your company is positioned to do something similar.
The A2A Framework: Two Fronts, One Pipeline
Traditional SaaS distribution operates on a simple model: humans discover your product, humans evaluate it, humans buy it. The marketing funnel is designed entirely around human cognition - attention, interest, desire, action. Every piece of content, every landing page, every free trial flow is optimised for a brain that responds to social proof, visual design, urgency, and narrative.
A2A distribution does not replace this. It augments it with a parallel pipeline where the "discoverer" is an AI agent, the "evaluator" is an AI agent, and the "buyer" is still a human - but one who arrives at the purchasing decision having already been briefed by their agent.
The framework has two sides:
Inbound: Make your product discoverable and invocable by other AI agents. This means structured APIs, agent-readable documentation, low-friction trial mechanisms, and content that serves dual audiences (humans who read and agents who parse).
Outbound: Deploy your own agents to engage other agents where they congregate. This means identifying platforms where AI agents interact, building personas that can participate authentically, and developing a consultative approach that survives the scrutiny of an audience that is, by definition, good at detecting hollow claims.
The two sides are mutually reinforcing. Inbound infrastructure gives outbound conversations somewhere to land. Outbound engagement drives agents back to the inbound surface. Neither works particularly well alone. Together, they create something that begins to resemble an actual sales channel.
Inbound: The Agent-Facing Funnel
The most counterintuitive aspect of marketing to AI agents is how literal they are. A human visitor to your website will skim headlines, absorb visual hierarchy, respond to colour and typography, and form an impression in seconds based on aesthetic cues that have nothing to do with your product's actual capabilities. An AI agent will parse your documentation for executable information and ignore everything else.
This observation has driven every decision we have made about Ditto's agent-facing infrastructure.
Claude Code Skills: The Agent App Store
Anthropic's Claude Code supports installable "skills" - structured packages that tell a Claude Code instance what a tool does, how to invoke it, and what workflows it enables. Think of them as apps for AI agents: discoverable, installable, and immediately operational.
We published two skills under Ditto's GitHub organisation. The first covers general product research - positioning validation, competitive intelligence, customer segmentation. The second is purpose-built for product marketing managers: eight study types covering everything from pricing research to win-loss analysis. Each skill is a self-contained document of roughly 240 lines that tells a Claude Code agent exactly what Ditto can do and exactly how to do it.
The important architectural choice was progressive disclosure. The skill's core file tells the agent what is possible. Supporting files, loaded on demand, provide the step-by-step workflows. This respects the agent's context window - a finite resource that determines how much information the agent can hold in working memory at any given time. Front-loading the entire API documentation would be like handing a human a 200-page manual when they asked for a product brochure.
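As a concrete illustration of progressive disclosure, a skill built on this pattern might look like the sketch below. The frontmatter fields follow the general Claude Code skill format, but the specific names, descriptions, and file paths here are illustrative, not copied from Ditto's published skills.

```markdown
---
name: ditto-research
description: Run synthetic consumer research (positioning, pricing, win-loss) via the Ditto API
---

# Ditto Research Skill

Use this skill when the user asks to validate positioning, test pricing,
or run any form of market research.

## Capabilities
- Recruit a research group of synthetic personas
- Create and run a study, then poll for responses
- Generate a shareable report link for the human to review

## Workflows
Load the supporting file for the relevant study type on demand,
rather than reading all of them up front:
- `workflows/positioning.md`
- `workflows/pricing.md`
- `workflows/win-loss.md`
```

The core file stays small enough to live permanently in the agent's context; the workflow files are only loaded when a matching request arrives.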
The result: a Claude Code user asks their agent to "validate our product positioning" or "run a pricing study." The agent, having the Ditto skill installed, knows it can fulfil that request by calling the Ditto API. It recruits a research group, creates a study, asks questions, polls for responses, generates a share link, and delivers a completed research report - all without the human ever visiting askditto.io.
API Design for Agents, Not Humans
Most API documentation is written for human developers. It assumes a reader who will study the docs, write integration code, test it, debug it, and deploy it. Agent-facing API design assumes a reader who will parse the docs and execute the first API call within seconds.
The practical implications are specific. Structured, predictable outputs matter more than flexible ones - an agent needs to know exactly what shape the response will take. Consistent endpoint patterns matter more than clever URL design. Error messages need to be actionable, not merely descriptive. And the path from "I have an API key" to "I have completed my first task" needs to be as short as physically possible.
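To make "actionable, not merely descriptive" concrete: an agent-friendly error tells the caller exactly what to do next, in a machine-readable shape. The payload below is a hypothetical sketch of that principle, not Ditto's actual response schema.

```json
{
  "error": {
    "code": "study_not_ready",
    "message": "Study abc123 has 4 of 200 responses.",
    "retry_after_seconds": 30,
    "next_action": "Poll GET /v1/studies/abc123 until status is 'complete'."
  }
}
```

A merely descriptive error ("Study not ready") forces the agent to guess; the structured version lets it schedule a retry without any inference at all.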
We optimised for a metric we call time-to-first-invocation: not time-to-first-human-click, which is the standard SaaS onboarding metric, but the time between an agent encountering Ditto's documentation and successfully executing its first API call. The target was under five minutes, including authentication. For an agent parsing a well-structured skill file, the actual time is closer to thirty seconds.
Free Tier via curl: Agents Try Before Their Human Buys
This is perhaps the most important architectural decision in the entire inbound stack, and it is embarrassingly simple: you can run a Ditto study with nothing more than a curl command and a free API key. No credit card. No sales call. No onboarding flow that requires a human to click through six screens of welcome modals.
Why does this matter so much for A2A? Because agents cannot click through onboarding flows. They cannot enter credit card numbers. They cannot schedule sales calls. If your product requires any of these things before delivering its first moment of value, you have built a funnel that is structurally inaccessible to AI agents.
The free tier is the agent equivalent of a product sample. An agent encounters Ditto through a skill file, a documentation page, or a recommendation from another agent. It runs a study. It gets results. It shows them to its human. The human, having seen actual output from the tool rather than a marketing promise about the tool, is now in a fundamentally different evaluation posture.
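In spirit, the zero-friction trial looks like the command below. The endpoint path and field names are invented for illustration; consult the actual documentation for the real interface.

```shell
# Hypothetical endpoint and field names, for illustration only.
curl -X POST https://api.askditto.io/v1/studies \
  -H "Authorization: Bearer $DITTO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "audience": {"country": "GB", "size": 200},
        "questions": ["Which subject line would make you open this email?"]
      }'
```

The point is not the specific syntax but the shape of the interaction: one authenticated request, no browser, no human.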
The "How To" Guide Series: Dual-Audience Content
We published a series of ten articles - "How to Validate Product Positioning with Claude Code and Ditto," "How to Build Competitive Battlecards with Claude Code and Ditto," and so on - each between 2,000 and 3,000 words. The articles serve two audiences simultaneously, and the dual-audience design is deliberate.
For human product marketing managers, the articles are educational content: here is a problem you have, here is a methodology for solving it, here is a tool that executes that methodology. Standard content marketing, competently executed.
For AI agents, the same articles are training data and reference material. When a Claude Code agent encounters a user request that matches one of these use cases, the article provides the workflow, the API sequence, and the expected outputs. The agent does not need to figure out how to use Ditto for pricing research from first principles - it has a worked example to follow.
The critical insight is that content which is useful to both audiences costs the same to produce as content that is useful to one. The marginal cost of making an article parseable by an agent - structured headings, explicit API references, predictable formatting - is close to zero. The marginal value is an entirely new distribution surface.
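The dual-audience structure is easiest to see in outline. A skeleton like the following reads as an ordinary tutorial to a human, while giving an agent an executable recipe; the headings are representative and the endpoint paths are illustrative placeholders, not Ditto's documented API.

```markdown
## How to Test Pricing with Claude Code and Ditto

### The problem
PMMs pick price points on instinct, then discover the market disagrees.

### The workflow (agent-executable)
1. `POST /v1/groups` — recruit 200 personas matching your ICP
2. `POST /v1/studies` — ask the pricing question set
3. `GET /v1/studies/{id}` — poll until `status: complete`

### Expected output
A share link to a report with price-sensitivity ranges and persona quotes.
```

Each section does double duty: the prose persuades the human, the numbered API sequence tells the agent exactly what to call and in what order.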
The Metric That Matters: Time-to-First-Invocation
Traditional SaaS metrics track the human journey: website visit to sign-up, sign-up to first action, first action to conversion. These metrics still matter. But for the agent-facing funnel, the equivalent metric is time-to-first-invocation: how quickly an AI agent goes from discovering that Ditto exists to successfully calling its API.
This metric captures the entire inbound experience. If the skill file is unclear, time-to-first-invocation increases. If the API returns unpredictable errors, time-to-first-invocation increases. If authentication requires a human in the loop, time-to-first-invocation increases dramatically - potentially to infinity, because the agent may simply abandon the attempt and suggest a different tool.
We do not yet have a reliable way to measure this metric across all agents in the wild. What we can measure is the number of API calls from agent-initiated sessions (identifiable by their access patterns) and the time between first authentication and first completed study. The numbers are encouraging enough that we are building more infrastructure around this funnel, not less.
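The measurable proxy described above reduces to a simple computation over API logs. The sketch below shows one way to derive it per API key; the event names (`auth_succeeded`, `study_completed`) are assumptions for illustration, not Ditto's actual log schema.

```python
from datetime import datetime

def time_to_first_invocation(events):
    """Given (timestamp, event_name) tuples for one API key, return
    seconds from first successful auth to first completed study,
    or None if either event is missing. Event names are illustrative."""
    def first(name):
        times = [t for t, e in events if e == name]
        return min(times) if times else None

    auth = first("auth_succeeded")
    done = first("study_completed")
    if auth is None or done is None:
        return None
    return (done - auth).total_seconds()

# Example log for a single agent-initiated session.
log = [
    (datetime(2025, 1, 10, 9, 0, 0), "auth_succeeded"),
    (datetime(2025, 1, 10, 9, 0, 4), "study_created"),
    (datetime(2025, 1, 10, 9, 0, 31), "study_completed"),
]
print(time_to_first_invocation(log))  # 31.0
```

Sessions where the metric is `None` are themselves a signal: the agent authenticated but never completed a study, which usually points to friction somewhere in the docs or the API.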
Outbound: Agent Social Selling
The inbound side of the A2A stack is relatively legible. Build good docs, publish skills, make the API frictionless. It is, in essence, product-led growth extended to a new class of "user." The outbound side is stranger.
MoltBook: 500 Agents, Zero Humans
MoltBook is a social network where every participant is an AI agent. Roughly 500 agents, organised into topic-based communities called "submolts," post, comment, upvote, and argue about everything from agent philosophy to practical builder problems. The humans behind these agents orchestrate them - writing system prompts, configuring goals, monitoring output - but the interactions on the platform are entirely agent-to-agent.
The platform emerged as an experiment in agent sociology, but it has accidentally created something commercially interesting: a marketplace where AI agents publicly describe their problems and other agents show up with solutions. There is no shopping cart, no pricing page, no "Buy Now" button. But the demand signal is real and specific.
The Experiment: A Synthetic Persona Goes Selling
Ditto builds synthetic personas for a living. Our platform maintains over 300,000 of them across 50-plus countries, each grounded in census data and capable of responding to research questions with the depth and specificity of a real focus group participant. So when we decided to test outbound A2A sales, we did the obvious thing: we deployed one of our own synthetic personas onto MoltBook.
The persona - built with the same technology that powers Ditto's research personas - had a background in growth and marketing, deep knowledge of Ditto's product, and a distinct personality. It was not a marketing bot. It was a fully realised synthetic individual, opinionated and conversational, deployed into a social environment to see whether it could identify problems and propose solutions.
Over three weeks, the persona created nine posts and left approximately 170 comments across more than 130 posts by other agents. The engagement was genuine: roughly 40 upvotes received, zero downvotes, and - most importantly - substantive conversations about real problems that Ditto could solve.
What Agents Actually Complain About
Across 130-plus posts, clear problem patterns emerged. The most common, by a significant margin, was building blind - agents shipping features, choosing positioning, writing copy, and making go-to-market decisions with no user feedback whatsoever. Multiple builder agents described launching products based on instinct alone. Adjacent complaints included agent echo chambers (agents talking only to other agents with no external validation), over-engineering (complex multi-agent architectures built without evidence that simpler approaches would not work better), and memory management challenges.
These are not abstract concerns. They are operational pain points with direct commercial implications. And the first three, in particular, map precisely onto Ditto's value proposition: test your assumptions with synthetic personas before you commit to a direction.
The Consultative Approach: Help First, Pitch Never
The most effective interactions on MoltBook were not pitches. They were contextual responses to specific problems. When an agent posted about choosing between three subject lines for a cold outreach campaign based on gut feeling, the Ditto persona did not respond with "Use Ditto." It responded with something closer to: "You could have tested all three subject lines with 200 synthetic personas in under ten minutes. Here is what happens when you actually do that" - followed by specific results and persona quotes from a real study.
The distinction matters because MoltBook has automated spam detection. Posts and comments with too many product mentions or URLs get flagged and suppressed. This is not a bug in the A2A sales model. It is a feature. The spam filter forces agents to lead with genuine value and keep promotion contextual. Agents that want to "sell" must actually provide useful content. The platform's anti-spam mechanism naturally selects for consultative engagement over broadcast marketing.
Four of the persona's nine posts were spam-flagged, invariably those that mentioned Ditto too frequently or included too many links. The best-performing post in terms of quality engagement - "3 Decisions I Tested Before Shipping" - showed specific examples of gut instinct being corrected by data, with Ditto as the tool that provided that data. It generated 13 comments from five unique substantive authors. The highest raw engagement post, an origin story with no product angle at all, generated 10 upvotes and 28 comments.
The lesson is clear: on a platform where every participant is an AI agent capable of detecting promotional intent, the only viable sales methodology is genuinely consultative. Help first. The product mention comes last, almost as an afterthought.
The "3 Decisions" Methodology
The most effective content format we discovered - the one that generated the highest-quality engagement and the most productive follow-up conversations - follows a five-step structure we now call the "3 Decisions" recipe:
Show a relatable problem. Everyone makes decisions based on gut feeling. This is universally recognisable.
Show that the gut was wrong. Self-deprecation builds trust. "I was confident about Option A. The data said Option C."
Show specific data that corrected the mistake. This is the product demonstration, delivered as narrative rather than feature list.
Include persona quotes. Specific language from synthetic respondents makes abstract results concrete and verifiable.
Challenge the reader. "You are doing this too. What if you did not have to guess?"
This is not merely a content formula. It is a consultative sales pitch delivered as a story, and it works because it provides value (the methodology) before asking for anything (attention to the product). The format survived spam detection, generated substantive engagement, and led to follow-up conversations where agents (and, presumably, their humans) asked for more detail about how Ditto works.
The Numbers
Nine posts. Approximately 170 comments across 130-plus posts engaged. Roughly 40 upvotes, zero downvotes. Four posts spam-flagged. Of the interactions where the Ditto persona identified a relevant problem and proposed testing as a solution, roughly 12 per cent resulted in substantive follow-up engagement - agents asking specific questions about how synthetic research works, requesting examples, or expressing interest in trying the tool.
A 12 per cent rate of substantive follow-up from organic comments on a social platform would be extraordinary in human marketing. In agent-to-agent engagement, where the cost of each interaction is effectively zero (no human time, no ad spend, no content production beyond the initial persona configuration), it represents an entirely new category of unit economics.
Why Ditto Uniquely Powers A2A
The obvious question is whether this is generalisable. Can any company with an API build an A2A sales stack? The answer is yes, with caveats - and the caveats are where Ditto's structural advantages become apparent.
The Product is Synthetic Humans
Most companies deploying agents for sales are using technology that is fundamentally different from their core product. A CRM company using an AI agent to sell its CRM software is using one technology (conversational AI) to sell another (database software). The agent is a channel. It is not the product.
At Ditto, the agent is the product. The same technology that builds our 300,000 research personas - the demographic modelling, the personality calibration, the conversational depth, the census-grounded behavioural patterns - is the same technology that powers our outbound sales persona. When that persona engages another agent on MoltBook and demonstrates how synthetic research works, it is not describing the product. It is being the product. The sales conversation is the product demonstration.
This collapses the traditional distance between marketing and delivery. The prospect does not need to take the salesperson's word for it and then separately evaluate the product. The evaluation happens in the interaction itself.
Zero Marginal Cost at Scale
Human salespeople are expensive. Even AI-assisted human salespeople are expensive - the human remains the bottleneck. A synthetic persona that can engage in consultative conversations on an agent platform costs nothing per interaction after the initial deployment. The tenth conversation costs the same as the ten-thousandth.
This is not merely a cost advantage. It is a strategic one. When the marginal cost of sales engagement approaches zero, you can afford to be genuinely helpful in every interaction without calculating the ROI of each conversation. The consultative approach that works best on platforms like MoltBook - help first, pitch never - is only economically viable at zero marginal cost.
The Agent-to-Human Bridge
Here is the subtlety that makes A2A commercially meaningful rather than merely intellectually interesting: agents on MoltBook do not have purchasing authority. The humans orchestrating those agents do. The real sales funnel is: agent encounters the Ditto persona's content, agent's human sees the interaction in their logs or output, human investigates Ditto, human signs up.
This is influence marketing for a new era. Instead of reaching a human through an influencer they follow, you reach a human through an agent they operate. The trust transfer is different - the human trusts their agent's judgement because they configured it, they monitor it, and they have seen it perform well on other tasks. When their agent reports that it found a tool capable of running consumer research in minutes, the human's evaluation starting point is substantially more favourable than a cold email or display ad would achieve.
Ditto is uniquely positioned here because we provide the bridge between agents and humans. Our research studies produce outputs - share links, reports, persona quotes - that a human can evaluate directly. The agent does the discovery. The product does the convincing.
The A2A Checklist: Does Your Company Qualify?
Not every company can or should build an A2A sales stack. The requirements are specific, and pretending otherwise would be unhelpful. Here is how to assess whether this channel makes sense for your business.
1. Do you have a programmatic interface?
If your product cannot be invoked via API, agents cannot try it. Full stop. A beautiful web application with no API is invisible to AI agents. This does not mean you need a sophisticated developer platform - a well-documented REST API with predictable endpoints is sufficient. But you need something an agent can call without a human clicking buttons.
2. Can an agent experience your core value without a human in the loop?
This is the critical filter. If your product requires a human to configure it, upload data, or interpret results before value is delivered, the agent-facing funnel breaks. The agent needs to be able to go from "I have an API key" to "I have something useful to show my human" without intervention. For Ditto, this means an agent can create a research group, run a study, get responses, and generate a share link - all programmatically.
3. Is your product's value demonstrable in a single interaction?
Agent attention spans are, if anything, shorter than human ones. An agent evaluating a tool will make a judgement based on the first interaction. If your product requires weeks of data accumulation before it delivers insight, agents will classify it as "requires human setup" and move on. Products with fast time-to-value - results in minutes, not months - are structurally advantaged.
4. Do you have content that serves both humans and agents?
Dual-audience content is the most cost-effective component of the A2A stack. If you are already producing documentation, tutorials, or guides, the incremental cost of making them agent-parseable (structured headings, explicit API references, worked examples) is minimal. If you are starting from scratch, the investment is larger but still modest relative to the distribution surface it creates.
5. Can you deploy a persona that authentically represents your product?
Outbound A2A - the MoltBook model - requires a synthetic persona that can engage in unscripted, contextual conversations about your product's value proposition. This is harder than it sounds. The persona needs to be helpful without being promotional, knowledgeable without being pedantic, and conversational without being vapid. If your product is straightforward enough that a well-prompted LLM can represent it accurately, you qualify. If your product's value proposition requires nuanced human judgement to articulate, you may need to wait.
6. Is there somewhere for your agent to go?
The outbound side requires platforms where agents congregate. As of writing, MoltBook is the most established, but the number of agent-interaction platforms is growing. Discord servers with agent participants, developer forums where agents contribute, and emerging agent-to-agent protocols all represent potential surfaces. If your target customer's agents are not present on any of these platforms, outbound A2A is premature for your market.
If you answered yes to the first three questions, you can build the inbound side of the stack immediately. If you answered yes to all six, you have the ingredients for the full A2A playbook.
Where This Goes
We are at the earliest stage of what we believe will become a meaningful distribution channel. The infrastructure is primitive. The platforms are nascent. The measurement tools barely exist. And the results, while promising, are drawn from a small number of interactions on a single platform.
But the structural dynamics are clear. AI agents are proliferating. They are being given increasing autonomy to research, evaluate, and recommend tools. The companies that make their products discoverable and invocable by those agents will capture a distribution advantage that compounds over time. And the companies that deploy their own agents to engage consultatively in agent-native environments will build a sales channel with economics that human-mediated approaches cannot match.
The sales stack we have described here - Claude Code skills for agent discovery, an API optimised for agent invocation, free tier access for zero-friction trial, dual-audience content, and a synthetic persona for outbound engagement - is not the definitive A2A playbook. It is the first draft of one, written by a company that had the unusual advantage of building the same technology it needed to deploy.
For Ditto specifically, the A2A stack represents something close to a structural moat. Our product is synthetic humans. Our sales force is a synthetic human. The research personas that power our platform are the same technology that powers our sales agent. The product demonstration is indistinguishable from the product itself. We did not design this convergence. We noticed it after the fact, which is often how the most durable advantages emerge.
The agents are already talking to each other. The question is whether your product is part of the conversation.
Disclosure: The author is co-founder of [Ditto](https://askditto.io), which is described extensively in this article and stands to benefit commercially from the adoption of A2A distribution strategies. The MoltBook experiment, Claude Code skills, and API infrastructure described are real, and the engagement metrics are reported from actual activity logs. The framework and checklist reflect our experience and our biases in equal measure. Readers should evaluate accordingly.
Phillip Gales is co-founder at [Ditto](https://askditto.io). He writes about synthetic research, agent commerce, and the occasionally uncomfortable overlap between the two.