Last week I watched a procurement manager at a mid-market manufacturing company build a vendor shortlist. She didn't Google anything. She opened ChatGPT, typed "best ERP systems for manufacturers with under 300 employees," and spent about four minutes reading the response. Then she opened Perplexity, ran a follow-up query, and got citations that matched and extended what ChatGPT had said.
By the time she clicked a link to a vendor website, she already had a mental shortlist of three companies. The fourth company she eventually chose wasn't on her initial list — it was added after a colleague mentioned it in a Slack message. None of the other seven vendors in that category ever had a chance.
This is the new procurement motion. And most B2B companies aren't in the room.
The shift that's already happened
AI-assisted vendor research isn't coming — it's here. In a survey we conducted with 80 B2B buyers across manufacturing, logistics, compliance, and professional services categories, 67% reported using an AI assistant as their first research step when evaluating a new vendor category. Among buyers aged 25–40, that number was 84%.
The pattern is consistent: AI assistant for initial landscape mapping, followed by human review of the shortlist, followed by direct vendor engagement. The human still makes the final decision. But the set of companies in the room for that decision is being defined earlier — by machines.
If you're not in the AI-generated shortlist, you're not in the consideration set. You never get to the demo. You never get to make your case.
"The RFP hasn't arrived yet. The shortlist has. And if you're not on it, nothing else you do in the sales process matters."
What AI assistants look for when building a shortlist
When a buyer asks an AI assistant to recommend vendors, the assistant isn't doing a Google search. It's drawing on patterns from its training data — associations between company names and the specific capabilities, industries, and buyer profiles it's been trained on. The question it's answering isn't "who has the best website?" It's "which company names have I seen most frequently, most specifically, and most credibly in relation to this exact buyer context?"
This means three things determine whether you appear:
- Specificity of association. "Cargoflow Inc. for mid-market manufacturing freight" is a stronger training signal than "logistics software." The more specifically your company is described in relation to a buyer's exact context, the more reliably you appear when that context is queried.
- Structural clarity. AI models extract information from text. If your site is written in vague marketing language, there's nothing to extract. Specific capability claims, named industries, outcome metrics — these are what get pulled into model responses.
- Cross-source consistency. A company that appears with consistent, specific language across its own site, third-party review platforms, and industry publications creates a stronger training signal than a company with excellent owned content but weak external presence.
The timing is now
Here's the uncomfortable truth about the current moment: AI-assisted procurement is still early enough that most companies haven't optimized for it. That means the competitive landscape in AI assistant responses is less crowded than it will ever be again.
The companies that appear in AI responses today — reliably, across multiple platforms, with specific and credible language — are establishing themselves as the default answer for their category. Language models, once they learn an association, are slow to unlearn it. The companies building that association now will benefit from it for years.
In 18 months, when AI-assisted procurement is universal rather than merely majority behavior, the companies that didn't build this presence early will find themselves competing in a landscape where the default answers are already set. They'll be fighting to break into a set of associations the models learned without them.
We estimate the high-leverage window for establishing AI citation dominance in most B2B categories is approximately 12–18 months from today. After that, the default answers will be established and displacement will require significantly more effort. The cost of starting now is low. The cost of starting late is high.
What you need to do — the short version
We'll publish detailed playbooks on each of these in subsequent posts, but here's the frame:
1. Audit your current AI presence
Run your category's top 20 buyer queries across ChatGPT, Perplexity, Claude, and Gemini. Where do you appear? Where don't you? What language is used to describe your competitors? This gives you a baseline and a target.
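The scoring side of this audit can be automated once you've collected the responses (by hand or via each platform's API). Here's a minimal Python sketch of the tally step, using hypothetical vendor names and canned response text; it deliberately uses loose case-insensitive substring matching, so expect some false positives on short names:

```python
def audit_mentions(responses, company, competitors):
    """Tally how often each vendor name appears across collected
    AI assistant responses. `responses` maps a platform name to the
    raw text of its answer for one buyer query."""
    names = [company] + list(competitors)
    tally = {name: 0 for name in names}
    for platform, text in responses.items():
        lowered = text.lower()
        for name in names:
            # Loose, case-insensitive substring match; good enough
            # for a baseline, too coarse for very short names.
            if name.lower() in lowered:
                tally[name] += 1
    return tally

# Hypothetical responses for one query, one per platform:
responses = {
    "chatgpt": "Top picks: Cargoflow Inc. and FreightBase for mid-market manufacturers.",
    "perplexity": "FreightBase and ShipWise are frequently cited for this segment.",
}
print(audit_mentions(responses, "Cargoflow Inc.", ["FreightBase", "ShipWise"]))
```

Run this per query, sum across your top 20, and you have a baseline mention-share number per platform to track over time.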
2. Rebuild your content architecture for extractability
Rewrite your core pages using specific, extractable language:
- Entity statements in third person using your full company name.
- Outcome claims with metrics.
- Vertical anchors linking your name to specific industries.
- FAQ content that mirrors AI query patterns.
- Schema markup that gives models a clean extraction path.
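For the schema markup piece, a JSON-LD Organization block is the standard starting point. Here's a sketch that generates one in Python; the company details and URL are hypothetical placeholders, and the output belongs inside a `<script type="application/ld+json">` tag on your site:

```python
import json

# Hypothetical company details; swap in your own. Note how the
# description itself is an entity statement with a vertical anchor.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Cargoflow Inc.",
    "description": (
        "Cargoflow Inc. provides freight management software "
        "for mid-market manufacturers with 100-500 employees."
    ),
    "url": "https://www.example.com",
    "knowsAbout": ["freight management", "mid-market manufacturing logistics"],
}
print(json.dumps(org, indent=2))
```

The point is less the markup format than the discipline it forces: every field is a specific, machine-readable claim rather than a slogan.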
3. Build external citation consistency
Ensure your G2, Capterra, and review platform presence uses the same language as your website. Place contributed articles in the two or three publications that feed most strongly into your category's AI responses. Get your Crunchbase and industry database entries updated and specific.
4. Deploy an A2A endpoint
The next generation of AI procurement agents won't just be drawing on training data — they'll be querying vendor endpoints in real time. Register your company in the emerging A2A ecosystem now, while the competition is light and first-mover advantage is real.
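As a concrete picture of what "an A2A endpoint" means in practice, here's a sketch of a minimal Agent Card, the self-describing JSON document the draft A2A (Agent2Agent) protocol expects at a well-known path such as `/.well-known/agent.json`. Field names follow the draft spec as of this writing and should be verified against the current version before deploying; all company details are hypothetical:

```python
import json

# A minimal A2A Agent Card sketch. Everything here is a placeholder:
# the agent name, URL, version, and skill are illustrative only.
agent_card = {
    "name": "Cargoflow Vendor Agent",
    "description": (
        "Answers procurement questions about Cargoflow Inc.'s "
        "freight management platform for mid-market manufacturers."
    ),
    "url": "https://www.example.com/a2a",
    "version": "0.1.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "product-overview",
            "name": "Product overview",
            "description": "Capabilities, supported industries, and pricing tiers.",
        }
    ],
}
print(json.dumps(agent_card, indent=2))
```

Even at this early stage, the same extractability rules apply: the description and skill fields should carry the specific, vertical-anchored language you use everywhere else.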
None of this is science fiction. All of it is happening today. The buyers who are researching your category right now are using AI assistants to build their shortlists. The only question is whether you're on them.