How to Track Whether ChatGPT, Claude, and Perplexity Name Your Brand
By The Vyrable Team
For two decades, "are we visible?" meant one thing: where do we rank on Google?
For the decade after that, it also meant: do we appear in social feeds, in the knowledge panel on the right, in the people-also-ask carousel?
Ask the question today, and the honest answer is more uncomfortable. The fastest-growing slice of buyer-intent traffic doesn't go to a search engine at all. It goes to a chat window. People ask ChatGPT, Claude, Perplexity, and Gemini questions like "what's the best CRM for a 10-person agency?" — and the model answers with a list of three to five names.
Either you're on the list, or you're not. Nobody clicks past the first answer in a chat interface. There's no second page.
This is the hard part of AI visibility, and most marketers haven't started measuring it yet.
The new visibility stack
Search visibility has three layers — keywords you rank for, the click-through to your site, the time on page. AI visibility has different layers, and they require different instruments.
Layer 1 — knowledge presence. Does the model know your brand exists? Has it been mentioned in enough sources during training that the LLM can recall you when prompted?
Layer 2 — answer position. When the model lists vendors, where do you fall — first, third, ninth? Position matters in a chat interface even more than in a search results page, because nobody scrolls.
Layer 3 — competitive context. Who's named alongside you? Are you bracketed with the right peers, or are you stuck between a discontinued competitor and a side-project nobody uses?
Together, these three layers form what we call your AI Knowledge Baseline — the model's snapshot of who you are, where you fit, and how confidently it can recommend you to a buyer.
Why this matters now, not later
The cynical take is that AI visibility is a 2027 problem. The reality is that buyer behaviour has already shifted.
OpenAI published research in 2025 showing that the dominant uses of ChatGPT split roughly into commercial intent ("compare X and Y", "what should I buy"), informational intent ("explain X"), and generative intent ("write me Y"). The first bucket — commercial intent — is the same set of queries that fill the bottom of every B2B funnel. They're high-value, late-stage, decision-driving queries. And they're moving from search to chat.
If your brand isn't named when an LLM answers them, you're invisible to that traffic. It doesn't matter how clever your retargeting pixel is.
The good news: this is a tractable, measurable problem. The bad news: most measurement tools haven't caught up.
What a measurement framework looks like
A workable framework needs to do four things:
1. Watch a curated list of buyer-intent prompts. Not generic questions; the actual questions your real prospects ask before they buy. Five for a starter set, twenty to thirty for serious tracking.
2. Run them on a cadence. Once a quarter is meaningless. Once a week is a starting point. Once a day is what you want as soon as the data justifies it, because model behaviour shifts between versions and retraining cycles faster than you'd think.
3. Track three signals per scan. Whether your brand was mentioned at all. Where in the response it appeared (position 1 versus position 7). Which competitors got named in the same answer.
4. Report the deltas, not just the snapshot. A 60% mention rate is meaningless without the prior week's 70% beside it. The story is the trend, not the number.
You can build this yourself with a half-day of Python and the OpenAI / Anthropic / Perplexity APIs. Most teams won't, because the tooling around alerts, drift detection, competitor delta tracking, and content briefs is the actual work — the API calls are five minutes.
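The half-day build is genuinely small. A minimal sketch, assuming the official `openai` Python client and using hypothetical brand, competitor, and prompt values; the scoring is plain string matching, kept separate from the API call so the same scorer works on Claude or Perplexity responses:

```python
from __future__ import annotations

# Hypothetical values -- substitute your own brand, rivals, and prompts.
BRAND = "YourBrand"
COMPETITORS = ["VendorA", "VendorB", "VendorC"]
PROMPTS = [
    "What's the best CRM for a 10-person agency?",
    "Best alternative to VendorA for small teams?",
]

def score_response(text: str, brand: str, competitors: list[str]) -> dict:
    """The three signals per scan: mentioned at all, where, and with whom."""
    lower = text.lower()
    pos = lower.find(brand.lower())
    return {
        "mentioned": pos != -1,
        "position": pos,  # character offset in the answer, -1 if absent
        "competitors": [c for c in competitors if c.lower() in lower],
    }

def run_scan(model: str = "gpt-4o") -> list[dict]:
    """One pass over the prompt list against a single LLM."""
    from openai import OpenAI  # third-party client, imported lazily
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    results = []
    for prompt in PROMPTS:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        results.append(
            score_response(reply.choices[0].message.content, BRAND, COMPETITORS)
        )
    return results
```

The API loop really is the five-minute part; the alerting, drift detection, and competitor-delta layers the paragraph above describes are where the real effort goes.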
A five-prompt experiment anyone can run
If you want to know where your brand stands without building anything, do this in twenty minutes:
1. Pick five questions a buyer would ask in your category. Not "what's a good marketing platform" — too broad. Try "what's the best AI content tool for solo founders" or "best alternative to [your biggest competitor] for small teams". Specific. Buyer-shaped.
2. Run them through ChatGPT (or any frontier model). Plain-English prompts, no system instructions, no tricks.
3. Score each response on three things. Was your brand named? Was a competitor named? If both, who came first?
4. Read the response carefully. When you're not named, who *is*? Is it a fair comparison set, or is the model recommending things that aren't really your competitors? That tells you whether the model has the wrong picture of your category entirely, or simply hasn't heard of you.
5. Run it again next month. The shift between runs is the data point. A brand that was invisible in January and getting named in March has done something that worked. A brand that's slipping is losing ground while you watch.
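Step 3's "who came first" scoring and step 5's month-over-month comparison reduce to a few lines. A sketch, assuming hypothetical vendor names and treating each monthly run as a list of per-prompt booleans (named or not):

```python
from __future__ import annotations

def first_named(response: str, brand: str, competitors: list[str]) -> str | None:
    """Whichever name appears earliest in the answer, or None if nobody does."""
    lower = response.lower()
    hits = [(lower.find(name.lower()), name)
            for name in [brand, *competitors]
            if name.lower() in lower]
    return min(hits)[1] if hits else None

def mention_rate(run: list[bool]) -> float:
    """Share of prompts in one run where the brand was named."""
    return sum(run) / len(run)

def monthly_delta(last_run: list[bool], this_run: list[bool]) -> float:
    """The data point that matters: the shift between runs."""
    return mention_rate(this_run) - mention_rate(last_run)
```

A positive delta means the work between scans is landing; a negative one means you're losing ground while you watch.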
If you want this in a single click instead of a notebook, you can do it with the free public AI visibility check — paste your brand and a buyer-intent question, and we'll run a real LLM scan and return the result with brand and competitor mentions highlighted inline. It's free, no signup, three scans per IP per day.
What to do when you're invisible
The first scan is usually demoralising. Your brand isn't named. Three competitors are. Now what?
The wrong answer is to write a generic blog post and hope. LLMs don't crawl and rank pages the way Google does. Pumping out two thousand words of "what is X" content might help; it might not.
The right answer is to work backwards from the prompt to the answer.
If the model says the best CRM for solo founders is Vendor A, B, and C — read the actual response. What does it praise about each one? "Vendor A has a generous free tier." "Vendor B has the best mobile app." "Vendor C integrates with Slack."
Now you know what the model knows. Each phrase is something the model has read enough times during training that it's confident enough to assert it.
Your job is to seed the next training cycle with the same kinds of statements about your brand, in places the model will read. That means:
- Authoritative pages on your own site that state, plainly, what you are. Not aspirational marketing copy — declarative product copy. "We are a CRM for solo founders. We have a free tier of X. We integrate with Y."
- Listings and round-up pages that include you among the comparison set. Even mid-tier "10 best X tools" articles get scraped and absorbed.
- Content on third-party sites — guest posts, podcast appearances, citations in industry reports — that uses the same factual phrasing. Repetition is what gets a fact into the model.
The "closed loop" — and the differentiator we built into Vyrable's AI Knowledge Baseline tracker — is a one-click button on each underperforming prompt that drops a content brief on your kanban with the gap baked in. Mention rate, top competitors named, the language they're being praised in. Generate, publish, re-measure on the next scan.
What the leaders are doing
The companies that are visible in AI answers in 2026 fall into two camps.
The first camp is the obvious one: brands so big and so well-covered that the model couldn't avoid mentioning them. HubSpot. Stripe. Shopify. They have decades of organic mentions in everything the model trained on.
The second camp is more interesting: brands you've maybe heard of, but only just, that nonetheless appear consistently. Look closely and you'll find a pattern. They have a clear category positioning. They have published comparison and alternatives pages. They show up in industry round-ups. They have a podcast or interview presence that landed them in transcribed corpora.
The first camp is impossible to copy. The second is a playbook.
Tracking it long-term
Once you know whether you're visible — once you've run the measurement and you've started the work of being more visible — the question becomes: are you making progress?
That's where a tracker earns its keep. A spreadsheet of monthly scores is a start. A real platform that runs the prompts on a daily cadence, alerts you when scores drop, breaks down per-LLM coverage (because ChatGPT and Claude often disagree), and tracks competitor share-of-voice over time — that's the version that actually changes behaviour, because the data shows up before you'd otherwise notice.
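The "alerts you when scores drop" piece doesn't have to be elaborate. A minimal sketch of a drop detector over per-LLM histories of mention rates; the 10-point threshold is an assumption you'd tune to your own scan noise:

```python
from __future__ import annotations

def should_alert(history: list[float], drop_threshold: float = 0.10) -> bool:
    """True when the latest mention rate falls more than
    `drop_threshold` below the previous scan's rate."""
    if len(history) < 2:
        return False
    return history[-2] - history[-1] > drop_threshold

def alerts_by_model(histories: dict[str, list[float]]) -> list[str]:
    """Per-LLM check, since ChatGPT and Claude often disagree."""
    return [model for model, hist in histories.items() if should_alert(hist)]
```

Run this after every scheduled scan and the drop shows up in your inbox before you'd otherwise notice it in a dashboard.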
Vyrable's AI Knowledge Baseline does this on a tier-appropriate cadence: 5 prompts weekly on the free tier, 30 daily across two LLMs on Pro, 60 daily across three LLMs on Agency. Every signup gets a 14-day Pro trial included.
But the platform is the smaller part of the answer. The bigger part is the discipline of treating AI visibility as a metric that matters. Most teams haven't yet, which is exactly why the next twelve months will reward the teams that do.
Where to start
Three steps, in order:
1. Run a baseline scan. Five prompts, your brand, your top three competitors. Either by hand against ChatGPT, or via the free AI visibility check we built for this exact purpose.
2. Pick the worst-performing prompt. Not the one with the most generic wording, but the one closest to your actual buyer's question on which you scored zero.
3. Write the canonical answer. A page on your site, an article, a guest post — phrased declaratively, naming what you are, who you serve, why you're a good fit.
Re-scan in 30 days. The data will tell you whether the work is working.
The teams who start now will, on average, be visible by 2027. The teams who wait will spend 2027 explaining to their boards why nobody's heard of them.