
Citations

Understand where AI models learn about your brand - and how to influence what they say.

8 min read · Updated Apr 30, 2026
What you'll learn
  • What a citation actually is in Trakkr (and what it isn't)
  • Where citation data comes from - the AI platforms we track
  • How to read every tab on the Citations page
  • The connection between citations and visibility

Your visibility score tells you whether AI mentions your brand. Citations tell you why - and more importantly, how to change it.


What counts as a citation

A citation is a URL that an AI platform returns as a source for its answer. When you ask Perplexity "best CRM for small business," the answer text is one thing - the links it cites underneath are the citations. Trakkr collects every citation across every prompt you track and turns them into a searchable picture of your visibility.

That's different from a mention. A mention is when your brand name appears inside the answer text. You can be cited without being mentioned (a Wirecutter review of your product gets cited, but the answer talks about something else), and you can be mentioned without being cited (the AI names you from training data, with no link).
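If you like to think in data terms, here's a minimal sketch of that distinction. The field names, brands, and URLs are hypothetical, not Trakkr's data model:

```python
# Minimal sketch of the mention-vs-citation distinction.
# All names and URLs here are invented for illustration.
def classify_presence(answer_text: str, cited_urls: list[str],
                      brand_name: str, brand_pages: set[str]) -> dict:
    """brand_pages: URLs known to cover the brand (your site, reviews of you)."""
    mentioned = brand_name.lower() in answer_text.lower()
    cited = any(url in brand_pages for url in cited_urls)
    return {"mentioned": mentioned, "cited": cited}

# Cited but not mentioned: a review of your product is a source,
# yet the answer text never names you.
print(classify_presence(
    answer_text="Most small teams are better served by a simple pipeline tool.",
    cited_urls=["https://example-reviews.com/best-crm-2025"],
    brand_name="Acme CRM",
    brand_pages={"https://example-reviews.com/best-crm-2025", "https://acmecrm.com"},
))  # -> {'mentioned': False, 'cited': True}
```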

The Citations page is about cited URLs. For mention-level analysis, head to the Perception page.


Where citation data comes from

Trakkr captures citations from the three AI platforms that natively return sources for every answer:

Platform | What we track
ChatGPT Search (OpenAI web search) | URLs returned as sources for web-search-enabled answers
Google AI Overviews | The reference links Google attaches to AI Overview snippets
Perplexity | Every URL in Perplexity's citation list

Each prompt you track in Trakkr is run against all three platforms. Every URL that comes back is captured, normalized (UTM parameters stripped, domain extracted, deduplicated), and analyzed - we fetch the page, classify the source type, score the sentiment toward your brand, and extract any competitor mentions.
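For the curious, here's a rough sketch of what that kind of URL normalization looks like - strip tracking parameters, pull out the domain, deduplicate. It illustrates the idea, not Trakkr's actual pipeline:

```python
# Rough sketch of citation URL normalization, for illustration only.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def normalize(url: str) -> str:
    parts = urlparse(url)
    # Drop tracking parameters such as utm_source, utm_medium, utm_campaign.
    query = [(k, v) for k, v in parse_qsl(parts.query) if not k.startswith("utm_")]
    return urlunparse(parts._replace(query=urlencode(query), fragment=""))

def dedupe(urls: list[str]) -> dict[str, set[str]]:
    """Group unique normalized URLs by domain."""
    by_domain: dict[str, set[str]] = {}
    for url in urls:
        clean = normalize(url)
        by_domain.setdefault(urlparse(clean).netloc, set()).add(clean)
    return by_domain

citations = [
    "https://www.runnersworld.com/gear/a123/best-shoes/?utm_source=chatgpt.com",
    "https://www.runnersworld.com/gear/a123/best-shoes/",
]
print(dedupe(citations))  # one domain, one unique URL
```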

You'll see "ChatGPT," "Google AIO," and "Perplexity" badges throughout the Citations page. Filter by them to see which platforms are driving (or missing) your coverage.

Tip
Other models like Claude or stock ChatGPT (without web search) generate answers from training data and don't return citations. They influence visibility - but Citations specifically tracks the platforms that show their work.

The source problem

Here's what most people don't realize about AI search: every response is built from sources. When ChatGPT recommends "the best running shoes for marathons," it's not making that up. It's drawing from Wirecutter reviews, Runner's World articles, Reddit threads, and thousands of other pages it has encountered.

The same is true for Perplexity, Claude, and Gemini. Different models, different training data, different real-time retrieval - but they all rely on sources.

Your visibility is a downstream effect of your citation profile.

If Nike.com appears in 67% of running shoe prompts, it's because Nike shows up in the sources those models reference. If Adidas appears in 72%, it's because they're cited more often, on more authoritative sites, in more favorable contexts.
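A toy calculation makes the relationship concrete: count, for each tracked prompt, whether a brand's domain shows up among the cited sources, then divide by the number of prompts. The data below is invented for the example:

```python
# Toy illustration: visibility as the share of prompts whose cited
# sources include a brand's domain. All data here is invented.
prompt_citations = {
    "best trail running shoes":  {"nike.com", "runnersworld.com", "adidas.com"},
    "best marathon shoes":       {"adidas.com", "wirecutter.com"},
    "best budget running shoes": {"adidas.com", "newbalance.com", "nike.com"},
}

def appearance_rate(domain: str) -> float:
    hits = sum(domain in sources for sources in prompt_citations.values())
    return hits / len(prompt_citations)

print(f"nike.com:   {appearance_rate('nike.com'):.0%}")    # 67%
print(f"adidas.com: {appearance_rate('adidas.com'):.0%}")  # 100%
```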

Tip
This is the mental shift that matters: you're not optimizing for AI directly. You're optimizing for the sources that AI trusts. Get mentioned on the right sites, in the right way, and AI visibility follows.

How AI models use sources

Not all citations are created equal. Understanding how models weight sources helps you prioritize.

Training data citations

ChatGPT, Claude, and Gemini were all trained on massive datasets - essentially a snapshot of the internet. Content that existed during training is baked into the model's knowledge.

What this means for you:

  • Older, established coverage persists even if articles disappear
  • Getting mentioned in widely-linked content has lasting effects
  • Major publications carry more weight than obscure blogs

Real-time retrieval

Perplexity is different. It searches the web in real time for every query. ChatGPT Search works similarly. This means recent content matters more.

What this means for you:

  • Fresh content can impact visibility immediately
  • News coverage shows up faster on retrieval-based models
  • You can see changes from new mentions within days, not months

The authority signal

All models weight authoritative sources more heavily. A mention on TechCrunch influences AI responses more than a mention on a random blog. Domain authority, backlinks, and content quality all factor in.

Source Type | AI Weight | Why
Major publications (NYT, TechCrunch) | Very High | Widely cited, high authority
Review sites (G2, Wirecutter) | High | Product-focused, trusted
Industry publications | High | Niche authority
Wikipedia | Very High | Crawled extensively, treated as fact
Reddit/forums | Medium | Real user opinions, but variable quality
Company blogs | Medium | First-party content, potential bias
Small blogs | Low | Less reach, less authority
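One way to use this table is as a set of rough weights when deciding which citations - or which gaps - matter most. The numbers below are illustrative only, not Trakkr's scoring model:

```python
# Illustrative weighting of citations by source type, loosely following
# the table above. The weights are made up for the example.
SOURCE_WEIGHTS = {
    "major_publication": 1.0,
    "wikipedia": 1.0,
    "review_site": 0.8,
    "industry_publication": 0.8,
    "forum": 0.5,
    "company_blog": 0.5,
    "small_blog": 0.2,
}

def weighted_citation_score(citations: list[tuple[str, str]]) -> float:
    """citations: (domain, source_type) pairs for one brand."""
    return sum(SOURCE_WEIGHTS.get(source_type, 0.2) for _, source_type in citations)

brand = [("techcrunch.com", "major_publication"),
         ("g2.com", "review_site"),
         ("somesmallblog.net", "small_blog")]
print(weighted_citation_score(brand))  # 2.0
```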

Citations drive three things

1. Whether you appear at all

If no authoritative source mentions your brand in the context of "best CRM software," AI won't recommend you for that query. Simple as that.

The gap problem: When Nike checks the Heatmap and sees that Runner's World cites Adidas and New Balance but not Nike for "best trail running shoes" - that's a gap. Nike is invisible on that source for that topic. AI models drawing from Runner's World won't mention Nike.

2. How you're positioned

Being mentioned isn't enough. How you're mentioned matters.

Consider these two scenarios for the same brand:

Scenario A: "HubSpot is the leading CRM platform for growing businesses, with best-in-class marketing automation."

Scenario B: "HubSpot is an alternative to Salesforce for teams who can't afford enterprise pricing."

Both are citations. Both cause the brand to appear in AI responses. But one positions you as a leader, the other as a budget option. AI models reflect this framing.

3. Your sentiment score

Trakkr analyzes whether citations position you positively, neutrally, or negatively. Sentiment matters because:

  • Negative citations can drag down overall AI perception
  • Positive citations amplify when multiple sources agree
  • Mixed sentiment creates hedged AI responses ("Some users report...")
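As a rough mental model (not Trakkr's actual formula), you can picture overall sentiment as the balance of positive, neutral, and negative citations on a -1 to +1 scale:

```python
# Rough mental model of aggregating per-citation sentiment into one
# score on a -1..+1 scale. Not Trakkr's actual formula.
SENTIMENT_VALUES = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

def overall_sentiment(labels: list[str]) -> float:
    if not labels:
        return 0.0
    return sum(SENTIMENT_VALUES[label] for label in labels) / len(labels)

# Mixed coverage ("Some users report...") lands near zero; agreement
# across sources pushes the score toward +1 or -1.
print(overall_sentiment(["positive", "positive", "negative", "neutral"]))  # 0.25
```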

The citation improvement cycle

Improving your AI visibility isn't a one-time audit. It's an ongoing cycle.

Discover → Analyze → Act → Measure

1. Discover your citation landscape

Where do you appear? Where don't you? Which sources cite competitors but not you?

This is what the Citations page shows you. Start here.

2. Analyze what's driving your scores

Look at the specific content. Are high-authority sources positioning you well? Are there gaps on sites that matter? Is there negative coverage you need to address?

3. Act on what you learn

This might mean:

  • Creating content that earns citations naturally
  • Reaching out to publications for reviews or inclusion
  • Responding to negative coverage
  • Getting listed on comparison sites

4. Measure the impact

Run research again. Did your gaps close? Did new positive citations appear? Are you winning on prompts you were losing before?

Then repeat. Brands that treat this as continuous see the best results.


What Citations shows you

The Citations page has six views, accessible via the tab bar. Use the ?view= URL parameter to link directly to any view.

Sources (default)

"Who's talking about me, and what are they saying?"

The split-pane explorer shows every website and publication that cites your brand or competitors. Click any source to see:

  • Exact quotes and context
  • Sentiment analysis
  • Which AI models use this source
  • How competitors are covered

Deep dive into Sources →

Pages

"Which specific pages are AI models citing most?"

A flat, cross-domain page-level view. While Sources groups by domain, Pages shows individual URLs ranked by citation influence - the actual content AI draws from.

Deep dive into Pages →

Queries

"What questions lead to citations about me?"

See the search queries that trigger citations. This reveals user intent and content opportunities:

  • Discovery queries ("best project management tools")
  • Comparison queries ("Notion vs Confluence")
  • Problem queries ("Notion slow loading fix")

Deep dive into Queries →

Feed

"What changed in my citation landscape?"

A chronological changelog of citation events - new pages appearing, lost citations, sentiment shifts, and competitor movements. Think of it as your citation activity feed.

Deep dive into Feed →

Heatmap

"Where do I win vs competitors across all sources?"

Visual grid of your citation coverage vs competitors. Green means you're cited; red means competitors are cited and you're not. Patterns become obvious:

  • Vertical green stripe = strong coverage
  • Horizontal red row = priority gap on a single source
  • Scattered red = multiple opportunities to close
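Conceptually, the Heatmap is a source-by-brand grid where each cell records whether that brand is cited on that source, and a gap is a cell where a competitor is cited and you aren't. A minimal sketch with invented data (not the Trakkr API):

```python
# Conceptual model of the Heatmap: sources x brands, True = cited.
# Data and structure are invented for illustration, not the Trakkr API.
heatmap = {
    "runnersworld.com": {"nike": False, "adidas": True,  "newbalance": True},
    "wirecutter.com":   {"nike": True,  "adidas": True,  "newbalance": False},
    "reddit.com":       {"nike": True,  "adidas": False, "newbalance": False},
}

def gaps(brand: str) -> list[str]:
    """Sources where at least one competitor is cited but `brand` is not."""
    return [
        source for source, cells in heatmap.items()
        if not cells[brand] and any(cited for b, cited in cells.items() if b != brand)
    ]

print(gaps("nike"))  # ['runnersworld.com'] - the priority gap to close
```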

Deep dive into Heatmap →

Outreach

"Which gaps should I prioritize?"

Prioritized list of sources worth pursuing, ranked by potential impact. Includes contact information and submission guidelines when available.

Deep dive into Outreach →


Quick wins

Three things you can do today:

1. Find your highest-impact gap

Go to Citations → Heatmap. Filter to "Gaps only." Sort by authority. The first red cell you see is your biggest opportunity.

2. Understand your positioning

Go to Citations → Sources. Click your top-cited source. Read what they actually say about you. Is this how you want to be positioned?

3. Check competitor coverage

Pick your main competitor. How many sources cite them but not you? That number is the size of your opportunity list.


The bigger picture

Citations connect to everything else in Trakkr:

  • Dashboard visibility comes from citation coverage
  • Competitor rankings reflect relative citation strength
  • Prompt performance ties back to which sources cite you for which topics
  • Content opportunities emerge from citation gaps

When you wonder "why is my visibility what it is?" - Citations has the answer. When you ask "how do I improve?" - Citations shows the path.

Tip
Most users start with Sources (depth) and Heatmap (breadth), then use Pages and Feed to stay on top of changes. Queries reveal intent, and Outreach turns gaps into action items.
