The 33-point gap that changes everything
In May 2026, the PharmaGEO public index measured answer rates for every major brand across four therapeutic areas, three AI engines, and three languages. One number stood out above all others.
Adbry (lebrikizumab) in the Atopic Dermatitis category posted an Answer Rate of 41.4% in OpenAI — a top-four result in one of the most competitive inflammatory dermatology fields. That same brand, that same therapeutic area, that same week, posted an Answer Rate of 8.2% in Perplexity — a rank-8 niche mention. The gap between those two numbers is 33.2 percentage points.
That gap is not a rounding error. It is not a prompt-wording artifact. It is a structural property of how different AI engines retrieve, weight, and synthesize medical content. The operational consequence is direct: any pharma brand team treating "AI visibility" as a single unified metric is measuring the wrong thing. There are six distinct visibility surfaces (three engines, each crossed with English and the brand's primary market language), and any one of them can swing a brand from category leader to afterthought depending on where the query lands.
Three engines. Three distinct ranking systems.
The PharmaGEO public index tracks three major generative engines — OpenAI (ChatGPT), Google Gemini, and Perplexity — across English, French, and Spanish. Each engine is a distinct visibility surface. They pull from different source pools, apply different retrieval logic, and produce systematically different brand rankings even when queried with identical prompts. Below are the three most important structural patterns that distinguish them.
Pattern 1: Gemini amplifies the category leader
Gemini's retrieval architecture produces a winner-take-most dynamic that is absent in OpenAI. Across every therapeutic area in the May 2026 index, the top-ranked brand in Gemini holds materially more Share of Voice than the same brand in OpenAI — not because the brand is more visible, but because Gemini surfaces fewer competing products per response.
| Therapeutic Area | Top Brand | OpenAI SOV | Gemini SOV | Delta |
|---|---|---|---|---|
| Atopic Dermatitis | Dupixent | 16.3% | 23.5% | +7.2 pp |
| Obesity | Wegovy | 22.2% | 34.3% | +12.1 pp |
Source: May 2026 PharmaGEO public index.
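The delta column is simple percentage-point arithmetic, and teams tracking more therapeutic areas can reproduce it programmatically. A minimal Python sketch, with the two rows above transcribed by hand (the dictionary is illustrative, not a PharmaGEO API):

```python
# SOV (Share of Voice) by engine, transcribed from the May 2026 table above.
sov = {
    "Atopic Dermatitis": {"brand": "Dupixent", "openai": 16.3, "gemini": 23.5},
    "Obesity":           {"brand": "Wegovy",   "openai": 22.2, "gemini": 34.3},
}

for ta, row in sov.items():
    # Concentration gain in percentage points for the category leader.
    delta = row["gemini"] - row["openai"]
    print(f"{ta}: {row['brand']} +{delta:.1f} pp in Gemini")
```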
A brand that is already dominant gets more dominant in Gemini, while a strong number two or three may see its Gemini SOV compress relative to its OpenAI position. For brands in tight three-way competitive fields — Atopic Dermatitis, Psoriasis — the Gemini concentration effect can silently erode what looks like a healthy competitive ranking in aggregate reporting.
Pattern 2: Perplexity surfaces the long tail
Perplexity is the most retrieval-diverse of the three engines. The index's public Lung Cancer table shows 10 distinct products with non-zero Share of Voice in OpenAI; Perplexity surfaces 29 in the same therapeutic area. In Psoriasis, Perplexity shows 19 products where OpenAI shows 10. In Atopic Dermatitis, Perplexity's top-10 includes Neoral (5.2%) and Elocon (4.1%), generic-era products that OpenAI omits in favor of newer JAK inhibitors.
Perplexity is the engine most likely to surface legacy products, off-patent molecules, and specialist-tier therapies that recency-weighted LLMs deprioritize. A brand with decades of peer-reviewed literature — even a brand whose patent has expired — can hold meaningful Share of Voice in Perplexity while barely appearing in OpenAI or Gemini.
The inverse is also true. A recently approved brand, with limited publication depth and a short citation history, will underperform in Perplexity relative to its OpenAI ranking. The novelty penalty is real, and it is engine-specific.
Pattern 3: Rank order is not conserved across engines
The most disorienting finding from the May 2026 index is that engines do not merely shift scores — they reverse competitive rank. In the Psoriasis therapeutic area, Skyrizi holds the number-two position in OpenAI at 9.4% SOV, ahead of Tremfya at fourth (8.1%). In Perplexity, those positions flip: Tremfya is second at 10%, while Skyrizi drops to sixth at 7% — a four-rank swing on one brand between two engines of equivalent maturity.
In plain terms: a brand team reporting "we are number two in AI" is reporting a fact that is true for one engine and false for another. The aggregate is misleading. The engine-level breakdown is the only defensible unit of analysis.
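Detecting these reversals is mechanical once per-engine ranks are in hand. A minimal sketch (`rank_flips` is a hypothetical helper, and the two rank dictionaries are transcribed from the Psoriasis example above):

```python
def rank_flips(ranks_a: dict, ranks_b: dict) -> list:
    """Return brand pairs whose relative order reverses between two engines."""
    brands = sorted(set(ranks_a) & set(ranks_b))
    flips = []
    for i, x in enumerate(brands):
        for y in brands[i + 1:]:
            # A pair flips when x is ahead of y in one engine but behind in the other.
            if (ranks_a[x] - ranks_a[y]) * (ranks_b[x] - ranks_b[y]) < 0:
                flips.append((x, y))
    return flips

# Ranks from the May 2026 Psoriasis example (lower = better).
openai_ranks = {"Skyrizi": 2, "Tremfya": 4}
perplexity_ranks = {"Skyrizi": 6, "Tremfya": 2}
print(rank_flips(openai_ranks, perplexity_ranks))  # [('Skyrizi', 'Tremfya')]
```

Run across a full brand roster, this turns "are we number two?" into the more useful question: "against whom, and on which surface?"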
Why engines diverge: the source-mix explanation
Engine divergence is not random. It traces directly to the source pools each engine cites. The May 2026 PharmaGEO public index includes citation-use data for Perplexity across four therapeutic areas, and the result is a stark picture: the information diet differs fundamentally by category, and the same mechanism explains why it differs by engine.
| Therapeutic Area | Top Cited Sources (Perplexity, May 2026) | Source Archetype |
|---|---|---|
| Atopic Dermatitis | aad.org (168), pmc.ncbi.nlm.nih.gov (94), aafp.org (92) | Society guideline + literature |
| Lung Cancer | nccn.org NSCLC v5.2025 PDF (136), nccn.org landing 2026 (64), NEJM KEYNOTE-189 (16) | Guideline monopoly (~218 of 258 uses = NCCN) |
| Psoriasis | pmc.ncbi.nlm.nih.gov (154), aad.org (114), psoriasis-hub.com (70), nice.org.uk (40) | Literature + society + specialist hub |
| Obesity | accessdata.fda.gov Zepbound label (36), accessdata.fda.gov Wegovy label (32), novo-pi.com Wegovy PI (26) | Regulatory + prescribing information |
Source: May 2026 PharmaGEO public index, Perplexity citation-use counts.
These are not differences in emphasis. They are entirely distinct information architectures. Oncology answers are built almost entirely from NCCN guidelines — approximately 218 of 258 Lung Cancer citation uses trace to nccn.org. Obesity answers come from FDA prescribing labels and manufacturer prescribing information. Inflammatory dermatology draws from a mixed society-literature-specialist-hub pool.
Why this invalidates the universal source playbook
The most common GEO error in pharma is treating "source optimization" as a single workstream. The May 2026 source data shows there is no universal source playbook. The right strategy in Lung Cancer is NCCN alignment. In Obesity, it is FDA label and PI optimization. In Psoriasis, it includes specialist hubs like psoriasis-hub.com and nice.org.uk that barely appear in any other TA's citation data.
An Obesity brand team that invests in society guideline coverage — the right strategy for Atopic Dermatitis — is optimizing for a source type Perplexity barely cites in their category. Engine divergence and source divergence are the same problem from two angles.
The citation graph is not stable
Even within a single engine, the citation sources are not fixed. According to the Digital Bloom 2025 AI Citation & LLM Visibility Report, the monthly volatility in the AI citation graph is 59.3% — more than half the source URLs cited in any given month were not cited the prior month. This is not gradual drift. It is a structurally unstable system.
The same report finds that the domain overlap between ChatGPT citations and Perplexity citations is only 11%. Nine in ten domains cited by one engine are not cited by the other. The implication: earning a citation in Perplexity provides almost no carryover benefit to OpenAI citation presence, and vice versa. These are separate ecosystems requiring separate content strategies.
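One way to quantify that separation on your own citation exports is a Jaccard overlap between the two domain sets. A sketch under stated assumptions: the domain lists below are invented for illustration, and the Digital Bloom report does not specify that its 11% figure is a Jaccard measure, so treat this as a comparable proxy rather than a replication.

```python
def domain_overlap(cites_a: set, cites_b: set) -> float:
    """Jaccard overlap between two engines' cited-domain sets."""
    union = cites_a | cites_b
    return len(cites_a & cites_b) / len(union) if union else 0.0

# Hypothetical domain sets; the report does not publish its raw lists.
chatgpt = {"nccn.org", "aad.org", "nejm.org", "fda.gov"}
perplexity = {"nccn.org", "pmc.ncbi.nlm.nih.gov", "nice.org.uk", "aafp.org"}
print(f"{domain_overlap(chatgpt, perplexity):.0%}")  # 14%: one shared domain of seven
```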
For brand teams accustomed to SEO, where a high-authority backlink benefits ranking across Google's unified index, this is a significant adjustment. There is no unified AI index. There are six citation graphs — three engines, two primary language pools — that intersect minimally and evolve month by month.
Brand mention depth still predicts citation likelihood
Despite source instability, one signal shows consistent predictive power. An Ahrefs study of 75,000 brands (December 2025) found that web-wide brand mention volume correlates with AI Overviews citation at r = 0.664. YouTube mentions showed an even stronger signal at r ≈ 0.737. Brands with thin mention footprints across the open web face a structural citation disadvantage regardless of which engine is queried.
Broad-based digital presence — scientific publications, press coverage, disease-state mentions, society guideline inclusions, regulatory document indexation — functions as a citation reservoir. Each engine draws from it differently, but a brand with a deep reservoir outperforms a brand with a shallow one across all six visibility surfaces, even when the draw ratios diverge.
What does engine-by-engine analysis look like in practice?
The Skyrizi/Tremfya psoriasis flip is instructive as a worked example of what engine divergence means for competitive intelligence. In OpenAI, Skyrizi holds the second rank in Psoriasis SOV at 9.4%, with Tremfya at fourth (8.1%). A brand team monitoring only aggregate AI visibility or only OpenAI would conclude they hold a consistent competitive edge over Tremfya. In Perplexity, Tremfya is second at 10%, and Skyrizi has dropped to sixth at 7% — a four-rank reversal. The competitive landscape in Perplexity is the mirror image of the competitive landscape in OpenAI.
The Adbry gap — 41.4% in OpenAI, 8.2% in Perplexity — is a more extreme version of the same pattern. Engines disagree on brand rank, on which brands appear at all, and on the sources that supply the evidence for their answers.
The long-tail problem for specialist brands
Perplexity's tendency to surface 29 products in Lung Cancer versus OpenAI's 10 creates a specific strategic question: which engine matters most for a specialist oncology brand with moderate market presence? If the target HCP audience skews toward Perplexity — a reasonable hypothesis for physicians who use real-time web search integrated with AI — the brand's ranking in Perplexity's 29-product field may be the more commercially relevant metric, even if the brand appears nowhere in OpenAI's top 10.
A consumer-facing brand in Obesity or Atopic Dermatitis, where patients use ChatGPT for general health queries, needs OpenAI visibility as the primary target, with Gemini's concentration effect as a secondary consideration. The engine that matters depends on who is asking and where — a segmentation exercise with no equivalent in traditional SEO.
Implications for brand team playbooks
The May 2026 data points to four actionable shifts for brand teams managing GEO in competitive therapeutic areas.
1. Measure six numbers, not one
Answer Rate and Share of Voice must be reported by engine and primary language — not as a blended aggregate. A single score conceals the divergence that determines whether GEO investment is working. The minimum reporting unit: OpenAI English, Gemini English, Perplexity English, plus the same three in the brand's primary market language. Six numbers. Six trend lines. Six competitive benchmarks.
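As a data structure, the six-surface reporting unit is just an engine-by-language matrix. A minimal sketch (the language pair assumes a brand whose primary market language is French; swap in your own):

```python
from itertools import product

ENGINES = ("openai", "gemini", "perplexity")
# English plus the brand's primary market language (French here, as an example).
LANGUAGES = ("en", "fr")

# One Answer Rate / SOV pair per surface; values are placeholders, not index data.
report = {
    (engine, lang): {"answer_rate": None, "sov": None}
    for engine, lang in product(ENGINES, LANGUAGES)
}

assert len(report) == 6  # six surfaces, six trend lines, six benchmarks
```

Each cell then carries its own trend line and competitive benchmark; nothing is averaged across surfaces.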
2. Build source strategies by TA, not by brand
Citation authority is category-specific. An Obesity brand team should prioritize FDA label optimization, PI quality, and NEJM-indexed clinical publication. A Psoriasis brand team needs PMC literature depth, AAD guideline inclusion, and specialist hub presence. Cross-TA source strategies produce cross-TA results — poor results in every category.
3. Treat Perplexity and OpenAI as distinct content targets
Given an 11% domain overlap between ChatGPT and Perplexity citations (per the Digital Bloom 2025 report), content that earns Perplexity citations does not automatically earn OpenAI citations. Content planning should include explicit Perplexity optimization — particularly for brands with strong legacy publication depth that may outperform in Perplexity's long-tail retrieval while underperforming in OpenAI's recency-weighted ranking.
4. Build citation depth as a compound asset
The Ahrefs brand mention correlation (r = 0.664, per the Ahrefs December 2025 analysis) and the Atopic Dermatitis data — where 2000s-era drugs like Protopic (29.9% Perplexity SOV), Elidel (20.6%), and Eucrisa (13.4%) retain strong retrieval presence purely through literature depth — point to the same principle: citation depth is a compound asset. Early investment in scientific publication, guideline inclusion, and regulatory document quality accumulates retrieval authority over years. Brands that treat GEO as a short-term campaign rather than a long-cycle publishing investment are building on sand.
The strategic error to avoid
The dominant error in pharma GEO today is treating engine performance as interchangeable. A brand team that reports strong OpenAI visibility, assumes Gemini and Perplexity follow, and builds one universal content plan will underperform in at least two of its six visibility surfaces — and may be ranked inversely to its OpenAI position in another.
The Adbry 33-point gap illustrates the cost of that error. A team tracking only OpenAI sees a top-four ranking and considers their GEO position strong. The Perplexity rank-8 result — 8.2% Answer Rate versus 41.4% in OpenAI — is invisible to that team, as is the fundamentally different brand landscape that Perplexity users encounter when researching Atopic Dermatitis treatments.
The citation graph volatility compounds the risk. With 59.3% monthly volatility in AI citations (per Digital Bloom), a brand's source portfolio can shift materially between reporting periods. Monitoring at the aggregate level masks these movements. Engine-level, source-level tracking catches them early enough to respond.
The six-visibility framework in summary
Each major AI engine — OpenAI, Gemini, Perplexity — is a separate visibility surface with its own source preferences, concentration dynamics, and long-tail depth. Each primary language you operate in multiplies that surface again. The intersection of three engines and two languages produces six distinct visibility scores for any brand in any therapeutic area. Those six scores may correlate weakly, moderately, or not at all. They require separate monitoring, separate content strategies, and separate competitive benchmarking.
There is no shortcut to a single AI visibility number. A brand that claims to be "visible in AI" without specifying the engine and language is describing a fraction of its actual AI presence — and concealing the fractions where it may be losing.
The engines are not converging. The source pools are not merging. Brand rankings do not agree across surfaces. Brand teams that accept this as the operating condition of 2026 are in a position to act. Those that wait for AI search to settle are ceding six separate competitive surfaces to brands that already understand the divergence.
Want a real audit on your brand? Request a sample report or get the full PharmaGEO Playbook.