Your 2024 SEO competitive map is wrong twice over. It is wrong once because it was built for search engines that no longer capture the majority of HCP information queries, and wrong a second time because the AI systems that replaced them do not agree with each other. The same brand can rank second on one engine and sixth on another — in the same therapeutic area, in the same week. And that rank shift happens again, with different brands, as soon as you change the query language from English to French.
The May 2026 PharmaGEO public index, covering four therapeutic areas, three major AI engines, and three languages, makes this concrete. The data documented below are not hypothetical — they are observed, scored, and reproducible competitive intelligence gaps that no traditional monitoring stack can see.
The 4-rank engine flip: why your SOV map is already obsolete
The most striking single finding in the May 2026 PharmaGEO public index is what we call the engine flip — a case where two engines of equivalent maturity rank the same drug class members in inverse order. In the psoriasis therapeutic area, OpenAI scores Skyrizi at SOV rank 2 (9.4%) and Tremfya at rank 4 (8.1%). Switch to Perplexity and the order reverses: Tremfya moves to rank 2 (10%) while Skyrizi falls to rank 6 (7%). A four-position swing on Skyrizi between engines, within a single IL-23 inhibitor class, on the same week of measurement (insight A4, May 2026 PharmaGEO public index).
Both figures represent share of total brand-token surface area in AI answers to psoriasis queries — a direct measure of how much of the AI-mediated competitive conversation each brand occupies. A brand team relying on a single engine to monitor competitive share of voice is watching a competition that does not exist in the same form on the platform their next HCP is using. The engine divergence extends further when you look at breadth: in lung cancer, Perplexity returns non-zero SOV for 29 distinct products; OpenAI's visible table caps at 10 (insight A3). In psoriasis: 19 visible on Perplexity, 10 on OpenAI. If your brand sits outside the top 10, your OpenAI monitoring tells you nothing about the share you are losing to the nine additional brands Perplexity surfaces in psoriasis that OpenAI never shows.
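To make the metric concrete, here is a minimal sketch of how share-of-voice from brand-token counts might be computed. This is a hypothetical scoring function for illustration, not the PharmaGEO index's actual methodology; real scoring would also need to handle molecule names and misspellings.

```python
from collections import Counter

def share_of_voice(answers, brands):
    """Share of total brand-token surface area each brand holds across
    a set of AI answers (hypothetical sketch, not the index's method)."""
    counts = Counter({b: 0 for b in brands})
    for answer in answers:
        text = answer.lower()
        for brand in brands:
            counts[brand] += text.count(brand.lower())
    total = sum(counts.values())
    return {b: round(100 * counts[b] / total, 1) if total else 0.0
            for b in brands}

answers = [
    "Skyrizi and Tremfya are both IL-23 inhibitors; Skyrizi is dosed quarterly.",
    "Guidelines discuss Tremfya and Skyrizi for moderate-to-severe plaque psoriasis.",
]
print(share_of_voice(answers, ["Skyrizi", "Tremfya"]))
# → {'Skyrizi': 60.0, 'Tremfya': 40.0}
```

Run against the same prompt set on two engines, the two result dictionaries are what the A4 flip compares.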
The 33-point brand gap: same TA, same week, different engine
The engine divergence is not limited to rank order. It extends to whether a brand is surfaced at all at competitive scale. In the atopic dermatitis therapeutic area, the May 2026 PharmaGEO public index records an Answer Rate (AR) for Adbry (lebrikizumab) of 41.4% on OpenAI, placing it at rank 4 in that engine. On Perplexity, the same brand's AR is 8.2%, placing it at rank 8 (insight A1). A 33.2 percentage point gap between two engines on the same brand, same TA, same week of measurement.
An AR of 41.4% means Adbry is named in four out of every ten OpenAI responses to atopic dermatitis queries. An AR of 8.2% means it appears in fewer than one in ten Perplexity responses. These are not equivalent competitive positions. A brand team optimising for OpenAI citation and assuming Perplexity mirrors the result is operating with a blindspot that covers more than 30 percentage points of competitive reality.
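Answer Rate is the simpler of the two metrics: the fraction of responses that name the brand at all. A minimal sketch, assuming exact-name matching (a real scorer would also match the molecule name, e.g. lebrikizumab, and common variants):

```python
def answer_rate(responses, brand):
    """Percentage of AI responses that mention the brand at least once.
    Simplified sketch: exact, case-insensitive name matching only."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return round(100 * hits / len(responses), 1)

sample = [
    "First-line biologics include Dupixent; Adbry is an IL-13 option.",
    "Dupixent remains widely prescribed in atopic dermatitis.",
    "Topical therapy is usually tried before systemic treatment.",
]
print(answer_rate(sample, "Adbry"))     # → 33.3
print(answer_rate(sample, "Dupixent"))  # → 66.7
```

The A1 gap is exactly this number computed twice: once over OpenAI responses, once over Perplexity responses, for the same query set.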
According to Digital Bloom's 2025 AI Citation and LLM Visibility Report, the domain overlap between ChatGPT and Perplexity citations is only 11%. The two engines are not drawing from the same source pool. A content strategy built to win on one will not automatically win on the other, and competitive monitoring on one will not predict the other.
The language layer: your competitive map reorders again in French
Cross-engine divergence is the first blindspot. The second is cross-language divergence. The May 2026 PharmaGEO public index includes English, French, and Spanish measurements for each therapeutic area, and the brand ranking in the French Top 10 is materially different from the English Top 10 in ways that matter competitively.
The clearest example: Ebglyss does not appear in the English atopic dermatitis Top 10 at all (English ranking ends at Olumiant at 10.3% AR). In the French Top 10, Ebglyss appears at rank 8 with a 20.0% AR (insight B3, May 2026 PharmaGEO public index). The molecule is lebrikizumab — the same active ingredient as Adbry in the US — but marketed under the Ebglyss brand name in the EU by a different company. AI engines treat Adbry and Ebglyss as distinct entities because the indexed content about them is in different languages and references different brand names, different approval pathways, and different clinical contexts. A competitive analysis built on English-language data has no visibility into the 20.0% AR that Ebglyss holds in French-language AI answers.
This is not a fringe case. The French and Spanish patient and HCP populations represent tens of millions of people whose AI answers about atopic dermatitis will include a brand that does not exist on the English competitive map at all.
Blindspot audit table: what to check, what to score, on which engine
The competitive blindspot problem requires a structured audit protocol that runs across engines and languages, not a single-platform monitoring subscription. The following table defines the minimum audit scope for a thorough competitive analysis in any major pharma therapeutic area.
| Audit dimension | What to score | Priority engine(s) | Blindspot risk if skipped |
|---|---|---|---|
| Disease-state landscape queries (unbranded) | Which brands are named, in what order, with what SOV share | All three: ChatGPT, Perplexity, Gemini | Missing competitor dominance in the query type most HCPs use for treatment decisions |
| Engine-level rank comparison (same query set) | SOV rank per brand per engine; identify flips of 2+ positions | ChatGPT vs Perplexity pair minimum | Assuming single-engine rank reflects cross-platform reality (A4 psoriasis flip: 4-rank swing) |
| Brand AR by engine (same brand, same TA) | AR gap between engines for target brand and top 3 competitors | OpenAI vs Perplexity pair | Invisible 33pp AR gap (A1: Adbry 41.4% OpenAI vs 8.2% Perplexity) |
| Tail brand visibility (brands 11+) | How many brands appear on Perplexity that do not appear on OpenAI top-10 | Perplexity (surfaces 19-29 brands vs OpenAI's 10) | Missing competitive dynamics in long-tail: 19 extra brands in lung cancer alone (A3) |
| French and Spanish-language queries | Top 10 ranking per language; flag brands absent from English list | OpenAI FR and ES | Missing EU-market brands (Ebglyss 20.0% FR AR, invisible in EN; B3) |
| Comparative framing queries (Brand A vs Brand B) | Which brand is framed as the reference point; sentiment direction | All three engines, both query orders | Undetected competitor framing advantage in head-to-head AI answers |
| Source citation overlap between engines | Which domains drive citations per engine; identify engine-exclusive sources | Perplexity (transparent citation layer) as baseline | Only 11% domain overlap between ChatGPT and Perplexity (Digital Bloom 2025); content that wins on one misses the other |
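The audit table above can be encoded as a checklist that a scoring script iterates over each quarter. The sketch below is one hypothetical encoding; the dimension names, metric labels, and engine identifiers are illustrative, not API names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditDimension:
    name: str
    metric: str
    engines: tuple  # engines this dimension must be scored on

# Hypothetical encoding of the audit table as iterable scope.
AUDIT_SCOPE = (
    AuditDimension("unbranded landscape queries", "SOV rank + share", ("chatgpt", "perplexity", "gemini")),
    AuditDimension("engine rank comparison", "rank flips of 2+ positions", ("chatgpt", "perplexity")),
    AuditDimension("brand AR by engine", "AR gap in percentage points", ("chatgpt", "perplexity")),
    AuditDimension("tail brand visibility", "brands ranked 11+", ("perplexity",)),
    AuditDimension("FR/ES language queries", "Top-10 delta vs English", ("chatgpt",)),
    AuditDimension("comparative framing", "reference-point brand + sentiment", ("chatgpt", "perplexity", "gemini")),
    AuditDimension("citation overlap", "shared citing domains, %", ("perplexity", "chatgpt")),
)

def engines_required(scope):
    """Minimum set of engines the full audit touches."""
    return {engine for dim in scope for engine in dim.engines}

print(sorted(engines_required(AUDIT_SCOPE)))
# → ['chatgpt', 'gemini', 'perplexity']
```

Encoding the scope as data rather than prose makes skipped dimensions auditable: any quarter's results file that lacks a row per dimension per engine is visibly incomplete.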
Why the Perplexity-OpenAI pair is the minimum viable audit scope
The 11% domain overlap between ChatGPT and Perplexity citations means a single-engine audit samples from a source pool that shares almost nothing with the other engine's. Running a single-engine audit is equivalent to measuring market share in one geography and assuming it represents another. The minimum viable audit scope is the ChatGPT (OpenAI) and Perplexity pair, run in English and at least one regional language (French or Spanish for EU-exposed brands). Adding Gemini as a third engine captures the recency-weighting dynamic and the concentration bias documented in A2: Gemini amplifies the category leader's SOV by 7–12 percentage points compared to OpenAI, systematically worsening the competitive position of non-leading brands on that engine.
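An overlap figure like the 11% can be reproduced from two engines' citation URL lists. A minimal sketch, assuming Jaccard-style overlap over normalised domains; Digital Bloom's exact formula is not published here, so treat this as one plausible reading:

```python
from urllib.parse import urlparse

def citation_domain_overlap(urls_a, urls_b):
    """Jaccard overlap (in %) between the domains two engines cite.
    Assumes simple www-stripped netloc normalisation."""
    def domains(urls):
        return {urlparse(u).netloc.removeprefix("www.") for u in urls}
    a, b = domains(urls_a), domains(urls_b)
    union = a | b
    return round(100 * len(a & b) / len(union), 1) if union else 0.0

chatgpt_cites = ["https://www.nejm.org/doi/1", "https://pubmed.ncbi.nlm.nih.gov/2"]
perplexity_cites = ["https://pubmed.ncbi.nlm.nih.gov/3", "https://www.uptodate.com/4"]
print(citation_domain_overlap(chatgpt_cites, perplexity_cites))  # → 33.3
```

The same function doubles as the scorer for the "source citation overlap" row of the audit table: a per-TA overlap well above your baseline suggests the two engines happen to share sources in that category, and single-engine monitoring is less dangerous there.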
The default brand problem and how competitive position amplifies
The most extreme version of the competitive blindspot is when one brand achieves what the data shows as category-default status — the brand named in the majority of class-level queries where no brand was specified in the question. The May 2026 concentration data shows how different this looks across TAs (insight C1, May 2026 PharmaGEO public index):
| Therapeutic area | Top-3 SOV combined | Leading brand SOV | Market structure |
|---|---|---|---|
| Obesity (OpenAI) | 53.9% | Wegovy 22.2% | GLP-1 duopoly with clear default |
| Atopic Dermatitis (OpenAI) | 45.8% | Dupixent 16.3% | Tight three-way cluster + tail |
| Psoriasis (OpenAI) | 28.2% | Cosentyx 10.4% | Crowded biologic field, fragmented |
| Lung Cancer (OpenAI) | 20.9% | Keytruda 7.6% | Highly fragmented, 26+ brands |
In obesity, the top three brands hold 53.9% of SOV, meaning the remainder compete for 46.1% distributed across seven or more brands. Gemini, which adds 12.1 percentage points to Wegovy's SOV versus OpenAI (insight A2), is an even more hostile environment for non-leading obesity brands. Once a brand achieves default status, that status compounds: higher SOV generates more third-party references, which generate more citations, which raise SOV further. The brands missing this dynamic from their monitoring are not failing to see a snapshot — they are failing to see a trajectory.
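The concentration figures in the table are straightforward arithmetic over per-brand SOV, which makes them easy to recompute each quarter. In the sketch below, Wegovy's 22.2% comes from the table above; the other brand shares are hypothetical stand-ins to illustrate the calculation.

```python
def concentration(sov, top_n=3):
    """Combined SOV of the top-N brands, and the share left for the tail."""
    ranked = sorted(sov.values(), reverse=True)
    top = round(sum(ranked[:top_n]), 1)
    return top, round(100 - top, 1)

# Wegovy's 22.2% is from the table; brand_2..brand_5 shares are hypothetical.
obesity_sov = {"Wegovy": 22.2, "brand_2": 17.0, "brand_3": 14.7,
               "brand_4": 9.1, "brand_5": 8.0}
print(concentration(obesity_sov))  # → (53.9, 46.1)
```

Tracking the top-3 number over time is the cheapest way to detect the compounding default-brand dynamic: a rising top-3 share with a stable leader means the flywheel described above is turning.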
The class-level attribution blindspot and its Perplexity-specific form
The final competitive blindspot worth detailed attention is class-level attribution, which takes a distinctive form on Perplexity because of that engine's generic-aware citation behaviour. Perplexity surfaces older, generic-era brands that OpenAI's top 10 omits. In atopic dermatitis, Perplexity includes Neoral (5.2% SOV) and Elocon (4.1% SOV) at positions 9 and 10 — drugs from the early 2000s with decades of literature depth (insight A3, May 2026 PharmaGEO public index). OpenAI's equivalent positions are occupied by newer JAK inhibitors.
This is not a quality-of-answer issue. It reflects a structural feature of Perplexity's retrieval: the engine follows the published literature, and older drugs have more published literature. For brands launching in categories with established generic alternatives, Perplexity competitive monitoring will reveal a competitor class (off-patent generics) that OpenAI monitoring makes largely invisible. A launch monitoring programme that only tracks branded competitors on ChatGPT will consistently underestimate the SOV that generic alternatives are accumulating on Perplexity, which is the engine increasingly favoured by medically sophisticated users who want citations with their answers.
Building the blindspot audit into quarterly workflow
The competitive blindspot problem does not resolve with a one-time audit. Digital Bloom 2025 documents 59.3% monthly volatility in the AI citation graph — more than half of what an engine cites in a given month differs from what it cited the month before. A quarterly cadence running a consistent prompt set across three engines in English plus at least one regional language, scoring brand AR, SOV rank, top-3 competitive SOV, language-layer brand presence, and source domain overlap, gives the signal required to direct content investment and syndication strategy. Brands that have built this capability are using it to make decisions that rivals relying on traditional monitoring cannot observe or counter.
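A volatility figure like the 59.3% can be tracked from period to period with a simple set comparison of cited domains. A minimal sketch, reading "monthly volatility" as the share of this month's cited domains that were absent last month; the report's exact definition may differ:

```python
def monthly_citation_volatility(last_month, this_month):
    """Share (%) of this month's cited domains that were not cited
    last month. One plausible reading of 'monthly volatility'."""
    if not this_month:
        return 0.0
    new_domains = this_month - last_month
    return round(100 * len(new_domains) / len(this_month), 1)

may_domains = {"nejm.org", "pubmed.ncbi.nlm.nih.gov", "medscape.com"}
june_domains = {"nejm.org", "uptodate.com", "drugs.com"}
print(monthly_citation_volatility(may_domains, june_domains))  # → 66.7
```

Running this between consecutive audit snapshots tells you whether your own TA's citation graph churns faster or slower than the 59.3% baseline, and therefore whether a quarterly cadence is sufficient for your category.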
Want a real audit on your brand? Request a sample report or get the full PharmaGEO Playbook.