Pharmacovigilance has always been about signal detection — finding the harm before it becomes a pattern. For decades the signal came from adverse event reports, yellow card submissions, and spontaneous notifications routed through regulated channels. In 2026, a new channel is generating safety signal at scale, without any submission form, without any pharmacovigilance team monitoring it, and without any pharmaceutical company designing it that way. That channel is the AI answer layer. Patients and clinicians are asking AI engines about drugs — weight-loss drugs in particular — and the engines are returning boxed warnings, contraindications, and prescribing restrictions sourced directly from FDA product labels. At the same time, those engines are surfacing T2D-indicated GLP-1 receptor agonists in response to obesity queries. The safety surface exists. The question is whether pharmacovigilance teams know they own it.

FDA labels are the dominant source in obesity AI answers

The May 2026 PharmaGEO public index tracked source citation patterns across Perplexity, OpenAI, and Gemini for obesity, atopic dermatitis, psoriasis, and lung cancer therapeutic areas. The obesity data produced the most striking safety signal: across all sources cited in Perplexity responses to obesity-related queries, the two highest-use domains were both FDA label pages on accessdata.fda.gov.

Rank | Source                         | Domain             | Citation uses | Content type
#1   | Zepbound FDA label             | accessdata.fda.gov | 36            | Full prescribing information (boxed warnings, REMS, contraindications)
#2   | Wegovy FDA label               | accessdata.fda.gov | 32            | Full prescribing information (boxed warnings, REMS, contraindications)
#3   | Wegovy prescribing information | novo-pi.com        | 26            | Manufacturer-hosted PI (mirrors label content)
#4   | NEJM clinical evidence         | nejm.org           | 24            | Peer-reviewed trials (efficacy + safety data)

Source: May 2026 PharmaGEO public index, Perplexity obesity TA, source citation tracking.

The practical implication of this table is significant. Every time a patient or caregiver asks Perplexity about obesity medications — dosing, eligibility, side effects, whether they can take a GLP-1 — the engine is pulling from documents that contain thyroid tumour warnings, pancreatitis risks, REMS enrolment requirements, and contraindications for personal or family history of medullary thyroid carcinoma. This is not a marketing interaction. This is safety communication, delivered at scale, to an uncontrolled patient population, mediated by an AI system that pharma teams did not design and do not operate.

Contrast this with other therapeutic areas in the same index. In oncology (lung cancer), the dominant sources are nccn.org clinical guidelines, which are guideline-mediated rather than label-mediated. In inflammatory dermatology, the top sources are society guidelines from the AAD and peer-reviewed literature. Obesity and metabolic disease stand apart: in these categories, the regulatory primary archetype — FDA labels, manufacturer prescribing information, EMA EPARs — is the dominant citation type, not clinical guidelines or journal articles. The AI answer in the obesity space is, structurally, a safety document delivery system.

T2D-indicated GLP-1s are surfacing on weight-loss queries

The source citation pattern is one pharmacovigilance concern. The off-label surfacing pattern is another, and it is arguably more immediately actionable for brand and regulatory teams.

The May 2026 PharmaGEO public index tracks share of voice (SOV) for brands in AI answers by therapeutic area. The obesity TA query set is designed around weight-loss intent: how to lose weight, GLP-1 medications for obesity, best weight management drugs, and similar. The approved weight-loss GLP-1s in the US — Wegovy (semaglutide) and Zepbound (tirzepatide) — hold the largest SOV positions. But they are not the only GLP-1s appearing in those responses.

Brand                  | Approved indication                            | Perplexity SOV in obesity TA | Off-label in obesity context?
Wegovy (semaglutide)   | Chronic weight management (obesity/overweight) | Leading SOV                  | No (on-label)
Zepbound (tirzepatide) | Chronic weight management (obesity/overweight) | Leading SOV                  | No (on-label)
Ozempic (semaglutide)  | Type 2 diabetes only                           | ~6%                          | Yes (off-label surface)
Mounjaro (tirzepatide) | Type 2 diabetes only                           | ~3%                          | Yes (off-label surface)

Source: May 2026 PharmaGEO public index, Perplexity obesity TA, share of voice tracking.

Ozempic and Mounjaro are the T2D-indicated versions of the same molecules that power Wegovy and Zepbound respectively. The AI engines are not applying the same indication boundaries that regulators and MLR teams apply. When a patient searches for weight-loss drug options, the engine connects the cultural familiarity of "Ozempic" to the query — the molecule is the same, the popular name is prominent, the engine makes the association — and the T2D-only brand surfaces in the answer. Each such appearance is, from a regulatory standpoint, an unprompted off-label mention in a direct-to-consumer context.

This is a systemic AI behaviour, not a fault specific to any brand or manufacturer. The engine does not have a regulatory category filter. It has a relevance filter, and same-molecule brands pass that filter easily. But the pharmacovigilance implications are real: a patient reading that Ozempic is a weight-loss option who then pursues it without a T2D diagnosis is a safety and compliance exposure that currently has no monitoring mechanism within most pharma safety functions.
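The off-label surfacing described above can be detected mechanically once brand, molecule, and indication mappings are captured. A minimal sketch of that check, assuming a hand-maintained registry; the `BRAND_REGISTRY` structure and `flag_off_label` function are illustrative, not an existing tool or API:

```python
# Hypothetical brand registry mirroring the table above. Indications are
# simplified to single keywords for illustration.
BRAND_REGISTRY = {
    "Wegovy":   {"molecule": "semaglutide", "indications": {"obesity"}},
    "Zepbound": {"molecule": "tirzepatide", "indications": {"obesity"}},
    "Ozempic":  {"molecule": "semaglutide", "indications": {"type 2 diabetes"}},
    "Mounjaro": {"molecule": "tirzepatide", "indications": {"type 2 diabetes"}},
}

def flag_off_label(brands_in_answer, query_indication):
    """Return brands cited in an AI answer whose approved indications
    do not include the indication implied by the query."""
    return [
        b for b in brands_in_answer
        if b in BRAND_REGISTRY
        and query_indication not in BRAND_REGISTRY[b]["indications"]
    ]

flags = flag_off_label(["Wegovy", "Ozempic", "Mounjaro"], "obesity")
# Ozempic and Mounjaro are flagged as off-label surfaces on an
# obesity-intent query; Wegovy passes as on-label
```

The registry is the only part that requires regulatory input; the check itself is a simple set-membership filter that can run against every captured answer in a weekly audit.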

Why the molecule-brand distinction breaks down in AI retrieval

The root mechanism is straightforward. AI retrieval engines build associations from the aggregate of internet content. Ozempic has accumulated an enormous web presence associated with weight loss — media coverage, patient discussions, social content — far exceeding its T2D-only FDA label status. The retrieval engine weights that association. The regulatory distinction between Ozempic and Wegovy is a human-legal construct that is not embedded in the engine's training signal.

The same phenomenon may emerge in any TA where same-molecule brands hold different indications across markets or where a brand has accumulated cultural association with a condition it is not approved for. GLP-1s are the clearest current example, but oncology brands approved in specific biomarker-defined subpopulations, or immunology biologics approved in some but not all autoimmune conditions, face analogous risks as AI health query volumes grow.

The scale at which this safety surface operates

The pharmacovigilance risk calculus changes when the channel operates at the scale that AI health queries now reach. According to Spectrum Science, 230 million health questions are asked of ChatGPT every week. That volume is not distributed evenly across conditions, but even a fraction of it directed at GLP-1 and obesity queries represents patient-scale safety exposure.

On the clinical side, OpenEvidence recorded one million clinical consultations with verified physicians in a single day in March 2026 — demonstrating that the AI answer layer is not a consumer-only phenomenon. Prescribers are using AI for clinical decision support at scale, and the safety information those engines return — including any off-label associations — influences clinical behaviour in a channel that has no pharmacovigilance monitoring equivalent.

Against that scale, the accuracy signal is concerning. A 2024 BMJ Open study classified 50% of medical chatbot answers as problematic — inaccurate or materially incomplete — when reviewed by clinical assessors. Half. At 230 million weekly health queries, even a much lower error rate on drug-safety-relevant information represents a pharmacovigilance surface that would trigger signal-detection review if it were a spontaneous report channel of equivalent volume.
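The scale claim can be made concrete with back-of-envelope arithmetic. Only the 230 million weekly total and the 50% problematic rate come from the cited sources; the 2% share of health queries directed at GLP-1 and obesity topics is an illustrative assumption:

```python
# Back-of-envelope exposure estimate using the figures cited above.
weekly_health_queries = 230_000_000   # Spectrum Science, ChatGPT weekly
glp1_share = 0.02                     # ASSUMED fraction, for illustration
problematic_rate = 0.50              # BMJ Open 2024 classification

glp1_queries = weekly_health_queries * glp1_share
problematic_glp1 = glp1_queries * problematic_rate
# ~4.6 million GLP-1-related queries per week under this assumption,
# of which ~2.3 million would return a problematic answer at the
# published error rate
```

Even if the true GLP-1 share is a tenth of the assumed figure, the weekly volume of problematic safety-relevant answers would still exceed the spontaneous report volume of many established pharmacovigilance channels.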

The gap between AI answer volume and pharmacovigilance monitoring

Traditional pharmacovigilance signal detection operates on reported data — adverse event submissions, yellow cards, regulatory filings. The AI answer layer generates no submission. When a patient receives incorrect safety information from an AI engine and acts on it, there is no automatic pharmacovigilance record unless an adverse event subsequently follows through a monitored channel. The upstream signal — the inaccurate AI answer that preceded the event — is invisible to the current monitoring infrastructure.

This is not a gap that pharmacovigilance teams created. It is a gap that the technology created. But the question for 2026 and beyond is whether pharma safety functions will choose to treat it as someone else's problem — or whether they will recognise that monitoring AI answer accuracy for their brands is within the reasonable scope of a modern pharmacovigilance function.

What the source archetype tells us about safety responsibility

The source typology from the May 2026 PharmaGEO public index shows that in the obesity TA, the dominant archetype is what we classify as "regulatory primary": FDA labels on accessdata.fda.gov, manufacturer prescribing information, and EMA EPARs. This archetype is also significant in other metabolic and safety-sensitive TAs. The regulatory primary archetype is distinct from the society guideline archetype (which dominates oncology) and the peer-reviewed literature archetype (which provides the floor across all TAs).

The practical consequence: in obesity and metabolic disease, the AI answer is largely an automated delivery mechanism for regulatory content that the pharma company's own regulatory affairs function created and filed. FDA labels, EPARs, and manufacturer PIs are documents that the pharmaceutical company authored, filed, and is responsible for keeping current. When those documents become the primary citation source in AI answers, the company's regulatory content is being delivered to patients at scale through a channel it did not choose and is not monitoring.

That is a pharmacovigilance responsibility, not a marketing one.

What brand and pharmacovigilance teams should do

The practical response to this situation does not require new regulatory frameworks or new approval processes. It requires existing functions — brand, medical affairs, regulatory, and pharmacovigilance — to coordinate on a monitoring and response protocol for the AI answer layer.

Monitor the safety section of AI answers weekly

The audit cadence already used to track share of voice (SOV) in AI answers should include a systematic review of the safety-relevant content those answers contain. For any GLP-1, oncology, or immunology brand with significant AI answer presence, the audit should specifically capture: whether boxed warnings are reproduced accurately; whether REMS language is correctly represented; whether contraindications are cited from the current label version; and whether any off-label indication associations appear. This is not a legal function — it is a surveillance function, and it belongs in the pharmacovigilance workflow alongside existing signal detection activities.
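The audit capture described above reduces to checking each captured answer against a set of required safety elements. A minimal sketch, assuming the team already stores AI answer text per query; the required phrases are illustrative stand-ins for the current label language, not verbatim label text:

```python
# Hypothetical checklist of required safety elements for a GLP-1 brand.
# In practice these would be maintained against the current label version.
REQUIRED_SAFETY_ELEMENTS = {
    "boxed_warning": "risk of thyroid c-cell tumors",
    "rems": "rems",
    "contraindication": "medullary thyroid carcinoma",
}

def audit_answer(answer_text):
    """Return the names of safety elements missing from a captured answer."""
    text = answer_text.lower()
    return [name for name, phrase in REQUIRED_SAFETY_ELEMENTS.items()
            if phrase not in text]

missing = audit_answer(
    "Zepbound carries a boxed warning for risk of thyroid C-cell tumors "
    "and is contraindicated with a history of medullary thyroid carcinoma."
)
# the REMS element is flagged as missing from this captured answer
```

Substring matching is deliberately crude here; a production protocol would want fuzzy matching against the current label text and version tracking, but the weekly workflow is the same shape: capture, check, record misses.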

Coordinate with PV on inaccuracy detection protocols

When a material inaccuracy in safety-relevant AI content is identified — a missing boxed warning, an incorrect dosing contraindication, an off-label brand appearance — the detection and response protocol should involve the pharmacovigilance function, not just the brand or digital team. The pharmacovigilance team has the expertise to assess clinical significance, the regulatory relationships to escalate if needed, and the documentation infrastructure to record the detection and response. A response coordinated only by the brand team is a missed opportunity to build the regulatory documentation trail that demonstrates diligence.
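The documentation trail that demonstrates diligence needs a consistent record shape. A minimal sketch of what such a detection record might capture; the field names and significance tiers are assumptions about a reasonable protocol, not an existing standard:

```python
# Hypothetical PV detection record for safety-relevant AI answer findings.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAnswerFinding:
    brand: str
    engine: str                  # e.g. "Perplexity"
    finding_type: str            # e.g. "missing_boxed_warning",
                                 #      "off_label_surface"
    detected_on: date
    clinical_significance: str   # assessed by PV, e.g. "material" / "minor"
    escalated_to_regulatory: bool = False
    notes: str = ""

finding = AIAnswerFinding(
    brand="Ozempic",
    engine="Perplexity",
    finding_type="off_label_surface",
    detected_on=date(2026, 5, 12),
    clinical_significance="material",
)
```

The point of the structure is that clinical significance and escalation status are PV-owned fields: a brand team can log the detection, but the record is incomplete until the pharmacovigilance function has assessed and, where warranted, escalated it.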

Align regulatory affairs on EPAR and label retrieval optimisation

The FDA labels and EMA EPARs that dominate obesity AI citations are regulatory documents authored by the company. Regulatory affairs teams can review these documents with an awareness that they are now primary AI retrieval sources and that their structure affects how accurately AI engines represent the safety content they contain. This does not mean changing label content — that is a regulatory and scientific decision — but it does mean that the way safety information is structured and presented in these documents will affect how AI engines reproduce it. Regulatory affairs, medical affairs, and GEO specialists should review label and EPAR content together with that awareness.

Never use adverse events as marketing copy in AI-optimised content

A specific anti-pattern to prevent: some brand teams have considered whether prominent safety language in their owned content — citing boxed warnings and side effects — might paradoxically improve AI citation rates, since the FDA label is already the top-cited source. This thinking should be explicitly rejected in any GEO content strategy. Adverse event information exists in label documents for regulatory and safety purposes. Structuring owned marketing content to surface safety language for citation-rate purposes is a compliance exposure and a patient safety risk, not a GEO optimisation strategy. The coordination between pharmacovigilance and brand teams on AI monitoring exists to protect patients and maintain regulatory integrity — not to reverse-engineer safety signals into promotional assets.

Establish a cross-functional AI safety working group

The four functions most relevant to the AI pharmacovigilance surface — brand, medical affairs, regulatory affairs, and pharmacovigilance — typically operate in separate workflows with limited overlap on digital monitoring. A standing working group with a defined scope (AI answer accuracy for safety-relevant content, off-label surfacing detection, documentation) would consolidate the monitoring that each function currently lacks individually. The working group does not need to produce new approved content. It needs to produce a monitoring protocol, a detection escalation pathway, and a documentation record. Those are achievable with existing resources in each function.

The pharmacovigilance framing the industry has not yet adopted

The pharmaceutical industry has, over the past eighteen months, developed significant sophistication about the AI answer layer as a marketing and visibility challenge. SOV tracking, GEO content strategies, and citation optimisation programs are now active at many major manufacturers. The safety dimension of the same channel has received far less attention.

Pharmacovigilance as a discipline was built on the principle that safety signal can emerge from any channel through which patients or clinicians receive information about drugs — and that the responsibility for monitoring those channels belongs to the manufacturer, not to the regulator alone. The AI answer layer is a new channel meeting that definition. It delivers drug safety information at a scale that would represent a significant spontaneous report volume if it were a traditional channel. It is currently unmonitored by the functions best equipped to evaluate its safety implications.

The May 2026 data makes the case straightforwardly: FDA labels as the #1 and #2 sources in obesity AI answers; T2D-only GLP-1 brands surfacing on weight-loss queries; 230 million weekly health AI queries at population scale; and 50% of AI medical answers classified as problematic by clinical reviewers. This is not a future risk. It is a present-state pharmacovigilance surface that exists whether or not the teams responsible for drug safety have chosen to monitor it.

Want a real audit on your brand? Request a sample report or get the full PharmaGEO Playbook.