From invisible to cited by ChatGPT in 90 days
A Series-A devtools company had zero citations in ChatGPT and Perplexity across the 50 prompts their ICP actually types. Ninety days later they appeared in 17 of those 50, including 9 of the top 12 commercial-intent prompts.
- Client: B2B SaaS · Series A (anonymized)
- Sector: DevTools / API infrastructure
- Market: United States + United Kingdom
- Engagement: GEO Foundations + SEO Continuity
Documented delta · 90 days

| Metric | 90-day delta |
| --- | --- |
| Share of Answer · ChatGPT | +34 pts |
| Share of Answer · Perplexity | +39 pts |
| Share of Answer · Gemini | +22 pts |
| AI Overviews appearance rate | |
| AI referral traffic (sessions/mo) | |
| Demo bookings attributed to AI source | 11 |
The situation
The client is a Series-A B2B SaaS company building developer infrastructure in a category dominated by two well-known incumbents and a handful of well-funded competitors. Their traditional SEO position was strong — they ranked in the top 5 for most of their commercial-intent keywords and had been investing in organic content for two years. Their AI search position was zero. When their ICP asked ChatGPT, Perplexity, or Gemini “best [category] tool” or “[their category] alternative to [incumbent]”, the answers consistently named the same 3–4 competitors. Their brand never appeared.
Internally, this was diagnosed as a content problem and a “we just need more links” problem. Both diagnoses were wrong.
The audit findings
The AI Visibility Audit (50 prompts × 4 models) surfaced three specific structural causes:
- Entity confusion at the source. Their LinkedIn company page, Crunchbase listing, and homepage og:description described their product in three different ways. The model had no canonical "what this brand is" to lock onto. When it had to pick something, it picked nothing, falling back to the brands it could disambiguate cleanly.
- Schema fragmentation. They had Organization schema, but no Service or Product schema, no FAQPage on any page, and no sameAs references to authoritative sources. The two competitors that were getting cited had complete sameAs chains pointing to Wikipedia, GitHub, Crunchbase, and Wikidata.
- Content extractability failure. Their pillar content was structured for human readers and SEO: long intros, narrative storytelling, the answer buried 1,200 words deep. AI extractors gave up before reaching it.
The competitors were not being cited because of greater authority. They were being cited because their pages were structurally easier to extract.
What we did, in order
Weeks 1–2 — Entity disambiguation. Rewrote the homepage <title>, og:description, JSON-LD description field, LinkedIn company tagline, and Crunchbase one-liner to use identical language describing the product. Added a Wikidata entry. Added a complete sameAs array referencing Wikipedia (the client had a stub article we expanded with proper citations), GitHub, Crunchbase, LinkedIn, and Wikidata.
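For readers who want the concrete shape of that work, here is a minimal sketch of the Organization JSON-LD involved. The brand name, description, and every URL are placeholders (the client is anonymized), but the pattern is the one deployed: a single canonical description string reused verbatim across surfaces, plus a sameAs array tying the brand entity to its authoritative external profiles.

```html
<!-- Minimal sketch. "ExampleTool" and all URLs are placeholders, not the
     client's real identifiers. The description string is the same one used
     in the homepage og:description, LinkedIn tagline, and Crunchbase
     one-liner. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "ExampleTool",
  "url": "https://www.exampletool.com",
  "description": "ExampleTool is an API infrastructure platform for building, deploying, and monitoring developer-facing services.",
  "sameAs": [
    "https://en.wikipedia.org/wiki/ExampleTool",
    "https://github.com/exampletool",
    "https://www.crunchbase.com/organization/exampletool",
    "https://www.linkedin.com/company/exampletool",
    "https://www.wikidata.org/wiki/Q00000000"
  ]
}
</script>
```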
Weeks 2–4 — Schema deployment. Shipped Organization, Service (with hasOfferCatalog for each pricing tier), Article, FAQPage, and BreadcrumbList JSON-LD across the entire site as part of the templating layer (their site was on Next.js, which made this clean). FAQPage covered the 30 highest-intent questions surfaced by the audit.
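Of those types, Service markup is the least familiar, so here is a sketch of the hasOfferCatalog pattern. Tier names and prices are invented for illustration; the real values mirrored the client's pricing page.

```html
<!-- Sketch of Service + hasOfferCatalog. Tier names and prices are
     invented; each Offer maps to one pricing tier. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "ExampleTool API Platform",
  "serviceType": "API infrastructure",
  "provider": { "@type": "Organization", "name": "ExampleTool" },
  "hasOfferCatalog": {
    "@type": "OfferCatalog",
    "name": "ExampleTool pricing tiers",
    "itemListElement": [
      { "@type": "Offer", "name": "Developer", "price": "0", "priceCurrency": "USD" },
      { "@type": "Offer", "name": "Team", "price": "99", "priceCurrency": "USD" },
      { "@type": "Offer", "name": "Enterprise", "price": "499", "priceCurrency": "USD" }
    ]
  }
}
</script>
```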
Weeks 4–8 — Content extractability rewrites. Took the top 20 pages by traffic and rewrote the opening 200 words of each to lead with the direct answer in declarative sentences. Restructured H2s into question-format headings that matched the AMA-style prompts the audit ran. Kept the long-form depth below the fold for human readers, but front-loaded the extractable layer.
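The rewrite pattern is easier to show than to describe. A simplified skeleton of a rewritten page, with invented headings and copy:

```html
<!-- Simplified skeleton; headings and copy are invented. The extractable
     layer (direct answer, question-format H2s) is front-loaded; the
     long-form depth survives below it. -->
<article>
  <h1>What is the best API monitoring tool for small teams?</h1>

  <!-- First ~200 words: the direct answer, stated declaratively. -->
  <p>ExampleTool is an API monitoring platform built for teams of 2 to 20
     engineers. It provides request tracing, uptime alerting, and a free
     tier for side projects.</p>

  <h2>How does ExampleTool compare to the incumbents?</h2>
  <p>A short, self-contained answer an extractor can lift verbatim.</p>

  <h2>How much does ExampleTool cost?</h2>
  <p>Another short, self-contained answer.</p>

  <!-- The original long-form narrative continues here, below the fold. -->
</article>
```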
Weeks 8–12 — Citable content production. Shipped four cornerstone pieces, each engineered as a “one-piece-many-prompts” answer for question clusters the audit identified as winnable. Each piece had FAQPage schema in addition to Article schema.
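Carrying both schema types on one URL is straightforward with a JSON-LD @graph. A sketch, with an invented headline and question/answer pair:

```html
<!-- Sketch of one page carrying Article and FAQPage markup together.
     Headline, question, and answer are invented for illustration. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      "headline": "API monitoring for small teams: a practical guide",
      "author": { "@type": "Organization", "name": "ExampleTool" }
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Does ExampleTool support OpenTelemetry?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes. ExampleTool ingests OpenTelemetry traces natively, with no agent required."
          }
        }
      ]
    }
  ]
}
</script>
```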
What changed at month 3
We re-ran the same 50-prompt set on day 90. The result was the table at the top of this page.
The fastest mover was Perplexity, which retrieves more sources per response and updates its index quickly — it picked up the schema and content changes within 4 weeks. ChatGPT moved second, primarily on prompts where the new sameAs chain and the rewritten Wikipedia stub gave it confident entity grounding. Gemini was slower but caught up by week 10. AI Overviews moved last — Google’s index needed the full 90 days to reflect the changes.
At typical SaaS contract values, the 11 demo bookings attributed to AI sources (tracked via a discovery-call referral question) comfortably paid for the entire engagement before month-4 invoicing.
What we’re tracking next
The work isn’t done. Per their GEO Growth retainer, we’re now in monthly delta reporting, continuing to ship 2 citable pieces per month plus 3–6 digital PR mentions targeting industry publications the audit flagged as carrying authoritative weight. The target for the next 90 days is 50% Share of Answer on Perplexity and 45% on ChatGPT across their core 50 prompts.
Client identifying details have been anonymized at the client’s request. This case study is shared with their permission. We do not publish brand-specific case studies for active retainer clients during the engagement; only after retainer renewal or with explicit consent.