Step to English

In progress

Step to English: building AI-search visibility for an EFL brand in Kazakhstan

Step to English is an English-language school operating in Kazakhstan and the wider CIS region. They had a competitive product and a working website, but were invisible to ChatGPT, Perplexity, Gemini, and Google AI Overviews when prospective students asked for English-school recommendations. This is what we've done in the first 30 days — written while the engagement is still active.

GEO Foundations · Technical SEO · Entity disambiguation · Schema markup · Wikidata + Wikipedia

At a glance · Before / after

  • Citable Checker score: 40 / 100 (grade C). Baseline documented; re-test scheduled day 45.
  • Share of Answer · ChatGPT: 0 / 50 prompts. Baseline documented; re-test scheduled day 45.
  • Share of Answer · Perplexity: 0 / 50 prompts. Baseline documented; re-test scheduled day 45.

Story overview

Citable started a GEO Foundations engagement with Step to English on 13 April 2026. Day-zero diagnostics: AI Visibility Checker scored 40/100 (grade C), and the 50-prompt Share of Answer set returned 0 citations across ChatGPT, Perplexity, Gemini, and Google AI Overviews. This case study documents what we've shipped through week 4 and what we're measuring next. We will update it monthly until the engagement closes.

About Step to English

Step to English is an English-language school based in Almaty, Kazakhstan, serving students across Kazakhstan and the wider CIS region. They run in-person classes in Almaty and a fully online programme for students elsewhere in the country and across Central Asia. The site is on Next.js, hosted on Vercel, with Russian and Kazakh as the primary content languages and English progressively added.

The brand was launched with the right instincts on traditional SEO — the Russian-language pages rank in the top organic positions for the queries that matter in their category. The problem we were called in to solve was different.

The background — what is happening to “best [X] in Almaty” queries

In Kazakhstan, English-learning demand is growing rapidly, and a meaningful share of the buyer-research journey has moved into AI assistants. Parents and adult learners type prompts like “какая лучшая школа английского в Алматы” (“which English school in Almaty is best”), “best English school in Kazakhstan”, or “online English with native teachers in CIS” — and they take the names that ChatGPT, Perplexity, Gemini, and Yandex Neuro give them. The category was being shaped by AI answers that consistently named two to four incumbent brands, none of which was Step to English.

This is the same pattern we’ve seen in every market we work in: the AI layer compounds advantage to whichever brand was first to become citable, regardless of who has the better product on the ground.

The challenge

We ran the AI Visibility Audit at engagement kickoff. The 50-prompt Share of Answer set was assembled from real student-research queries (Russian, Kazakh, and English, weighted by language share). The baseline:

  • Share of Answer: 0% on all four major AI surfaces. Step to English never appeared in any answer for any of the 50 prompts.
  • Citable Checker score: 40 / 100, grade C. The site had clean speed and clean Organization schema, but everything else flagged: no llms.txt, no Wikipedia article, no Wikidata entity, no sameAs chain, and no explicit access for AI crawlers at the robots.txt layer.
  • Entity confusion. “Step to English” returns multiple unrelated entities — language apps, learning blogs, generic phrases. The model had no way to disambiguate which “Step to English” the user meant.
  • Schema gaps. Organization schema existed but no Course, no FAQPage, no LocalBusiness, no Service. The pricing and curriculum information that AI models extract for “best English school” answers was not structured to be extractable.

The diagnosis was clear: this was a structural visibility problem, not a content or authority problem. The product is competitive; the brand simply was not in a form an AI model can quote.
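The Share of Answer metric used throughout this study reduces to a simple tally: what fraction of answers in the prompt set name the brand. A minimal sketch, with illustrative names and data rather than Citable's internal tooling:

```python
# Minimal sketch of a Share of Answer tally, not Citable's internal
# tooling. Each string is the answer one AI surface returned for one
# prompt in the 50-prompt set.
def share_of_answer(answers: list[str], brand: str) -> float:
    """Fraction of answers that mention the brand, case-insensitively."""
    if not answers:
        return 0.0
    hits = sum(1 for a in answers if brand.lower() in a.lower())
    return hits / len(answers)

# Day-zero baseline: none of the 50 answers named the brand.
baseline = share_of_answer(["Top picks: Brand A, Brand B."] * 50,
                           "Step to English")
print(f"Share of Answer: {baseline:.0%}")  # 0%
```

Run per surface, this yields the 0 / 50 baselines reported above and the comparable day-45 and day-90 numbers.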

What we did in the first 30 days

The engagement is GEO Foundations, structured as a 90-day programme. The first 30 days are the structural-fix phase. Here is exactly what shipped.

Week 1 — Crawler access + day-zero measurement

Day one of any GEO engagement: confirm the AI crawlers can actually read the site. Step to English had a permissive robots.txt for Google but no explicit allow for ChatGPT-User, PerplexityBot, ClaudeBot, or Google-Extended. Without that, the site was invisible to the retrieval layer of three of the four target surfaces.

We shipped an updated robots.txt with explicit allows for all four AI bots and verified each bot could fetch a representative sample of pages. We also documented the day-zero Share of Answer baseline (0 / 50 across four models) so the delta at day 90 has a clean comparison.
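The shape of such a robots.txt is straightforward: one group per AI crawler with an explicit allow, plus the existing catch-all. The sketch below is illustrative; the live file may carry additional directives.

```text
# Explicit allows for the AI retrieval crawlers (illustrative sketch).
# Each bot gets its own group so access is unambiguous.
User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /
```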

Week 2 — Schema coverage from Organization to Course

The Next.js stack made this clean. We added structured schema in this order:

  • LocalBusiness with full address, opening hours, contact, and sameAs referencing the brand’s social profiles. This is the schema AI models lean on for “best [X] in [city]” prompts.
  • Course schema on every programme page, including duration, level (CEFR-aligned), delivery mode (online vs in-person), instructor, and price. Course schema is genuinely undersupplied in the EFL category in the region — most competitors don’t use it.
  • FAQPage on the homepage, programme pages, and the “About” page. The questions were sourced from the prompt set itself, so the model can quote our answers directly when those prompts run.
  • BreadcrumbList sitewide to help the model reconstruct the site taxonomy.

All schema validated through Google Rich Results, Schema.org validator, and Perplexity’s structured-data parse (the latter via manual prompt-and-cite testing).
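The Course markup described above can be sketched as JSON-LD like the following. All values — course name, workload, level, price — are illustrative placeholders, not Step to English's actual curriculum or pricing.

```json
{
  "@context": "https://schema.org",
  "@type": "Course",
  "name": "General English B1 (placeholder)",
  "description": "CEFR B1 general English, in-person in Almaty or online.",
  "provider": { "@type": "Organization", "name": "Step to English" },
  "educationalLevel": "CEFR B1",
  "hasCourseInstance": {
    "@type": "CourseInstance",
    "courseMode": "Onsite",
    "courseWorkload": "PT6H"
  },
  "offers": {
    "@type": "Offer",
    "price": "45000",
    "priceCurrency": "KZT"
  }
}
```

The `hasCourseInstance` and `offers` blocks are what carry the delivery-mode and pricing details AI models extract for “best English school” answers.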

Week 3 — Entity disambiguation: Wikidata + Wikipedia

This is the slow lever, and the one that compounds longest. Step to English did not exist in Wikidata at all. Without a Wikidata entity, the brand stays effectively invisible to anything that builds on Google’s Knowledge Graph — which means Gemini, AI Overviews, and any model that ingests Wikidata directly.

We created the Wikidata entity, sourced it properly (registration documents, news mentions, founding details), and added the sameAs chain pointing to social profiles, the brand site, and the LinkedIn company page. Wikipedia is in flight — drafting a notability-compliant article with Russian and English versions, sourced to independent media coverage. Wikipedia is the longer arc; we expect it to land between week 6 and week 10.
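A sameAs chain of the kind described lives in the site's Organization markup and cross-links the domain, the Wikidata entity, and the social profiles so each source confirms the others. In the sketch below, the Wikidata ID and profile URLs are placeholders, not the brand's real identifiers.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Step to English",
  "url": "https://steptoenglish.kz",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/company/placeholder",
    "https://www.instagram.com/placeholder"
  ]
}
```

The Wikidata entity carries the reciprocal links back, which is what makes the chain a disambiguation signal rather than a one-way claim.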

Week 4 — llms.txt as editorial map

We shipped /llms.txt at the root. The file is in English, structured in H2 sections matching the buyer-research arc: “Programmes”, “Locations”, “Pricing”, “Teachers”, “Frequently asked questions”. Each entry is a 40–80 word description of the linked page, written to be quotable. This is the layer that accelerates extraction on ChatGPT-User and PerplexityBot — both of which now read llms.txt as part of their content-graph build.
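The llms.txt convention is an H1 with the brand name, a blockquote summary, then H2 sections of annotated links. The excerpt below shows the shape; the path and description are placeholders, not the live file.

```markdown
# Step to English
> English-language school in Almaty, Kazakhstan: in-person classes and a
> fully online programme for students across Kazakhstan and the CIS.

## Programmes
- [General English](https://steptoenglish.kz/programmes/placeholder): CEFR-aligned
  general English from A1 to C1, taught in small groups in Almaty or live online.
  Each level runs 12 weeks with a placement test on entry. (Illustrative entry;
  the live file carries a 40–80 word description per page.)
```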

What we are measuring next

The structural work is shipped. The first re-measurement runs at day 45 (28 May 2026): same 50-prompt set, same four surfaces, same methodology. We expect first movement on Perplexity earliest (typically 30 days after the work lands), then AI Overviews, then Gemini, then ChatGPT last (60–90 days). The 90-day re-measurement is where the real delta becomes visible.

We will update this case study with the day-45 and day-90 numbers as they land. No projections. No commitments. The measurement is the report.

Why this case matters

Three things make this engagement instructive beyond the client itself:

  1. It is not anonymized. Most GEO case studies in 2026 are. Step to English agreed to a public engagement because the methodology is the value, not the secrecy. You can verify the baseline yourself by running our AI Visibility Checker against steptoenglish.kz today, and again at day 45 and day 90.
  2. The market is non-English. A lot of the GEO playbook is written for US/UK B2B SaaS. Step to English is a Russian-and-Kazakh-language brand in Central Asia. The same structural levers apply, but the prompt-set construction, the Wikipedia notability bar, and the sameAs chains all need local context to work.
  3. It is a small business. GEO is not only for enterprise. The structural fixes shipped here — schema, entity, llms.txt, crawler access — are inside the reach of a single-developer-team operation. The work compounds quietly for months after it lands.

If you are a small or mid-market business in a non-English market wondering whether GEO works for you, the answer is yes — and this case study is the live evidence as it accumulates.

“We were ranking on Google in Russian and Kazakh, but when a parent asked ChatGPT for the best English school in Almaty, we did not exist. That is the gap Citable came to close.”

— Step to English

Marketing lead, Almaty

More numbers

  • Wikidata entity: did not exist → created, sourced, live (week 3)
  • Schema coverage: Organization only → Organization + Course + FAQPage + LocalBusiness (week 2)
  • Crawler access (AI bots): robots.txt allowed Google only → ChatGPT-User · PerplexityBot · ClaudeBot · Google-Extended allowed (week 1)

Ready to be cited by AI?

Two paths in. The free check tells you where you stand in 10 seconds. The paid audit tells you exactly what to fix, with a baseline you can measure forward from.

Run the free check · Book the audit (€1,200)

Prefer to talk first? Get in touch