- How has AI changed SEO over the years?
- Ranking vs appearing in AI
- Traditional SEO
- AI SEO
- How is AI Search changing traditional SEO KPIs?
- What are the traditional SEO KPIs?
- Why are traditional SEO KPIs not relevant anymore?
- What KPIs should I be tracking for AI Search?
- What free tools can I use to track AI Search visibility?
- AI visibility vs SEO fundamentals – the misconceptions
- The strategic reality for SMEs
If I’m honest, the past 2 years in SEO have been the most interesting and volatile I’ve seen in my career.
Not because SEO is “dead”. But because the rules of visibility have shifted, significantly. Not for better or worse, just evolved…
I’ve seen clients panic over traffic drops that didn’t affect leads.
I’ve seen brands rank #1 and still not appear in AI summaries.
I’ve seen websites gain a fair number of impressions in Search Console while clicks failed to follow.
AI has reshaped what success looks like.
The “old way” of doing SEO was relatively straightforward: you picked a keyword, built a page, and climbed a list of ten blue links.
Now, Google AI Overviews summarise answers instantly. AI Mode walks users through conversations. LLMs like ChatGPT and Perplexity are increasingly the first stop for research. In many searches, the “winner” isn’t the top-ranking website anymore. It’s the brand AI chooses to mention.
According to a study, 37% of consumers begin their searches with AI tools rather than traditional search engines, and that share has steadily expanded beyond informational queries into commercial and transactional territory.
This is a massive jump considering that in 2024, all AI-powered search tools combined made up less than 2% of the market.
So, if you’re an SME, that creates a very real question:
Where should you actually focus?
In this guide, I’ll break down what’s happening in SEO (in plain English), what’s changed, what hasn’t, and what to prioritise first, so you can make smarter decisions, measure what actually matters, and choose the right partner to support you properly along the way.
How has AI changed SEO over the years?
It’s easy to think AI in search began in 2023 or 2024. It didn’t. Google has been using and evolving machine learning in its ranking systems for years. Here’s an overview; I’ll go into more detail below as well.
| Phase | Year | What Changed | What Google/AI Could Do | What It Meant for SEO |
| --- | --- | --- | --- | --- |
| RankBrain | 2015 | Machine learning integrated into rankings | Interpret new/unseen queries (15% of searches daily were brand new) | Shift from exact keywords to search intent and topic relevance |
| Medic Update | 2018 | Authority & trust reassessed | Evaluate expertise, credibility, and domain-level trust (especially in YMYL sectors such as finance and healthcare) | EEAT became critical. Author bios, credentials, and brand reputation mattered more |
| BERT | 2019 | Natural language processing | Understand context within sentences, especially conversational and long-tail queries | Content had to sound natural. Keyword stuffing became ineffective |
| Featured Snippets & Helpful Content Era | 2019–2022 | Focus on extractable, structured answers | Surface concise answers directly in SERPs | Clear structure, headings, and answer-first formatting became important |
| MUM (Multitask Unified Model) | 2021 | Multilingual & multimodal AI | Understand text + images across 75+ languages and connect related queries | Topical authority and content clusters became more important than isolated pages |
| LaMDA | 2022 | Conversational AI | Engage in free-flowing, context-aware dialogue | Search behaviour began shifting toward conversational journeys |
| Generative AI / SGE / AI Overviews | 2023–2025 | AI-generated summaries in SERPs | Summarise multiple sources into a single answer above blue links | Click behaviour changed. Ranking alone no longer guaranteed traffic |
| LLM Ecosystem (ChatGPT, Gemini, Perplexity, Copilot etc) | 2023–Present | AI as discovery layer beyond Google | Retrieve, summarise, compare, recommend brands across the web | SEO expanded beyond ranking. Now it’s all about citations, extractability and authority building |
Google RankBrain (2015)
In 2015, Google introduced RankBrain, a machine learning (AI) algorithm that helps it understand search queries and deliver relevant results.
At the time, Google revealed that around 15% of daily searches were completely new. RankBrain was designed to interpret those unfamiliar searches by understanding patterns, relationships, and intent, rather than just matching keywords.
Before RankBrain, SEO leaned heavily on:
- Exact keyword matching
- Repetition of target phrases
After RankBrain, Google got better at understanding:
- Context
- Synonyms
- User intent (read about search intent for B2B here)
- The relationship between words
In simple terms, Google stopped looking for pages that matched words or keywords, and started looking for pages that matched meaning.
SEOs couldn’t just optimise for one keyword variation anymore. You had to understand:
- What the user was actually trying to achieve
- The broader reason behind a query
- The context surrounding a search
Example
If someone searched for “how to cook tofu so it’s crispy”, Google would scan its index for pages containing the words “cook,” “tofu,” and “crispy.”
That could lead to results like:
- A generic “What is tofu?” article
- A tofu nutrition page
- A recipe that happens to mention tofu and crispy in passing
- A page stuffed with the exact phrase but offering little practical advice
The words matched. The meaning didn’t always.
After RankBrain, Google became better at understanding intent rather than just terms. So if someone searched: “how to cook tofu so it’s crispy”
Google understands that the user likely wants:
- Techniques for pressing tofu
- Pan-frying tips
- Baking methods
- Ways to remove moisture
- Advice on texture
It can connect that query to pages about:
- “How to get crispy tofu”
- “Best way to press tofu”
- “Tofu cooking mistakes to avoid”
Even if the page doesn’t use the exact phrasing of the query.
And that change forced SEO to evolve from keyword placement to genuine intent alignment.
It was the first clear sign that SEO was moving away from mechanical optimisation and toward something more strategic: aligning with user intent at scale. And that evolution hasn’t stopped since.
Google BERT (2019)
Then came BERT in 2019, a natural language processing breakthrough that helped Google understand the meaning of words in context, especially in longer, conversational queries.
Basically, RankBrain helped Google understand new queries, whereas BERT helped Google understand context.
Here’s before and after:

This meant further evolution of the AI systems in search, helping Google to understand search intent with one goal: to share high-quality results.
Content couldn’t rely on keyword repetition anymore. Pages had to genuinely address intent and structure, and clarity became important.
For SEO, this was a subtle but critical shift that laid the groundwork for everything that followed.
Quality systems & helpful content updates (2018–2022)
Between 2018 and 2022, Google’s AI systems matured. If RankBrain and BERT improved understanding, this wave of updates reassessed overall content quality and let it shape rankings.
As an example, the 2018 “Medic” update heavily impacted health and finance sites lacking strong authority signals.
For example, a generic blog offering financial advice without visible credentials could suddenly lose visibility to a regulated institution or recognised publisher.
Google’s Helpful Content Update, which launched in August 2022, focused on prioritising original and quality content over low-quality blogs. It evaluated websites as a whole to reward content that provided high expertise, while penalising sites with excessive, unoriginal, or irrelevant, unhelpful content.
During the same time period, page speed became a ranking factor, pushing businesses to prioritise performance and technical foundations. Featured snippets also became more prominent, rewarding well-structured, concise answers over buried information.
It wasn’t about keywords. Google began looking at:
- Author expertise
- Brand reputation
- External references
- Transparency
- Updated information
- Well-structured content
- User experience
This, if you think about it, sounds very similar to what AI summaries require today.
For the geeks out there, here is a detailed list of Google algorithm launches, updates, and refreshes that have rolled out over the years.
Google MUM (2021)
Then came Google MUM, the most powerful of them all.
MUM could understand information across 75 different languages, allowing it to develop a more comprehensive understanding of a topic than previous models.
It could also understand text and images, with plans, at that time, to move to video and audio.

MUM also allowed Google to analyse queries and how they relate, rewarding sites that build comprehensive ‘topic clusters’ – a continuation of the direction set by Google’s Medic and Helpful Content updates. This meant that topical authority started to matter more than isolated pages. A single optimised blog post wasn’t enough. Google increasingly evaluated:
- Content depth across a topic
- Internal linking structure
- Consistency of coverage
- Domain-level authority
Google LaMDA (2022)
The last piece of the puzzle – conversation.
LaMDA or “Language Model for Dialogue Applications” could engage in a free-flowing conversation and be specific to context.
Essentially, this was the evolution of Google’s multi-modal models that understand information across text, images, audio and video. This was the beginning of AI Search as we know it today.

Search Engine Land predicted this could shift behaviour dramatically.
“Conversational dialogue between users and Google may enable them to search for information or products in ways that are currently impossible. If it works and is widely adopted (and that’s a big “if” at this point), we may see a shift in search behavior.”
And it did.
Generative AI (2023–2025)
For the first time, AI wasn’t just influencing rankings; it was generating answers directly in search, helping users understand a topic faster, uncover new insights, and get things done more easily.
The idea was that users could ask one question and receive a relevant, high-quality answer. Previously, they had to break that question down into smaller ones, sort through the vast information available, and piece things together themselves.
This marked a turning point:
- AI Overviews began replacing some informational clicks
- Search became conversational
- Summaries appeared above organic listings
By 2025, AI Overviews were appearing for nearly 16% of keywords, expanding into commercial territory. Here is an example.

Anna Morrish, Quibble’s Founder, has covered this in detail in her Brighton SEO talk >
LLM ecosystems (2023 – Present)
According to Microsoft’s AI Marketers Guide, LLMs learn by absorbing vast amounts of data, including large portions of the public internet across various inputs such as text, images, videos, audio and more.
Essentially, regardless of how you query them – voice or typed – LLMs are capable of understanding your intent and providing a relevant answer.
Here is an example from Gemini and, surprisingly or not, I’ll be going to Pancake House. I don’t need to search organically now – I can book directly via the app.
Note how it differs from AI Overviews above. I’ll cover how they differ later on in this guide.

And this is the biggest shift. Search is no longer confined to Google.
Users are discovering brands via:
- ChatGPT
- Gemini
- Perplexity
- AI copilots embedded into browsers and apps
These systems don’t rank ten links. When surfacing results, AI systems typically build their responses in stages:
- AI starts with what it already “knows” and learns over time. That baseline knowledge helps it identify what type of product or service you’re talking about and what usually matters in that category.
- Next, it cross-checks itself using authoritative web sources in real time. It pulls from trusted, indexed content to validate details, reinforce credibility, and understand how brands and products are described across third-party references.
- Finally, it sharpens the answer using structured first-party data (like product feeds). That’s what gives it the precise, up-to-date details needed for comparisons and recommendations such as price, availability, and key specs.
This expands SEO beyond traditional rankings.
It becomes about:
- Being referenced
- Being trusted
- Being structurally clear
- Being contextually strong
That’s why McKinsey calls AI ‘the new front door’.
Ranking vs appearing in AI
Traditional SEO
At its core, according to Google’s SEO Starter Guide, strong foundational SEO revolves around four key areas:
Crawlability & Technical Access
Search engines need to discover and access your pages.
This includes:
- Clear site structure
- Logical internal linking
- Mobile-friendly design
- Fast-loading pages
If Google can’t crawl it properly, it can’t rank it.
Learn more about crawling and indexing >
Learn more about internal linking here >
Clear Page-Level Signals
Each page should clearly communicate what it’s about.
That means:
- Descriptive title tags
- Structured headings (H1, H2, etc.)
- Relevant, focused content
- Alt text for images
Traditional SEO has always relied on these signals to help Google understand topic relevance. Learn more about on-page SEO >
Helpful, People-First Content
Google’s guidance consistently emphasises creating content for users, not search engines.
This includes:
- Answering real questions
- Providing original value
- Avoiding keyword stuffing
- Demonstrating expertise
- Keeping content accurate and up to date
Ranking has increasingly depended on quality and usefulness, not just optimisation. Here’s Google’s guide to creating helpful, reliable and people-first content >
Authority & Trust
Google evaluates whether a site is trustworthy and credible.
This is influenced by:
- Backlinks from relevant websites
- Brand reputation
- Clear authorship
- Transparent business information
- Strong overall site quality
Traditional SEO has long rewarded sites that are authoritative within their topic area. This is where Google EEAT (experience, expertise, authoritativeness, and trustworthiness) comes into play.
After identifying relevant content, Google systems ‘prioritise those that seem most helpful.’ This is when they look at your site to determine whether you provide the relevant information based on your experience and expertise in the industry.
Find out more about Google EEAT guidelines here >
AI SEO
In terms of AI visibility, AI systems don’t just rank pages. They summarise and decide which sources are credible enough to reference. As a result, the aim is to:
- Be cited in AI summaries
- Be mentioned in LLM responses
- Be referenced as a trusted source
- Be associated with a topic across multiple signals
The Semrush 2025 AI Visibility Index Study shows that LLMs often cite high-authority domains and well-established publishers.
That might sound discouraging for SMEs, but it shouldn’t be.
AI rewards topical authority. If your business consistently publishes detailed, structured, accurate information around a clearly defined niche, and that expertise is reinforced across your website and wider digital presence, you become more likely to be cited.
What “Being Cited” Looks Like in Practice
Ecommerce & Retail
If you run an ecommerce business, AI visibility doesn’t just come from your product descriptions. According to the Semrush 2025 AI Visibility Index Study, AI systems often pull from:
- Google reviews
- Product schema
- Comparison content
- Editorial roundups
- Shopping feeds
- Community discussions
For example, if someone asks: “Best vegan school shoes in the UK”
An LLM might reference:
- A well-optimised category page
- A product with consistent 4.8-star Google reviews
- A third-party comparison blog
- Reddit threads and community/social forums discussing this

We’ve seen this with ecommerce clients where strengthening product schema, improving review visibility, and tightening category page clarity directly influenced AI inclusion, even when rankings didn’t dramatically change.
AI doesn’t just read your site. It reads what the internet says about you.
Financial & Regulated Industries
In finance or regulated sectors, authority signals are even more critical.
If someone asks: “Best mortgage brokers for first-time buyers”
AI systems are more likely to cite:
- Firms with visible credentials
- Clear regulatory information

We’ve worked with financial clients where simply adding author bios, displaying qualifications clearly, including “last updated” markers and strengthening internal topic clusters made their content more structured and AI-friendly.
In regulated industries, trust signals are not optional.
B2B & Professional Services
For B2B businesses, AI visibility often comes from thought leadership and niche authority.
If someone asks: “Top sustainable packaging suppliers for ecommerce brands”
AI systems might pull from:
- Industry directories
- LinkedIn content
- Trade publications
- Case studies
- Structured service pages

We’ve seen clients gain visibility not because they “optimised for AI”, but because they consistently covered their product or service in depth, and that depth created extractable authority.
AI, structural clarity & schema
AI visibility is about extractability. Large language models don’t browse websites the way humans do. They don’t admire your design. They want structure.
This means the way your content is organised now directly impacts whether it can be cited, summarised, or referenced.
And this is where structural clarity and schema move from “nice to have” to foundational.
AI systems need to confidently extract:
- What your page is about
- Who it’s for
- What question it answers
- Whether it’s trustworthy
If your key insight is buried halfway down a long block of text with no hierarchy, it’s harder for AI systems to isolate it.
Structured, clearly segmented content performs better because it creates extraction points.
That means:
- Clear H2 and H3 headings
- Question-led subheadings
- Concise explanatory paragraphs
- Bullet points where helpful
- Logical flow
And let’s not forget about schema.
Schema (structured data) is essentially a language layer that tells search engines and AI systems exactly what your content represents.
For example:
- Product Schema – identifies pricing, availability, brand, reviews
- Review Schema – separates user-generated reviews from marketing copy
- FAQ Schema – clarifies question-answer pairs
- Article Schema – identifies author, publication date, headline
- Organisation Schema – defines business details
AI systems don’t rely solely on schema, but structured data helps machines connect the dots.
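To make this concrete, here is a minimal sketch (in Python, purely for illustration) of an FAQ schema object. The question and answer text are made up; in practice you would embed the generated JSON inside a `<script type="application/ld+json">` tag on your page.

```python
import json

# Illustrative FAQPage structured data, following the schema.org vocabulary.
# The question/answer content below is hypothetical placeholder text.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is AI SEO?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "AI SEO focuses on earning citations in AI-generated "
                        "answers, not just positions in a ranked list.",
            },
        }
    ],
}

# This JSON string is what goes inside the ld+json script tag on the page.
print(json.dumps(faq_schema, indent=2))
```

The value here isn’t the Python; it’s the shape of the data. Each question-answer pair is explicitly labelled, which is exactly the kind of extraction point machines can connect the dots with.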
How is AI Search changing traditional SEO KPIs?
What are the traditional SEO KPIs?
For years, SEO success was fairly straightforward to track. Traditional SEO reporting focused on:
- Keyword rankings – the position of your pages on Google
- Organic traffic – the number of visitors arriving on your site from search engines
- CTR – the percentage of people who see your brand on Google and click through
- Conversions – leads, sales, webinar signups etc
If those numbers were climbing, you were confident things were moving in the right direction. Now, AI platforms are changing how people find information, and that means the KPIs we’ve relied on for years don’t tell the full story anymore.
I’ve seen it firsthand. When users get answers directly in AI Overviews or start their journey in tools like ChatGPT or Perplexity, traditional metrics don’t always reflect the full impact of your visibility. Traffic might dip, yet commercial performance holds steady.
Why are traditional SEO KPIs not relevant anymore?
One of our clients recently experienced what looked like a worrying drop in organic traffic.
At first glance, it felt dramatic. But when I dug deeper, the decline was almost entirely within informational queries. The kind of “what is…”, “how does…”, and “why does…” searches that now frequently trigger AI Overviews.
Those clicks hadn’t vanished into thin air. They’d been absorbed by AI summaries. And here’s the key part: commercial traffic and lead volume remained stable.
So while sessions were down on paper, performance wasn’t. The visibility shift didn’t damage revenue, it simply changed where the answer was being consumed.
To give this case study context, according to a study, even in 2024, around 60% of searches in the US ended with no click. With the appearance of AI Overviews that figure jumps up to 83%, and in Google’s AI Mode, 93% of sessions end without a website visit. That’s exactly why reporting needs context now. Not all traffic loss represents business loss.
What KPIs should I be tracking for AI Search?
You now need to look at a broader view of visibility, one that includes where and how your brand is being surfaced across AI platforms, how informational intent is evolving, and whether your presence is influencing meaningful actions, not just clicks. You need to look at:
- AI referral traffic (ChatGPT, Perplexity, Copilot, Gemini) – similar to how we track organic traffic in GA4 – please see below steps to get you started.
- Citation frequency in AI summaries – When an AI platform does cite a source, is it citing yours?
- Branded search growth – Is your brand being mentioned when someone asks AI a question related to your industry?
- Brand sentiment – It’s not only how many times your brand has been mentioned, but in what way? Is it positive?
Visibility now operates on multiple layers. A business may rank highly in organic results yet be absent from AI summaries. Alternatively, it may be cited in AI responses despite modest ranking positions.
Why is AI prompt tracking difficult?
Mentions can shift based on model updates, prompt phrasing, or data refresh cycles. This dynamic nature mirrors the volatility SEO professionals have always managed through algorithm updates, device differences, and geographic variations.
Moreover, users are moving from simple keywords to prompts averaging 23 words, with some reaching up to 2,717 words. And a tiny change in a prompt (like an extra space or punctuation mark) can cause a significant shift in the AI’s output.
Monitoring must therefore extend beyond organic traffic and rankings alone. Evaluating AI visibility requires assessing citation patterns, brand mentions, and extractability. It requires understanding whether your expertise is being surfaced where users are now beginning their research journeys.
Here is a free course on AI Visibility from Semrush >
What free tools can I use to track AI Search visibility?
Google Analytics 4
To get you started, here is how to check whether your site is getting traffic from any AI platforms.
Go to Reports > Acquisition > Traffic Acquisition.
Change the primary dimension to “Session source/medium” and use the search bar to filter for keywords like “chat”, “perplex”, “gemini”, or “copilot”.

By tracking this properly, you can see which AI platforms are sending visitors to your site, and start reporting on which pieces of content earn the most engagement.
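If you export that traffic acquisition report, the same filtering can be automated. Here’s a rough sketch in Python: the source strings and session counts are hypothetical examples, and real referrer values will vary by platform.

```python
# Tag AI referral traffic in GA4 "session source/medium" values.
# Pattern list mirrors the manual filter keywords; extend it as new platforms appear.
AI_PATTERNS = ("chat", "perplex", "gemini", "copilot", "openai")

def is_ai_referral(source_medium: str) -> bool:
    """Return True if a source/medium string looks like an AI platform referral."""
    s = source_medium.lower()
    return any(p in s for p in AI_PATTERNS)

# Hypothetical rows exported from the GA4 traffic acquisition report.
sessions = [
    ("google / organic", 1200),
    ("chatgpt.com / referral", 85),
    ("perplexity.ai / referral", 40),
    ("(direct) / (none)", 300),
]

ai_total = sum(count for src, count in sessions if is_ai_referral(src))
print(f"AI referral sessions: {ai_total}")  # → AI referral sessions: 125
```

Substring matching is deliberately loose here, matching how you would search in the GA4 UI; tighten the patterns if you see false positives.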
When you hear about tracking traffic, referrals, clicks, engagement, it’s easy to think: this is just SEO.
So is AI SEO simply SEO with a new label? Or is it something fundamentally different?
ChatGPT, Perplexity, and Google AI
Run regular AI audits across LLMs to monitor brand mentions and citations. Simply ask a question – “What do you know about my brand?” Here is an example below.
If you’re an ecommerce brand and you spot negative reviews about your product, this gives you a framework for improvement.

Google Search Console
It’s free and, in my opinion, underused. Although it won’t show visibility from ChatGPT and other LLMs, you can track your performance in AI Overviews and Google’s AI Mode.
It doesn’t separate that traffic from organic traffic coming from the same SERP, but it will give you a good idea.
You can still analyse keyword and URL combinations, spot shifts in impressions, identify queries where visibility has increased but clicks haven’t followed, and layer that insight with your rank tracking data to build a clearer picture.
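That “visibility up, clicks flat” pattern can be flagged programmatically from two exported date ranges. A minimal sketch, with hypothetical query data:

```python
# Flag queries where impressions grew but clicks didn't follow — a common
# footprint of AI Overviews absorbing informational clicks.
# Data shape: {query: (impressions, clicks)} for two Search Console exports.
current = {"what is eeat": (5000, 40), "seo agency york": (800, 90)}
previous = {"what is eeat": (2000, 45), "seo agency york": (750, 85)}

def absorbed_queries(cur, prev, growth=1.5):
    """Queries whose impressions grew by `growth`x or more while clicks stayed flat or fell."""
    flagged = []
    for query, (imp, clicks) in cur.items():
        prev_imp, prev_clicks = prev.get(query, (0, 0))
        if prev_imp and imp >= prev_imp * growth and clicks <= prev_clicks:
            flagged.append(query)
    return flagged

print(absorbed_queries(current, previous))  # → ['what is eeat']
```

The informational query gets flagged; the commercial one, where clicks actually grew, does not. That’s the distinction that matters for reporting.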

In this example, it looks like a decline: clicks and impressions are down compared to the previous six months.
But if we look closer, the average position has improved significantly, and CTR has remained stable.
In an AI context, this often signals a behavioural shift rather than a performance issue. The site is ranking better, but fewer users are clicking, likely because informational queries are being absorbed by AI Overviews.
In other words, visibility hasn’t disappeared. The way users consume it has changed. The key question isn’t “Did traffic drop?” It’s “Did meaningful traffic drop?”.
Brand mention monitoring
If you’re thinking about AI visibility, don’t just look at your website. AI systems don’t only read your site, they frequently pull from:
- Community forums
- Review platforms
- Comparison blogs
- Industry directories
- Third-party discussions
That means your presence (or absence) in conversations matters.
How to monitor this for free:
- Google Alerts – set up alerts for:
Your brand name
Your brand + key product/service
“Best [your category]”
“Top [your industry] companies”
This helps you spot new mentions, reviews, or comparison content as it appears.
- Manual Reddit & Forum Searches – search directly in Google, example below.
site:reddit.com “your brand”
site:reddit.com “your product category”

- Review platforms & directories – check your presence and consistency across Google Business Profile, Trustpilot, industry directories and comparison sites.
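To keep these checks consistent, you could generate the same set of alert and search queries for any brand. A tiny illustrative helper (brand and category names are placeholders):

```python
# Build the Google Alerts / manual search queries described above for a brand.
def monitoring_queries(brand: str, category: str) -> list[str]:
    """Return alert and site-search query strings for brand mention monitoring."""
    return [
        f'"{brand}"',                       # plain brand mentions
        f'"{brand}" review',                # review and comparison content
        f"best {category}",                 # category roundups you should appear in
        f'site:reddit.com "{brand}"',       # community discussions about you
        f'site:reddit.com "{category}"',    # community discussions about the category
    ]

for query in monitoring_queries("Acme Shoes", "vegan school shoes"):
    print(query)
```

Paste the first three into Google Alerts and run the `site:` queries manually on a regular schedule.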
AI visibility vs SEO fundamentals – the misconceptions
AI SEO relies on the same core ingredients that have always driven strong organic performance such as technical optimisation, authority, clarity, relevance, and trust. But the outcome it optimises for is different. Traditional SEO focuses on earning a position in a ranked list. AI SEO focuses on earning mentions within a generated answer.
Essentially, AI visibility depends on SEO fundamentals executed well.
Crawlability remains essential. If search systems cannot access and index your content properly, there is nothing to summarise.
Internal linking reinforces topical relationships and strengthens authority clusters.
Page speed and mobile optimisation contribute to overall site quality. Clear on-page signals help define intent. Helpful, people-first content remains central to Google’s ranking systems.
Lily Ray, Vice President of SEO Strategy and Research at Amsive, said: “Many new AI tactics are an extension of established SEO methods that we’ve been using for years. The foundational practices of SEO, such as technical optimization, content clarity, and reputation management are critical for both traditional and AI-driven search.”
The strategic reality for SMEs
For SMEs, the takeaway is not to chase every AI headline or emerging feature. It is to strengthen clarity, credibility, and structure.
AI systems reward businesses that:
- Demonstrate consistent topical authority
- Organise content logically
- Reinforce expertise visibly
- Maintain strong cross-web reputation signals
- Support technical SEO fundamentals
At Quibble, we assess AI visibility alongside foundational SEO health. We examine technical crawlability, structured data implementation, content clarity, EEAT signals, and reputation footprint. We identify gaps not only in rankings, but in extractability and citation potential.
If you would like to understand how your business currently performs across traditional search and AI-driven visibility, we can evaluate your position and outline a clear, practical roadmap. If you need any help, give us a shout. Email us at hello@quibblecontent.co.uk