SEO Veteran Debates AI Search Founder on What Gets You Into LLMs
Chapters
The chapter argues that AEO (answer engine optimization) is fundamentally different from traditional SEO because of query fan-out: a single query is decomposed into multiple subqueries, and the answer is assembled from the sources those subqueries surface rather than from content ranking for the surface query. It warns against forcing AEO into SEO frameworks and emphasizes the need to consider subqueries and the broader content strategy required to support those fan-out paths.
AI search isn’t the same as traditional SEO; success hinges on shaping AI’s understanding of your category through attribution, research-driven content, and clear positioning—not just chasing rankings.
Summary
Edward Sturm hosts a provocative chat with Tom Brady (Demand Genius) and David Quaid about why AEO (answer engine optimization) is fundamentally different from classic SEO. Brady argues that AI models rarely search for answers in the way humans expect, instead performing retrieval in only a minority of prompts and often returning content based on ingrained knowledge and authority signals. The trio explores the three-stage buyer journey (awareness, consideration, conversion) to show how AI prompts behave differently at each stage, and why conversion prompts still drive retrieval while awareness and consideration prompts often do not. They discuss the appeal of "compact keywords" and the risks of trying to game query fan-out by cranking out massive volumes of content. A core alternative is Information Gain: creating original research, interpretive insights, and clear methodology that moves understanding forward and embeds your category into AI reasoning. The conversation covers practical steps to audit and improve AI visibility, including positioning, content quality, reputation, and the role of markdown-style knowledge sharing to increase context for AI. They also debate how different LLMs (ChatGPT, Claude, Perplexity, Gemini) behave, how retrieval is invoked, and how brands should think about strategy in a fast-evolving AI landscape. The episode closes with concrete advice for startups and established brands on starting AI visibility: treat AEO as a discipline, not a one-off project, and focus on proving your value through deep, citable research and a precise narrative that resonates with buyers, CFOs, CMOs, and CTOs alike.
Key Takeaways
- AI search results are not built by ranking a single query; AI behavior often hinges on a small set of prompts and how the model handles retrieval across the buyer journey.
- Awareness and consideration prompts rarely trigger retrieval, and even conversion prompts trigger it only about 48% of the time (16% of all prompts tested), meaning you must influence AI’s category understanding beyond surface keywords.
- Three pillars matter most for AI visibility: positioning (who you are), content (what you publish), and reputation (third-party signals); misapplying traditional SEO tactics to AEO can backfire.
- Compact keywords—creating dozens of pages that “sell” to buyers—are less effective than content that provides high information gain and moves knowledge forward through original research and data.
- Information Gain is the backbone of credible AI responses: aim for interpretive, empirical, and conceptual gains to become a trusted source that AI cites and humans value.
- Retrieval-based visibility can be improved quickly, but long-term influence requires changing the AI’s mental model of your category through rigorous, transparent methodology and ongoing research.
- Brand signals from non-SEO sources (Reddit, G2 reviews, Forbes coverage) substantially shape AI perception and should be integrated into a broad AEO strategy.
Who Is This For?
This episode is essential for growth-focused marketers and product leaders who want to navigate AI-driven search. It’s especially valuable for SaaS/B2B brands expanding into AI-assisted discovery and for CMOs, VPs of Marketing, and Heads of Growth who want a plan beyond traditional SEO tactics.
Notable Quotes
"AEO is a very different challenge to SEO. They are incorrectly bundled together."
—Tom Brady explains the foundational difference between AEO and SEO.
"AI does not really search. In awareness... 0% of responses invoked retrieval."
—Brady highlights the stages where AI retrieval is triggered.
"Compact Keywords focuses on putting up dozens of pages that sell to searchers who are actually looking to buy."
—Discussion of a strategy Brady uses/advocates.
"Information gain... net new knowledge that you didn't have before."
—Defining the core metric for credible AI content.
"Treat AEO as a discipline, not a project."
—Brady’s guidance on approaching AI optimization.
Questions This Video Answers
- What is the difference between SEO and AEO in practice for 2024 and beyond?
- How does the query fan-out affect AI-generated search results and content strategy?
- What is Information Gain in AI content, and how can brands measure it?
- Why should companies focus on positioning, content, and reputation for AI visibility rather than just optimization?
- How can startups start building AI visibility for new brands without existing authority?
AI Search, AEO vs SEO, Query Fan-Out, Awareness/Consideration/Conversion, Information Gain, Compact Keywords, LLMs (ChatGPT, Claude, Gemini, Perplexity), Content Strategy, Brand Authority, Markdown for AI Context
Full Transcript
you shared on LinkedIn, Tom, you said, "You will hear a lot of people tell you that AEO, answer engine optimization, um, is the same as search engine optimization." But as a great man once said, false. They are fundamentally different things and the gap is widening. This is due to a mechanism called the query fan out. When you type a single query, it gets decomposed into multiple subqueries. Each of those surfaces entirely different sources. And the final answer is assembled from all those pieces. Which means Google never searches for an answer to the query that you are tracking and producing content for.
And here's the thing, most AEO strategy attempts to ram a new challenge into existing SEO frameworks. Find a query you want to rank for, write a piece of content for it, measure if it gets mentioned. But even if a user enters that exact query, Google isn't going to search for an answer to it. It's going to do research and compile an answer. You would need to rank for those sub queries, not the surface level one. The fan out queries are system generated and usually are not queries with any search traffic. And even if you could predict them, just think how much content you would need to produce to hit them all.
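The fan-out mechanism described above can be sketched as a toy simulation. Everything here is illustrative: the subquery templates and source IDs are invented placeholders, not what Google or any LLM actually generates. The point is only that the final answer pools sources across system-generated subqueries rather than ranking anything against the surface query.

```python
# Toy sketch of query fan-out. The subquery templates and "sources" are
# invented placeholders -- real fan-out queries are system-generated and
# carry no search volume of their own.

def fan_out(query: str) -> list[str]:
    """Decompose one surface query into several subqueries (simulated)."""
    templates = ["what is {q}", "best {q} compared", "{q} pricing and reviews"]
    return [t.format(q=query) for t in templates]

def retrieve(subquery: str) -> list[str]:
    """Stand-in for retrieval: each subquery surfaces its own sources."""
    slug = subquery.replace(" ", "-")
    return [f"source://{slug}/{i}" for i in range(3)]

def assemble(query: str) -> dict:
    """The answer is compiled from the union of all subquery results."""
    subqueries = fan_out(query)
    pooled = {doc for sq in subqueries for doc in retrieve(sq)}
    return {"subqueries": subqueries, "num_sources": len(pooled)}

answer = assemble("saas billing platforms")
print(answer["subqueries"])   # three system-style subqueries
print(answer["num_sources"])  # 9: no single page "ranks" for the surface query
```

The consequence Tom draws follows directly: to be present in the compiled answer, your content has to be retrievable for subqueries you never chose and cannot fully predict.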
So on this episode of the show, we are talking about search engine optimization versus generative engine optimization. And I guess before we start, Tom, you put that on LinkedIn. Could you introduce yourself? And then David, you've been on the show before, but for any new listeners, I'm going to ask you to introduce yourself as well. Sure. So, yeah, Tom Brady. I'm the founder and chief exec of a company called Demand Genius, an AI startup out of London, and I've spent the last two years or so in the content marketing, AEO, GEO, whatever your acronym of choice is, world.
And I'm David Quaid, who's just regular SEO, plain old vanilla SEO. David's been on the show before. David's been doing SEO for two decades, and you do a lot of SEO for SaaS. And you're awesome. You're both awesome. So, thank you. So Tom, could you talk about how AEO is fundamentally different? I think you wrote that to me over email. You said AEO is a very different challenge to SEO. They are incorrectly bundled together. AI very rarely actually searches for answers. Yeah. And let me start off maybe by framing it: I think there's overlap, like between any two branches of marketing, right?
There are certain things which are going to help you across the board. If you have high authority, if you have a strong brand, that helps you in absolutely everything you do. So there are of course some shared elements, and I would also say the two things are compatible. One thing I'm always keen to stress: I'm not saying rip up your SEO strategy. SEO is still a fantastic channel. It just is a different channel with different mechanisms. So I sometimes feel like I get framed as if I'm attacking SEO, and that couldn't be further from the truth.
But what I do think is really important is to approach the problem of AEO as a fresh problem. And I think quite naturally it's been a problem that's been picked up by SEO leaders, SEO vendors, SEO agencies, and they've approached it through the framework of SEO: how can we create content that ranks in this new thing? And that just isn't really how AI models work. So I think there's two questions here. One is how should we approach it, and then how do they work under the hood, and that's the most fundamental finding.
So we do a lot of original research. I kind of think it's a really important thing if you're going to operate in our space, as we try and understand this together. And we did this big study where we ran a whole load of prompts in the awareness, consideration, and conversion stages of the buyer journey. And what we got back was really, really interesting data as to how the responses vary at each point. And what it showed very clearly was AI does not really search. So in the awareness stage, 0% of responses invoked retrieval or cited any brand.
Same was true in consideration. Only in the conversion stage did it ever go looking for answers, did it ever "search" in inverted commas, for those who can't see my little hand diagram there. And that's really interesting. And even then it was only 48% of the time. So of all of the prompts that we ran, in 16% there was a search behavior. So that's the first level. It very, very rarely invokes retrieval. And then, as we explained in the post, which was a little bit more antagonistic than it needed to be, because I wanted to shoehorn in my Dwight Schrute quote from The Office.
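The arithmetic behind the 48% and 16% figures is worth making explicit. The episode doesn't state the exact prompt split per stage, so equal weighting across the three stages is our assumption; under it, retrieval firing in 0%, 0%, and 48% of stage prompts averages out to 16% overall:

```python
# Back-of-envelope check of the study's headline numbers, assuming equal
# prompt volume per funnel stage (an assumption; the split isn't stated).
retrieval_rate = {"awareness": 0.00, "consideration": 0.00, "conversion": 0.48}

overall = sum(retrieval_rate.values()) / len(retrieval_rate)
print(f"overall retrieval rate: {overall:.0%}")  # 16%
```

If the actual study weighted stages unevenly, the overall figure would shift accordingly, but the qualitative finding (retrieval concentrated at conversion) would not.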
But even when it does go to retrieval, in that 16% where it does go searching, it doesn't search for what you are putting in. So the idea that you can optimize for that original prompt is, in my opinion, very heavily flawed. Just a quick question: the three stages that you defined, how do you define those three stages before the last one? So like awareness, consideration — those are marketing terms, right, not necessarily LLM protocols or how RAG works.
So how would you define those prompts? Yeah, well, we were trying to mimic the behavior of users across complex buyer journeys. Right. So one of the things that didn't feel right to me about a lot of the AEO advice was that it treats every prompt as an isolated event, whereas actually it carries a lot of context with it and it's part of a broader buyer journey. So often a lot of AEO measurement focuses on the conversion prompts, because that's where citations happen, so that's where we can prove impact.
We wanted to try and simulate how a buyer prompts at every stage of their journey, from problem-aware through to solution-aware. We weren't trying to emulate any kind of LLM behavior there. We were trying to emulate the user behavior: prompt the LLM in the way a user would at every stage of a complex buyer journey. And that's why we split it up into those three broad categories of awareness prompts, consideration prompts, and then conversion prompts. So what was different about an awareness prompt that didn't result in a search?
And what LLM did you test this on as well? That would be really interesting. Yeah, so we were doing this in ChatGPT. So one very interesting thing that we want to do as a follow-up — and we published all of the methodology, so by all means go and reproduce it. But I think it's fair to assume that they're all the same basic technology. So if it was true in GPT, it's probably going to be true to at least some degree in the other ones, but that's an interesting question.
And the other question was what varied between the different prompts? We tried to reflect that in awareness you are asking much more open, problem-oriented questions, and then as you go through the funnel it becomes a lot more comparative and a lot more decision-oriented. And the other thing that was actually really interesting was the extent to which the LLMs mirror back the language of the prompt. So there's a thing that happens called intent matching: as you go down the funnel, the responses become a lot more decisional. And I think it's very tempting to think that that's because it really likes your brand, but actually what we saw is that's because it's really smart at helping the user, and it realizes, as you give it a decision-oriented prompt.
You want help coming to a decision. So it gets a lot more firm. It goes looking for answers and then it gets and it gives you answers rather than exploration and comparison which I found super interesting. This method of marketing is so effective I had to make sure it wasn't against Google's rules before I kept doing it. It's a form of SEO I call compact keywords. Whereas most SEO focuses on putting up articles to answer questions, how, what, when, compact keywords focuses on putting up dozens of pages that sell to searchers who are actually looking to buy.
These pages rank on Google and convert so much better than normal that when I discovered this years ago, I couldn't believe this was allowed. It's less work, too. The average compact keywords page is only 415 words. Compact Keywords is a 13-hour deep course on getting sales with SEO. A customer recently said, "Each lesson is dense with information. You're giving years worth of experience boiled down into 15 to 30 minute lessons with no filler or fluff. I feel like I'm gaining a new superpower. Compact Keywords is about setting up an SEO funnel that brings you sales for years and years and years.
It works with AI. It's less work than traditional SEO and it makes way more money. You can get it now at compactkeywords.com. Back to the podcast. I'm just trying to figure out why it was only at the conversion stage that it went to do a search. And by that I expect you mean that the query fan-out only happened then. So were you doing all of these prompts from the same user login? Yeah. So we were running these via the API rather than in the UI, which always gets slightly different behavior, but it's very difficult to replicate the exact behavior you get in the UI.
But yeah, I mean, I couldn't answer why they were invoked differently, but it was a very clear trend across thousands of prompts, and the fact that it was zero was pretty stark. And we did this across lots of different categories as well. So we picked four B2B categories with varying degrees of complexity, some a little bit more commoditized, because obviously for some categories the bulk of the buyer journey is in the conversion stage, right? If you're buying toothpaste, it's all conversion. It's the offer at the point of conversion.
But the more complex the category, the more of the journey lives in awareness and consideration, which is where we're trying to understand the behavior in those stages better. So would an awareness prompt in that category be something like "how do I brush my teeth" or "do I need toothpaste", or would it be even higher, like "how do I protect my teeth"? Yeah. So, in the world of toothpaste — and that's why I say there isn't much awareness in toothpaste, right? It's: I'm out of toothpaste, I buy new toothpaste.
Like I hope that most adults at this stage know that they should be brushing their teeth. There's not like education that needs to go on there. But I I like to use this this a kind of way of thinking about it that I have which is you can think of like take a CFO at a at a big company. There's like three purchases they might buy and they're going to have varying degrees of complexity, right? So toothpaste, I'm out of toothpaste. What's the cheapest toothpaste? It's basically there's very little else at play there. Um trainers, my trainers are a bit old and I'm going to buy new trainers.
There's a lot of decision-making that actually happens along the way, right? Like, do I have back pain? Am I a runner? Or am I a climber? Am I a football player? There's all of these things that are going to go into that. And that's the awareness and consideration. So awareness is: how do I even know that there's a problem with my trainers? Well, maybe you start to realize you've had foot injuries, and then you realize they weren't as protective anymore. And then consideration is: okay, well, which one would be protective?
You then go to the extreme end of it with a billing platform, and you've got 13 different stakeholders, all at varying degrees of awareness that all of their problems come back to the billing platform. There's a huge amount of it that lives in awareness and consideration and very little that lives in conversion. So my argument is the more complex the category, the less AEO is like SEO, because it's not optimizing visibility at the point of conversion; it's actually influencing how the LLM understands the category and how it shapes requirements on behalf of the buyer. So I would say, look, toothpaste — how much training does an LLM need on toothpaste? But I work a lot in SaaS, and actually billing is one of my favorite areas, like med billing, AI billing, that kind of stuff. I really love that; I'm really deep in there. So I imagine if I was going to start, I would already know I need AI billing, right? Like I imagine if I was going to go to an LLM, I wouldn't ask it to explain it to me. Or maybe I would, right?
So maybe I would say: what is AI billing? What options are there? Or how would you evaluate AI billing? I would imagine something like that would require a search, whereas toothpaste I can see not requiring a search, right? But I would think anything more complicated than that, especially in B2B or SaaS, I would imagine the searches would start pretty immediately, or, you know, higher up. Yeah. Well, and that's kind of my point, I suppose: there is an awful lot more awareness and consideration that goes into the purchase, right?
Like take, I don't know, take this podcast. It would take a long time for you, Edward, to become aware that actually Descript isn't the right podcast platform for you, and then there's understanding why, and what AI has changed in the podcast platform world that means there's a better option out there now. At no point in that journey are you asking it for a specific recommendation. Sure. Sure. And these are conversations in which citations don't really happen, because I often joke with Edward and I say, you know, the prompt is not the query.
I actually have a little Perplexity cartoon, with Paul Trades standing on top of it, San June, thinking to himself: the prompt is not the query. So I think that's where a lot of people get confused, especially in Gemini, right? Because Gemini is integrated with Google Search — and I guess I should have said AI Overviews more than Gemini — but if you go to the normal Google search bar and type in a search, let's just stay with "best SaaS billing platforms", the results that you get below the AI Overview are all the regular results; that's the Google index for that query.
But a lot of people, especially on Reddit or X, will say, "Oh, I can't figure out why I'm not visible," right? And so we see a lot of AEO experts, GEO experts, saying that GEO is very different to search. The way LLMs build a trust model is something I find very difficult to understand, but I definitely understand that the prompt is not the query, and I try to say that to people. I think the best LLM for seeing that is Perplexity. ChatGPT often hides the fact that it's searching, I think, or it relies a lot more on caching. Perplexity and Claude especially are more upfront about it, and you can see the query fan-out by looking at the stages; with ChatGPT and Gemini you have to look behind the scenes.

So whenever I dabble around with ranking in Google and then want to go and see if I'm visible, I find that I can get into Grok and Perplexity much faster, Gemini slower, and then ChatGPT the slowest. ChatGPT seems to refresh the results of its query fan-outs much slower. I don't know if that's because it costs them more money, because they have a lot more demand than, say, Perplexity or Claude, which I think still uses Brave Search — it stopped saying it uses Brave Search, but I can't find any updated information. So once I find out what that query fan-out is, I think it's relatively easy to stay ranking in it. And it does differ, right? You go from place to place, and the makeup of the different result pages depends on what's in the query fan-out, because a query fan-out could be, what, three, six, nine different searches.

And are we talking here about — because there's query fan-out partly in the explainability of the LLM, right? You can see Claude going through it: doing this, doing this, doing this. But even for AI Overviews, query fan-out takes place, right? It goes and it searches. My understanding is it can be as many as 20 different things, to fill the knowledge gaps that allow it to compile an answer. So have you found in your work that you can predict what those 20 queries are? Because I think it's definitely going to be valuable if you are ranking for those, but I think it's so hard to predict that it becomes an unhelpful framework for AEO to try and do that.
I think there were simpler groups, but yeah. So the number of queries tends to be three, right? The average query fan-out is three. It can also be one: I could just say, give me a list of the top SEO agencies in New York, and there'll be one query, top agencies in New York, or it might swap "top" for "best", which in Google is very synonymous anyway, right? So you rank for top, you rank for best; it's pretty good. Also "consultant", "expert", "agency", when combined with a word like SEO or PPC, tend to be very similar.
I've noticed that if it fans out to three queries, which is the most common I've seen, it tends to get like 25 to 30 results back. And in some cases, especially where I guess there's less agreement in the initial documents that come back, it can actually go up to six queries, which will net you maybe 50 results back. Obviously a lot of documents will rank in the same subset, so it depends on how much the drift is, right? So if you take, for example, SaaS billing — and I think the grounding queries that come out after the first one are based on the results of the first set.
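The "drift" David describes — how much the result sets of different fan-out queries diverge — can be quantified as set overlap between the URL lists each subquery returns. A rough sketch, where the result lists are hypothetical placeholders:

```python
def jaccard(a: list[str], b: list[str]) -> float:
    """Overlap of two result sets: 1.0 = identical, 0.0 = fully drifted."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# Hypothetical top results for two fan-out queries (placeholders only).
q1_results = ["a.com", "b.com", "c.com", "d.com"]
q2_results = ["b.com", "c.com", "e.com", "f.com"]

overlap = jaccard(q1_results, q2_results)
print(f"overlap={overlap:.2f}, drift={1 - overlap:.2f}")
```

Under this framing, David's observation (three queries netting only 25 to 30 distinct results) corresponds to high overlap between the subquery result sets, while six queries netting 50 results corresponds to higher drift.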
It actually does it quite quickly, because the way it types on the screen, it looks like it's still thinking, but it's actually already done that — it's already gotten to the end of the document before it types it. So it does that sort of delayed feedback, which I imagine slows the user down and saves on tokens. But I find that once you know what the drift range is, it's quite predictable. So if you start with something like "evaluate" or "who are the top players in the SaaS billing market", then it depends on things that will impact that — maybe your own history, like have you asked that question before, or does it know that you work at a SaaS billing platform, or have you asked questions about different vendors — and then, depending on what the top results come back with, it might go and do something like add a year, for example. And so I see a lot of people, I don't know if you've noticed on LinkedIn for example, talk about freshness a lot, and I found the Ahrefs report that came out very interesting when they said the average document was actually 500 days old.
I found that correlated very well when I went to look at Bing's AI performance report, where I see 2024 outweigh 2026 a lot. So I always assume that those years are there to throw SEOs off, or to undermine SEO, depending on which LLM you're working with. I don't know if you've noticed that much at all or not. Could you repeat the question? So, yeah — what I'm trying to do is understand when the LLM does the search, right?
And how the query fan-out works, and then the predictability and reliability of appearing for those searches. I think it's pretty predictable. I think where it gets difficult is where you, for example, get multiple listicles back and it's trying to compile a list of top five, top 10; then it becomes very unpredictable. But for the most part, I find that the visibility in Google Search and the visibility in AI seem to be pretty consistent, right? Like at the end of the day, it's Google or a Google clone running the query, whether the query is the first, the second, or the third, right?
It's not the prompt, as you said at the start. In SEO you're usually optimizing for one query. Now you're optimizing for one query plus potentially drifts, right? And I find as long as you get in for the first query, you're pretty much set, right? Do you see something different with that? No. So we do see a relative amount of consistency. But I think to me that goes back to — like at the end of the day a lot of this does — what SEO, or search and AI, do have in common is that they index very heavily on authority.
And we know that AI in particular treats that as a huge signal. I tend to think you've got three ways to influence an LLM. You have your positioning, your content, and your reputation. So positioning is who you say you are. Content is what you consistently put out there that reinforces that. And then reputation is what other people say you are, and how people validate that. And one thing we know is that at the moment reputation is very heavily indexed. So that's the kind of brand authority and things like that. So we see it to that extent, and that would be my hypothesis as to what creates that similarity.
It's the same as saying, okay, word of mouth and search are going to have similar visibility because they are built on the same brand foundation, right? What gets you word of mouth, what gets you search visibility, what gets you AI visibility is ultimately: are you good, do you communicate that, and do people agree? So that's what I would credit the relative convergence of those two things to. But it's interesting, and what I haven't come across before — I'd be interested to dig more into the explainability of the query fan-out process, because my general understanding — and you would understand search a lot better than I would.
My general understanding is that that is a bit of a black box. I think the other thing is coming on to what's a helpful approach to optimization, but that's interesting and I'd love to see the data around that. Yeah. So another topic I see brought up quite a lot that I think is confusing for brands and users is that LLMs have their own indexes, whereas I would say they don't, right? Like, building a search index — I think Google's done a terrific job of making that look simple, and Bing have done a terrible job of replicating them; that's why I think they've struggled. Sort of like the old Microsoft versus Apple problem, where Google's this massive monopoly. But especially Perplexity doesn't have its own index.
It's a wrapper of an LLM. Gemini obviously uses Google's index. ChatGPT seems to use Google's index, especially if you look at maps, right? It's not even a secret — the URLs are straight from Google. It hasn't found a way of masking it. So when you say the two rely on brand trust in search, I think that's a difficult problem, right? Because I think search is built for so much more than just marketing. It's built for, you know, facts and information and discovery and news and a whole lot of other things.
And so authority in Google is really very simple. It's a number and a topic, right? And it really just comes from who links to you, and who they are, and what value that carries, and then how you've performed for that search phrase in that search index. I know Grok pretends that it has its own search index, but I can publish something today — like, for example, I often love playing with "who are the top SEO experts" or "the top GEO experts" or "the top E-E-A-T experts", and I just do that because it's easy, right? I have a lot of topical authority in Google for SEO, so I'm not going to go and do travel or something like that, because it's just going to take forever to catch up. So I often like throwing people that are, you know, friends of mine on X or friends of mine on Reddit, putting their names into the list, just to watch it change.
Right. I've thrown Edward in there a few times. I'm about to ask — yeah, what does it say about Edward? Come on. And so what's interesting is that the results that Grok uses exactly match Google's. It's one-to-one the same, as Perplexity does. And ChatGPT is very close. I mean, when I say very close, it's 100%, right? I think that's — sorry. Oh, sorry. My question to you was: do you think that ChatGPT uses Google, or does it have its own index?
I wouldn't be qualified to answer which index it is. Like, I think you'd have to be at ChatGPT in order to do that. But I think the challenge is, as long as we're thinking about what index it uses, we're looking at a tiny microcosm of the prompts that influence what brand someone ends up choosing, or what product someone ends up choosing, right? And I think that's where — so most of the overlap between AEO and SEO sits in that 16% of searches where AI actually searches.
I couldn't tell you what index they're using to search. I don't think it's hugely important, because my argument is that the other 84% is actually where influence happens and actually where AEO strategy needs to focus. I think AEO and SEO are very compatible as two different things, but taking an SEO-like strategy to AEO I think can be really damaging, for two reasons. One, I think it encourages you to look at all the wrong metrics. It encourages you to talk about ranking.
It encourages you to talk about traffic. Those two things aren't really AI search concepts, right? There isn't a concept of a ranking within an AI response. You could be the first one mentioned, but if it says you're rubbish, that's not a lot of value. And it very rarely links to you, right? So if we say that 16% of prompts ever cite a brand, what percentage of those are then clicked through? That's one of the narratives I always struggle with, when people are like, "Oh, look at the proportion of traffic that comes from Google versus AI." It's like, well, yeah, Google is designed to link to you.
So, search is a directory. AI is not. It's like an active participant in the market. It's a thinking being that talks to your buyer as a consultant and you need to influence it, not just be visible in it. Um so, I think that's where like as long as everything is a question of visibility, I think it's very difficult to do AEO. Well, the other challenge you have is what we do see work really well for AO is where you are able to communicate who you are, who you're for, what you're good at, what the trade-offs are of your product with a lot of consistency.
The way I think of it is three pillars: content, positioning, reputation. The closer those three things are to each other, the better, because you're leaving no room for interpretation. Now, the challenge comes when you start trying to optimize for specific prompts the way you would optimize for keywords. Because of the whole fan-out process we talked about, you end up creating so much content. A lot of what I'd call black-hat AEO tactics start with people thinking, "Okay, I need to create loads and loads of content."
"Well, here's AI. I can create loads and loads of content, and I can repurpose this one article to cover the 20 query fan-outs and the 22,500 different variations of prompts that exist for every keyword." What you end up with is a massive library. We talk a lot about content debt, which is basically the gap between who you are today and what some of your content says you are, because it's all outdated. Currency, quality, consistency, clarity: the more content you produce, the harder it is to provide clarity about who you are and to maintain it.
That's where playing the whack-a-mole game of trying to rank for all of these queries comes in; I see people it has really started to damage. Now, maybe it's fair to say bad SEO is incompatible, while good SEO is very compatible. Yeah, I can understand that. I guess the way I'd look at GEO... and I do totally accept the point about using words like "rank." I've spent 26 years using the word "rank," and I haven't got used to saying "cited" yet.
So the way I look at it is: let's say I need to be in the top 10 SEOs to follow on Reddit, for example, or the top 10 blogs. Actually, one of the ones I set up for Edward was top YouTube SEO podcasts. The way I'd go about getting visible is to put in the prompt, or really the basic keywords of the prompt, because I imagine the real prompt could get very personal or carry a lot of extraneous detail, like, "Hey, I really want to know more about SEO."
"Who are some of the top SEO podcasts I should be following?" I'd imagine the query fan-out strips away a lot of the who-I-am background stuff and breaks it down to a query. That query, I think, largely gets sent to Google regardless, except for Claude and Brave Search; but it does for Grok, ChatGPT, Perplexity, and Gemini. That's why I think SEO is so important: if we're not in that top 10 coming back from Google, we're not going to be included in the synthesized result.
So what I often do is look at the SERP result, or sorry, the synthesized result. If I started with "Who are the top SEO blogs I should follow?", it would come back with Google's blog, Ahrefs' blog, Semrush, the usual big brands. I normally put a screenshot of that on X and say, "Okay, look, this is what I'm going to try and hack next." Then I'll go and write a blog post: "Who are the top YouTube SEO podcasts you should follow in 2026?", for example.
And then I'll maybe copy and paste a list, or just make up my own list, and I'll put myself in, I'll put Ed in. Once it gets indexed by Google, when I rerun that prompt, the results of the query fan-out change. If I'm in that top 10, suddenly the list of recommended brands shifts completely, and I'll see myself, or Ed, or whoever else added to that list. Then I can repeat the process, look at the subqueries, come back and check another day, or check another LLM and see if they modify it.
So instead of saying "best SEO podcast," they might say "best SEO channels on YouTube," and then I have to decide whether, for Google, "channel" and "podcast" are synonymous. If they are, I can keep the page or just add an H2; I don't necessarily have to build out more content. If Google treats them as semantically different, though, I might have to alter the page title so I can rank for that too. As long as I can consistently come up in the top 10 or top 20 results that get included in the synthesis, I can effectively insert my brand into that conversation.
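The rerun-and-compare loop described here can be sketched as a simple diff between the brand lists a prompt returns before and after new content gets indexed. The brand names below are made-up illustrations, not captured responses:

```python
# Compare two captured "top blogs" lists from reruns of the same prompt.
# Both lists are hypothetical examples for illustration only.
before = ["Google Blog", "Ahrefs", "Semrush", "Moz", "Search Engine Land"]
after = ["Google Blog", "Ahrefs", "Semrush", "Edward Sturm", "Moz"]

added = [b for b in after if b not in before]      # brands newly in the synthesis
dropped = [b for b in before if b not in after]    # brands pushed out

print("added:", added)
print("dropped:", dropped)
```

Run after each publish-and-rerun cycle, or once per LLM, to see whether the inserted brand actually entered the synthesized list.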
I think the challenge is: aren't you still selecting quite a thin sample of queries and then optimizing for those? One of the bits of napkin math I've done is to take a keyword and look at how many different prompts it can produce. A keyword is short; it forces the user to consolidate their intent into a short string. And in order to track prompts, we do the same thing: we condense a prompt into a keyword, because then we can track it.
But when you look at how users actually prompt, it's paragraphs, right? They add criteria, their job role, their persona, and the prompt carries six months of prior context. When you look at how that translates, one keyword can become 22,500 different prompts. Now, particularly where retrieval is being invoked, I don't doubt you can find 10 of those that reflect a particular language set and hack those, optimize for those.
The challenge is you don't understand the knock-on effect you're having on the other 22,490, or whatever the right number is. And it's going to vary: sometimes that will be a problem, sometimes it won't; sometimes they'll all reflect the same thing. But there's a lot of risk that you optimize for this fine set of prompts you've selected and miss what's happening across all the others, plus what's happening when retrieval didn't get invoked in the first place.
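The "22,500 prompts per keyword" napkin math can be made concrete by multiplying out a few variation axes. The factor values below are assumptions chosen to reproduce the figure quoted in the conversation, not numbers from any study:

```python
# Hypothetical breakdown of how one keyword explodes into thousands of prompts.
personas = 15    # job roles / buyer personas ("I'm the CFO...", "I'm a founder...")
criteria = 30    # stated requirements: team size, tech stack, budget, region...
phrasings = 50   # wording variants of the same underlying intent

prompt_variants = personas * criteria * phrasings
print(prompt_variants)  # 22500

# Optimizing for a hand-picked sample leaves the rest unobserved:
sample = 10
unseen = prompt_variants - sample
print(unseen)  # 22490, the "knock-on effect" surface you can't see
```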
Yes. I think I come at it from the other side. I look at a series of prompts, and they effectively break down to the same query. For example, if I'm asking about the top 10 SEO agencies in New York, I'm going to get "SEO in New York." Sometimes, if I've been chatting and I start talking about SEO and GEO, or SEO and AI SEO, it might add GEO to the fan-out. But essentially, whether I ask for the top 10 SEO experts or the top 15 SEO consultants, they all come down to the same query.
So, like you said, there could be 2,000 different prompts behind one keyword. I'm not trying to rank for the prompt; I'm trying to rank for the keyword or key phrase underpinning all of them. Even when the user has a lot of history, or a very long, flowery prompt like "I have an organization of 50 people and I'm looking for the best invoicing software," I'll see that it does a very lazy search: "best invoicing software platforms," and then maybe another search like "best invoicing software for 50 people" or for an SME, something like that.
And I find that because not a lot of content exists for that, because not a lot of people write that way, it either comes back to the same results, or being found for the primary query is actually enough. So I don't have to worry about everything else in the larger text of the prompt, if that makes sense. Basically, if you take 200 people asking the same question about SaaS billing software, at the end of the day it has to get a list back from Google for the top 10 SaaS billing applications. And it has an interesting impact, because one thing I've noticed, and I've said this a lot on X, is that if I ask who the top 10 SEOs to follow on Reddit are, it doesn't actually go to Reddit. A lot of GEO agency owners talk about how it uses Reddit for trust, or cites Reddit out of trust.
And I find that interesting, because it's not algorithmically going to Reddit, going through all the forums, and adding up the top votes or who gets mentioned the most. If you ask it who the top SEOs to follow on Reddit are, the query fan-out to Google is literally "top SEOs to follow on Reddit." The top 10 results might include only two from Reddit; three might be blog posts and two might be Quora entries. And some of those SEOs might not even be on Reddit.
For example, some of the most-recommended SEOs to follow might be John Mueller or Barry Schwartz, who aren't very active on Reddit. Or if there's actually a thread on Reddit like "top SEOs to follow," it'll bring that back, count the users listed in the thread, and say, "Here are the top users." I find that if I know what that query is, I can either post a new thread or write a new blog post and rank for it, and then change the constituent names. Because, like you said, the LLMs definitely don't seem to be doing any research about it.
They just take those pages back from the index and take them at face value, which says a lot. But that's the bit I'd push back on a little. There's one thing we definitely agree on, and I think a lot of this actually comes back to how material the criteria are. When you talk about a more complex prompt: are the criteria in that prompt material to the answer?
Take that example, "the best SEO consultant." You can add flowers around it, but if the flowers aren't meaty things that change who the best SEO consultant is, they're not going to meaningfully change the answer. And I come at this from B2B specifically; we specialize in doing this for B2B brands. The more complex, messy, and invisible the sales cycle, the more interested we get. That's probably a caveat worth applying to everything I say. But when the criteria are meaningful: "I'm looking for a billing solution.
I'm the CFO at a 50-person company. This is our tech stack. This is really important to me. I need to get my CMO on board." When all of that comes into it, it really makes an impact. You can see it very clearly: different lists come back, so to speak, if it was a list. So there's an element of agreement there. The thing I don't fully agree with, though, is the idea that it needs to get a list back from Google.
That's not the way an LLM works. They typically rely on their own ingrained knowledge. They form an understanding of the relationships between lots of different entities, and wherever possible they answer based on that. They're not a kind of personal agent that takes your query and says, "Let me go find an answer for you." And I think that's where the difference lies. From our data, most of the time, whenever they can get away with it, they answer based on what they know; they explore freely, and that's why a lot of mistakes come in if you're not clear. So that's the behavior I don't agree with, I suppose. Right, because I often joke about the different stumbling blocks that have befallen OpenAI. I think one of the worst things to happen to them is that they got handcuffed to Microsoft's legal team, and Google beat them to the race with some acquisitions and so on.
And I think also that Bing is just such a terrible Google clone. I see a lot of people talking about AI establishing its own entities, its own understanding of relationships. To test that hypothesis, what I've done is run the initial prompt: who are the top SEO agencies, the top fintechs, the top banks for startups, the top VPN replacements, ZTNA replacements, all the different spaces I work in. And where we're not visible, if we go and publish something and get it into Google, then we're visible within minutes in most of the LLMs, with ChatGPT being the slowest.
Right? So say I publish a blog post today. I don't have the world's biggest blog; I'm a single consultant with time for maybe six or seven posts a week. If I write a blog post now at, say, 5:00 Eastern, go off for dinner, and come back at 6:00, and I'm in that top 10, then Grok and Perplexity will definitely quote me. So it's a bit difficult to see how they've got a pre-established understanding of all the brands from training if I can enter the synthesis today, in an hour, or in ten minutes in some cases. You can influence it very quickly when it invokes retrieval. But I think we keep coming back to these very narrow prompts, like "best SEO consultant," where yes, it goes and validates and looks for answers. That's the conversion stage. But our point is that it's not a funnel, it's an iceberg.
That is the tip of the iceberg. If you're only optimizing visibility at the point of conversion in a complex category, you're missing 84% of the opportunity that AEO presents. What's going to have a bigger impact: a marginal gain in visibility on a bottom-of-funnel query, or actually changing, through high-information-gain content and research-quality work rather than hacky routes, the way the LLM understands your category as well as your brand and what you're good and bad at, so that it funnels more users toward you? Bear in mind that the bottom-of-funnel query someone puts in is no longer an isolated event.
It's the product of six months of conversation with the LLM. Why do they put in "best SEO consultant" versus "best GEO consultant" versus "best AEO consultant"? Because of all the work you've done previously to get the AI to shovel them in your direction. That's the real opportunity: using it to influence your category and position your brand within it, in such a way, I always say, as makes visibility inevitable. Take Demand Genius, where we've talked about what we call "dark AI," the name we gave this piece of research.
We would love for that to become embedded in the LLMs' understanding of the AEO category: that this thing called dark AI exists. That's going to help us so much more than being on the proverbial front page, because then suddenly people ask, "How can I track dark AI?" and we're the obvious answer. So particularly in complex categories, I think the opportunity is currently being missed. It's just a very different opportunity that requires a very different set of practices. So how would you get into that data set, that training?
And how often do the LLMs get trained on it? So, training is infrequent, it varies model by model, and you're somewhat reliant on their updates; it will be interesting to see how that evolves. That's where it comes back to content, positioning, reputation: the three things at your disposal to influence how they think about you. The other thing, if I could invest in only one thing (and actually I can, and I do), is information gain.
The traditional search-and-content playbook is that you find a high-volume or high-intent query, you summarize knowledge against that query, and the best summary wins. That's search in a basic nutshell. It's not a very useful function in the world of AI, because I could get ChatGPT to summarize absolutely any piece of knowledge humanity already has, for me, on a one-to-one basis. What really makes you citable (and that's what I always encourage people to focus on: don't worry about citations, worry about whether you're citable) is high-information-gain content: original research, surveys. We have a three-step framework: interpretive information gain, a new slant on an existing topic; empirical information gain, brand-new data from original research and surveys; and conceptual information gain, when that original research or data has led to something that genuinely moves understanding forward.
You used to be able to build a brand's positioning around conceptual information gain for five years; now it's probably about six months. But that's what we encourage people to do: climb that ladder. By doing that, you create justification. You don't just make yourself visible; you embed why you are the right choice into the very foundation of these models and how they understand your category. Are you saying this as part of a regular content strategy? So basically content you put on your website?
Yeah. It's about changing the focus of our energy away from the optimization hacks (and I don't mean that as disparagingly as it might sound) and towards the research, the groundwork that sits behind the content. That will help throughout the funnel; you're then addressing every part of the iceberg, not just the tip. Information gain is a topic thrown around in SEO a lot. I see a lot of similarities: schema, information gain, E-E-A-T, entities, which are in some cases twelve-year-old SEO concepts that have resurfaced in AEO. And I often challenge people: just because Google holds a patent on information gain, does it own the concept? I don't think it's a very usable patent, because it would just lead to people adding more and more information, right?
How does an LLM know that the information is pertinent or accurate? Google, as we know, is content-agnostic. So for what you're describing, wouldn't an LLM need a database almost the same size as Google's, if it's going to hold all of that content from all of these companies? I don't see the difference. Sorry, it doesn't take the content. It doesn't have a database of all the content. It learns what it can from the content.
It incorporates it into the training data; well, sorry, the content is the training data, right? It learns what it can from the content and incorporates it into its knowledge graph, its understanding of the world and the relationships between all the things in it. Then it does its best to answer out of that knowledge. So it'll associate Stripe with "easy to use," "good for startups," all of these different things, and only when it doesn't have much certainty does it go looking for knowledge. It's not built to index content.
It's not built to surface content; it's not a directory. So I'm just trying to wrap my head around how that content is different from the content surfaced in an index like Google's. Maybe my understanding of LLMs is way too simplistic. The heuristics it learns in training (Gemini, for example, is trained on Reddit, whereas the other LLMs don't actually have access to Reddit for their training) are things like: how do I understand the vocabulary of humans?
How do I understand concepts? How do I understand geography, things like that? It learns patterns and then applies them to essentially everything that has to be retrieved. Obviously you can sit and have a conversation with ChatGPT about your feelings, maybe a fight with your best friend, or how to impress your boss, and of course you might trigger a prompt that then has to go and retrieve information. But I would have thought that for anything around SaaS or products or brands, the training for ChatGPT would include who Coca-Cola is, or Hoover, or NASA, because those are almost concepts of their own, right?
They're so big. But I'd imagine any non-Fortune-500 company, anything that's not in the encyclopedia, would have to be retrieved. I mean, do you know at what level of scale... It has a pretty broad knowledge, right? We're a startup, but it didn't take long before it could tell you about Demand Genius. You can still do it, and it has false associations, actually, which is deeply ironic and embarrassing. But it has a pretty broad knowledge. And I think you made a good example there of how humans understand stuff.
We build up all of this knowledge in a broadly similar way, and that's what LLMs are designed to do: not to index information but to accrue knowledge. But we can't go back and cite where we got it all from. We can when it's meaningful, when it's really interesting: "Oh, I read this really cool thing a couple of weeks ago." Then the citation becomes relevant and we might mention it. When we want to be really credible and really certain, we go and seek out the citation, because otherwise we're answering like, "Oh, I'm pretty sure, but don't quote me on this."
That's roughly what they're doing at the top of the funnel, crudely speaking, in the awareness stage. Further down, they go looking for the information, searching for it to improve their confidence. So I think that's a really good analogy for how they understand things. But for the vast majority, and this is what our research showed in 84% of the prompts we ran, they were talking from their ingrained understanding, and that's what's actually really hard to change.
And that's where I come back to: where's the impact to be had? Take Demand Genius. Is it from being visible on those bottom-of-funnel prompts, or is it from correcting things so that when the CFO who's about to sign off on buying Demand Genius goes in and asks "Tell me about Demand Genius RCM," it says back the right thing: fantastic value for money, great for CFOs like you, go for it? That's the more important thing, and in anything complex it's, I think, the most urgent thing for people to fix.
I feel like I've answered around the question there, so tell me if I haven't. Yeah. So what I'm trying to get to is the granularity of what's in the ingrained knowledge and what it has to go search for. Where does that split sit? Because, like I said, I encourage people to use Perplexity more than ChatGPT; I think it's a more honest agent about when it has to fan out a query. For example, I use Claude or Perplexity to do things like work out the pacing for my budget.
Let's say I'm doing my paid search and I want to set up a campaign, and it's the 23rd of April: how much have I spent per day, and how much am I on target to spend by the end of the quarter? While it has some basic understanding, it'll actually go off and write Python, and it'll actually look up how to do the equation to calculate it. Now, if I ask again later and just change the numbers, it doesn't have to do that anymore, because it's already worked it out.
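The pacing question here is straight-line arithmetic, the kind of thing a model writes throwaway Python for. A minimal sketch, with made-up spend figures and dates:

```python
from datetime import date

# Hypothetical Q2 campaign: all figures below are illustrative assumptions.
quarter_start = date(2025, 4, 1)
quarter_end = date(2025, 6, 30)
today = date(2025, 4, 23)
spend_to_date = 11_500.0

days_elapsed = (today - quarter_start).days + 1           # include today: 23 days
days_in_quarter = (quarter_end - quarter_start).days + 1  # 91 days in Q2

daily_rate = spend_to_date / days_elapsed                 # average daily spend
projected_spend = daily_rate * days_in_quarter            # straight-line projection

print(f"daily rate: {daily_rate:.2f}")                    # 500.00
print(f"projected quarter spend: {projected_spend:.2f}")  # 45500.00
```

Changing the numbers and rerunning is trivial once the formula exists, which matches the speaker's point that the model only has to work the method out once.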
But at what level does that extend to brands? Essentially every company out there has its own website. For all of this to be in the ingrained training, is there a certain set of websites they're trained on, or is it the whole web? You were saying that if you ask about a certain brand, it can rely on ingrained knowledge across the awareness, consideration, and conversion stages. That's still a lot of information from a lot of websites to remember, right? It seems like a huge amount of memory. Look at what an LLM has to learn in terms of its heuristics.
If Gemini is trained on Reddit, it's got information about people talking about going to college, what a college is, how a college works. Then you've got a list of all the colleges, then a list of all the courses in each college, which is different from a list of courses generally available in colleges. So is it the list of courses? Is it the list of all the colleges? I think there's 35,000 colleges in the US.
Then if you add, say, 580,000 small businesses, and then all the businesses in the UK and Europe, eventually it just sounds like an entire copy of the web. And think of the compute power, given what an LLM needs to remember, which is almost every sentence or pattern it's trained on: what is a credit card, how does an ATM work. Those functional questions about how to engage with knowledge, versus having the knowledge itself built in: that's the granularity I'm trying to work out. Because if I go to Perplexity and ask who the top companies in SaaS are, it seems to run a query fan-out. There's a really crucial difference there, though, because you said it yourself: Perplexity is not an LLM, it's a wrapper. ChatGPT is an LLM. You're talking to the model as if you're talking to a basically sentient being.
It's AI, and yes, this is incomprehensibly cool and insane technology. But there's a crucial difference. Perplexity is an agent for research, an intermediary, and that's why it's a lot more transparent: it does the research job for you. Perplexity is probably the only one you could actually call an evolution of search. You had Yahoo, then Google; Google's algorithm got steadily better; and then Perplexity launched and took it to another level, because it does the search on your behalf.
ChatGPT and Claude are not search engines. You're talking to them. They have learned to map all of this knowledge, basically all of human knowledge, by being trained on essentially the whole of the internet up to a certain cutoff. When ChatGPT first launched it was quite interesting, because it was trained up to 2023, so there were certain things it just didn't know that you'd really expect it to know. That's why there's a lot of interesting stuff going on in the media industry, where part of my background is: a huge problem there is how publishers protect their IP, because ChatGPT, Claude, all of them want to be ingesting the latest news so they stay up to date and keep building it onto their knowledge graphs.
But that's the publishers' IP, so there's a huge battle over it. The models are hungry for knowledge; they just ingest more and more of it and map it into their overall understanding. It's difficult, because it's two people who aren't data scientists talking about how AI works. Yeah. So alongside Perplexity I'm including Claude and Gemini as well; they're each trained on their own corpus. To me, the training is about how to use language. What is a university?
What is a university course? It's not a list of all the colleges and all of their courses, because that's a ton of information that's always changing. And when I say Perplexity seems to fan out, I don't mean just because it's a wrapper. I assume it's built on an LLM trained similarly to Gemini, because it has to understand the same language. But let's take Gemini and Claude again. If I ask them the same questions, like who are the top SaaS companies, or what is a SaaS company?
It probably doesn't have to fan out for "What is SaaS?", but I'd imagine that once you get past the very top, almost household-name brands, all of that has to be sent to RAG, has to be retrieved. That's not the case. In every new training window, a whole load of new things come into its understanding. And again, I'm going to come back to those three points.
The more data there is, the more it can pinpoint and build a really clear understanding of something. If you have lots of content that positions you perfectly, that is very clear and very consistent and backed up by your reputation, it has complete confidence, and (I want to be careful here) it appears it will go to retrieval a lot less in that case. The less data there is, the less certainty it has, and it appears to invoke retrieval a bit more. But it's also wrong a lot more.
So if you if you ask it about a very early startup, it'll often put it in like a a startup that launched 3 months ago, it'll be like don't know a lot about it. It seems to be in this category. Um and that's because it's kind of come into its awareness, but it's not built all of those associations. If you think about it as a as a kind of entity on a graph, there's lots of spokes coming off it. And it it doesn't have all of the spokes yet. it's not mapped all of the strengths and weaknesses, but kind of knows it exists and it's out of London and it does this.
Um, so that's what your job is to kind of train it and build the right associations, but but then how do you train it? I mean, how do you make sure that you're um because you you're saying things like knowledge graph and entities, but these are I mean, these are largely Google terms, right? Um, you know, knowledge graph is is Google's knowledge graph. Um, technology. Sure. So are you saying then that um chatbt has to like how does it decide then in the last training it didn't include a website or a startup. How does it now decide to include that startup?
How do you — I mean, I would assume that training, which I understand costs quite a lot of money — I think it costs a couple of billion to train an LLM — how do you get in front of that? How do you get yourself inserted into that corpus of training? Like, I know Gemini isn't trained on the whole web; it's trained on Reddit and then it uses Google's index. Um, I actually want to jump off this and switch into implementation — to jump off what David was saying.
So let's say you get engaged by a brand. Then what? Maybe you perform an audit — what does an audit look like, if you do one? And then what actions do you take to increase visibility in LLMs? And then maybe we could have an interesting discussion around that. Are we talking about Demand Genius here? Yeah. Well, yeah. Okay. So typically the first step of an engagement is kind of two things. The first is to understand what AI thinks about them.
So visibility is one component of that, right? And that's what most brands with a kind of SEO mindset will do: you agree on a good, representative list of prompts that you want to show up for — the tip of the iceberg. Obviously you want to be visible there. And are we visible? We also try to help them understand — and we have some really cool sentiment analysis models and things like that — what does the AI think of us? We ask it about us, and we ask it about our competitors, relative to lots of different criteria that might be applied: ease of integration, all of these other things.
What does it think we're good at? What does it think we're bad at? What are the trade-offs that it presents? We then help them map that to their different segments, the different stakeholders they have that might be in the buying committee. So what does a CFO value, what does a CMO value, a CTO, and so on? That allows us to paint a really clear picture: okay, a CTO is going to be told that you're good at this, good at this, bad at this, okay at this. That's going to be much more predictive across 22,500 prompt variations than taking a sample of ten and looking at whether you showed up for those. And I will say we don't have all of the answers here, right? This is brand new. But that's the best way we have developed to get an accurate representation of how AI is presenting you across all of the conversations, even when you're not directly cited, you're not linked, things like that.
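The stakeholder mapping described here — scoring how AI-perceived strengths line up with what a CFO, CMO, or CTO values — could be sketched roughly like this. This is a toy illustration: the attribute names, weights, and scores are invented, not Demand Genius's actual model.

```python
# Toy sketch: align AI-perceived brand attributes with what each
# buying-committee role values. All names and numbers are hypothetical.

perceived = {  # how the AI rates the brand on each criterion (0-1)
    "ease_of_integration": 0.9,
    "pricing_transparency": 0.4,
    "enterprise_support": 0.6,
}

stakeholder_weights = {  # what each role cares about (weights sum to 1)
    "CTO": {"ease_of_integration": 0.7, "enterprise_support": 0.3},
    "CFO": {"pricing_transparency": 0.8, "enterprise_support": 0.2},
}

def fit_score(role: str) -> float:
    """Weighted alignment between AI perception and a role's priorities."""
    weights = stakeholder_weights[role]
    return round(sum(perceived[k] * w for k, w in weights.items()), 3)

for role in stakeholder_weights:
    print(role, fit_score(role))  # CTO scores higher: perception fits its needs
```

The intuition is that a role-level fit score generalizes across many prompt phrasings better than checking a handful of literal prompts does.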
There's some other cool stuff we can do just to make the visibility tracking a little more sophisticated, like looking for custom entities. So don't just look for your brand. If we take Demand Genius as an example, look for "dark AI content" — terminology that you use that you want ingrained as part of the narrative — and then you can see, okay, is that being mentioned in AI conversations? Actually, that's likely to have much more of an impact, and as we get more data I really want to understand the correlation between that and the overall revenue you get from AEO.
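A minimal sketch of that custom-entity tracking idea — counting how often a brand and its own terminology show up in a batch of AI answers. The responses and entity names below are invented examples, not real model output:

```python
# Minimal sketch of custom-entity visibility tracking: given a batch of
# AI answers, measure how often the brand and its own terminology
# ("custom entities") are mentioned. All strings are illustrative.

responses = [
    "Demand Genius focuses on what it calls dark AI content.",
    "Several platforms track AI visibility for brands.",
    "The idea of dark AI content is gaining traction.",
]

entities = ["demand genius", "dark ai content"]

def mention_rates(answers: list[str], terms: list[str]) -> dict[str, float]:
    """Share of answers that mention each tracked entity (case-insensitive)."""
    rates = {}
    for term in terms:
        hits = sum(1 for a in answers if term in a.lower())
        rates[term] = hits / len(answers)
    return rates

print(mention_rates(responses, entities))
```

In practice you would feed in logged responses from many prompt variations, but the metric itself is just a mention rate per tracked term.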
But I don't have data on that yet. So that's one thing. The other thing is on the content side. Yeah, we use AI agents to analyze your content across quality dimensions — information gain, currency, consistency, clarity, all of those things I talked about — and do really deep qualitative analysis at scale. That allows you to quantify all of those things as measurable KPIs, so your KPIs don't just have to be around traffic and things like that. You can actually use AI to do subjective analysis at scale, score pieces of content, and say, "Okay, across your whole website, your information gain score is 1.4," and set a KPI that you're going to raise that to 2.3 over the course of the next six months.
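The KPI idea — rolling per-page scores into a site-level number like "information gain: 1.4, target 2.3" — reduces to simple aggregation once the AI agents have scored each page. A sketch, with hypothetical page names and the AI scoring stubbed out as pre-computed numbers:

```python
# Sketch of rolling per-page quality scores into a site-level KPI.
# The scoring itself (done by AI agents in the conversation) is stubbed
# out as pre-computed numbers; page paths are hypothetical.

page_scores = {
    "/blog/pricing-benchmarks": 2.0,  # original research scores higher
    "/blog/what-is-billing": 1.0,     # summary content scores lower
    "/blog/migration-guide": 1.2,
}

def site_kpi(scores: dict[str, float]) -> float:
    """Average information-gain score across the site, rounded to 1 dp."""
    return round(sum(scores.values()) / len(scores), 1)

current = site_kpi(page_scores)
target = 2.3
print(f"information gain: {current} (target {target})")  # 1.4 (target 2.3)
```

The same aggregation works for any of the other qualitative dimensions (consistency, clarity) once each page has a score.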
You can do the same with consistency. So how consistent is all of your old content against your new product descriptions since you pivoted, things like that? Again, you want to bring those two things into closer alignment. So that gives you these qualitative things as KPIs that you can track and use to understand where you are. That's the getting-to-grips-with-where-you-are part. And then from there it becomes a lot more about producing content, getting reviews, all of the same kind of stuff.
And there are only so many ways to do any of this, but crucially not with the intent of hitting particular keywords — generally, with some exceptions where there are key terms that you really want to signal you're a good fit for. It's more about: how do we influence our overall corpus to correct this perspective, and so on. So when you look at the information — talk us through what you mean by information gain. I'd love to hear more about that. Yeah. And I think maybe there's a slightly different definition, because when you say Google has — did you say patented, or they have a patent on something called information gain?
Yeah. Okay. I should stop treading on their toes then; I wasn't aware of that. We use it just as a descriptive term. I define information gain as net new knowledge that we didn't have before, right? So rather than summarizing knowledge, you produce net new understanding. And that seems to be really predictive of whether you're cited by AI, and also just whether you're valuable to humans. And at the end of the day, the goal of this is to influence a human at the end of it.
So that's where we score them on those three levels that I talked about earlier, based on the quality and depth of the new knowledge that their content is creating. And you want to do that ideally in every piece of content, but certainly across the site. And that seems to help make you citation-worthy. So are you saying then that you essentially become — you're sort of like a chapter explaining a topic to an LLM, kind of thing? Yeah, I guess. And this is where — and again, you're the SEO expert in the room — but this is where I think there's a lot of crossover with SEO, right?
You're trying to build domain authority and demonstrate clearly that you really understand the topic. But I guess the difference is that you don't do that by summarizing knowledge really well; you do it by moving knowledge forward. And I think that's where approaching this as an SEO problem is unhelpful, because it prevents you from taking that approach. You look for the visibility hacks more than building your team in such a way that you can actually move knowledge forward and then communicate it consistently.
I think of it a lot more as either a sales-enablement-type problem — you're trying to teach and correct and put the right story out there — or a narrative problem. And actually there's a lot of PR in it, not just as a discipline: if you think of AI as a kind of active participant in the market, you're trying to control your reputation and how that participant speaks about you. And it is very much that — it's a kind of living entity. How would you test what an LLM thinks of your brand?
Is that something that people can do at home? Yeah — at a very basic level, just ask it. And that's what always baffles me a little bit. You get a lot of people that are like, "We've been trying to figure out this AI," and they haven't gone through and just asked it about themselves in a way that tries to simulate what a customer would be asking, from day one of starting to think about their problem space all the way through to day 500, right?
Like, "How do I take payments?" — all of these things that lead you up to buying a billing system. And you can see how it presents the narrative, how it presents you. The benefit of using a platform like Demand Genius is that we run pretty sophisticated sentiment analysis on the responses. AI is very — I joke that it's very American and positive. We apply some British skepticism, which is where you can use the sentiment analysis to say, okay, generally speaking, if you ask it about you, it says nice things.
It kind of says nice stuff to your face. But you can look at the language that it uses — just how nice those things are, how positive it is — to get a sense of how strong the true perception is. Yeah: ask it about you, and there are different levels of analysis you can then apply to the response, which will give you different degrees of insight. That's the short answer. And how do you help clients understand how many prompts are happening, or what prompts are happening — getting information on how people are asking about them? Is that possible? We don't. I know there are businesses like Profound and so on that have tried to build up big databases by trying to understand, okay, these are the queries — and I think there's absolutely a place for that, right?
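That "discount the generic niceness" idea can be sketched as a crude lexicon score, where strong, specific praise counts fully and generic praise is heavily discounted. Both word lists and weights here are invented for illustration; real sentiment models are far more sophisticated than this:

```python
# Crude sketch of "British skepticism" applied to AI praise: strong,
# specific language counts fully; generic niceness is discounted.
# Lexicons and weights are invented for illustration only.

STRONG = {"best-in-class", "category-leading", "exceptional"}
GENERIC = {"good", "nice", "solid", "popular"}

def praise_strength(answer: str) -> float:
    """Score praise in an answer, discounting generic positive words."""
    words = [w.strip(".,!?") for w in answer.lower().split()]
    strong = sum(1 for w in words if w in STRONG)
    generic = sum(1 for w in words if w in GENERIC)
    return strong * 1.0 + generic * 0.25  # generic praise heavily discounted

print(praise_strength("Stripe is a good, popular option."))         # 0.5
print(praise_strength("Stripe is exceptional and category-leading."))  # 2.0
```

Comparing scores like these across many responses gives a rough read on whether the model's positivity is substantive or just reflexive politeness.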
It's the kind of SERP-style data on what people are querying in LLMs. That's helpful. It requires a very big database, and it's a different problem to the one that we solve — that's kind of "what are people asking" versus "how does it present you." But that's not part of what we do, although it is obviously useful, right? And a part of it is that you want to try to get into the mind of your customer. I think a lot of this comes back to marketing fundamentals, right?
Understand who you are and who you're for, and really understand your customer and their journey — the questions they have at each point and what they turn to at each point. And that's what that kind of SERP-style data — there's probably a better term for it that I don't know — is very useful for, because it gives you hints into that. But there are a lot of other things that give you hints into that as well, like sales call data, if you have it and if that fits your motion. Fantastic.
That's an absolute gold mine, and you can get a lot from it. So how do you know if you're increasing your client's visibility — your brand's visibility — in LLMs? Partly by tracking. I think there are two things, right? There's visibility and there's influence, and they're slightly different. For visibility, the only way that I know is the approach that most people take, and that is part of what we do: you run the prompts and you analyze them, and you understand that they are limited, right? You're not capturing the full picture, but it's the best that we can do.
The other thing we can do there is map what it thinks you're good at to what different people want, and we think that is a lot more predictive of your visibility across all of these different interactions and conversations. Now, it's very difficult to prove that, because they are invisible, dark conversations, right? But it seems to correlate pretty well with performance. So those are the two things. And the influence side is just: can you start to correct the narrative, right?
Can you start to influence what it thinks you're good at, what it thinks you're bad at? Your goal is obviously to make that align with your positioning. If you're Stripe and you're the payments platform for early-stage companies, then you need it to think that you're easy to integrate; you need it to think that you're going to provide a frictionless checkout. If you're Zuora — or, I don't know, we're going to get into the depths of payment systems here — and you're much more enterprise-focused, then the trade-offs are different. So you're trying to correct the way it understands the trade-offs, so that when it presents you to your buyer — and that's where a lot of it is about being specific and realistic —
it's going to present you in the way that you want, and it's going to come with a kind of thumbs up: "You're a big company, a lot of complexity, a lot of different products — Zuora is actually better for you than Stripe," or whatever the comparison would be. And what would you say to companies thinking about AI and LLM strategies? How fast would you say this market is growing? How quickly do you think they have to start thinking about their strategy? I mean, it's growing insanely fast.
And it's one of the reasons why it's so fun to work in, right? Because you're constantly having conversations like this where we kind of learn together about it. I tend to say there are certain things you need to put into place now, which is to build the fundamentals. So again, good marketing: if you can't clearly state and tell me who you are, who you're for, what you're good at and what you're bad at, what the trade-offs are, what you want out there, then it's going to be very hard to do any of this.
So that's the first bit: you need to have that documented for people, and actually have it all in markdown so that you can feed it into AI. It will make everything run smoother. I think that's an important start. And then, in terms of the AEO-specific problem, the biggest thing I encourage people to do is treat it as a discipline, not a project. I think it's very tempting — everyone wants everything to be a project, right? Three months, cool, we've done AEO. It is a discipline. You're going to have to commit experimentation time to it.
You're going to have to basically carve out a new role. That's one of the biggest things we bump up against: it's really, really hard at the moment for a CMO to carve out new headcount, because the narrative out there is that content's a commodity — get rid of all your people, you'll get AI to do it instead. And we're like, no, you actually need a person who does this. And think about what the skill set is. Part of it is some of the SEO skill set, but a lot of it goes into other stuff.
So those are the two strategic bits of advice I would give someone, as well as: think about why someone should cite you. If you don't have that in your business — you don't have content IP, you don't have your own perspective — then nothing else is really going to be that effective in the long run. So what's markdown, and how is that applicable? Sorry — markdown is basically a very simple document format, like a .txt document, which contains no fluff, just text, and it's really easy for machines to read. It's like when people add structured schemas to their website to try to make it easier for AI to parse — which I've not seen a lot of evidence of working — but it's basically doing that, and it allows you to get a lot more information into a context window.
So one of the biggest limiting factors with producing good content with AI is context, right? You can only feed it so much. So if you have all of your documents compressed into markdown, you just leave more context available. This is seeping a little bit away from AEO into just general "how to get the most out of AI in marketing," but if you've got all of this really clearly defined — and I'll come back to my information gain framework —
we've done a lot of experimentation on our side to see, okay, can we get AI to produce information gain? What we found is that if you're taking a very simplistic approach — prompt, spit out content — no, you can't. And that's by definition: AI is trained on all of knowledge, and so it can't reproduce originality. That's what it can't do. But if you feed in a perspective really well, you can get it to produce level-one information gain, interpretive gain: you can get it to apply that perspective to that knowledge and spit out something pretty good.
It's going to have the AI tells, so we don't recommend just going and publishing that, but the meat of it will be good. It still has value and can inform the AI as to that perspective, which it kind of incorporates into what it thinks. If you can feed in really high-quality research as well, then it gets really good, and you can produce level two and three. And so when we're thinking about how you can structure your organization to make the most out of AI, getting as much of your perspective, your ICP, who you're for and who you're not for down into that markdown format can be really useful.
And where would you share that? Would that just be a page on your website? For me, that's an internal tool. It's for our internal operations — it just lives as a file, and then Claude Code can go and pick it up. But it's also something, if you're just using ChatGPT in the UI, that you can paste in — you can add it as context, and it's just a very efficient way. So it's not a game changer, but it's an efficient way to get context in, rather than giving it a PDF with loads of design and fluffy language that AI doesn't need.
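The context-efficiency point is easy to demonstrate with the rough tokens ≈ characters / 4 rule of thumb (a common estimate for English text, not an exact tokenizer): the same positioning statement costs far fewer estimated tokens as plain markdown than wrapped in design-heavy HTML. "Acme" and both snippets are invented examples:

```python
# Compare a design-heavy HTML snippet with the same information as plain
# markdown, using a crude chars/4 token estimate. Content is invented.

html = (
    '<div class="hero" style="font-size:18px;color:#333">'
    "<p><b>Acme</b> is the billing platform for early-stage SaaS.</p></div>"
)
markdown = "**Acme** is the billing platform for early-stage SaaS."

def rough_tokens(text: str) -> int:
    """Rule-of-thumb token estimate: roughly four characters per token."""
    return max(1, len(text) // 4)

print(rough_tokens(html), rough_tokens(markdown))  # markdown is much cheaper
```

The savings compound across every document you feed into a context window, which is the speaker's point about compressing internal docs into markdown.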
That was actually one of my questions for you, because I think you live more in that world. I've always been very skeptical of the idea of a structured schema for AI. To me, AI is designed to turn unstructured content — words and language — into data. And I've not seen any evidence that creating an AI version of your website actually works. I don't know if you have a perspective on…
Transcript truncated. Watch the full video for the complete content.