How to Improve LLM Visibility | Ryan Law
Host welcomes the audience and introduces the focus on improving visibility to LLMs and leveraging both on-page and off-page factors.
Boost your brand’s visibility in LLMs by building offsite mentions, optimizing for retrieval augmented generation (RAG), and creating the authoritative content LLMs prefer to cite, not by hacky on-page tricks.
Summary
Ryan Law of Ahrefs lays out a practical, first-principles approach to getting your brand mentioned by large language models. He argues that models like ChatGPT, Gemini, and Claude mainly rely on their training data and retrieval augmented generation (RAG) to surface information. The core takeaway is to focus on offsite mentions and on-page content that LLMs prefer to cite, such as research, video, and product-comparison content, while avoiding spammy tactics. He describes how LLMs pull from a mix of sources—including user-generated sites, review platforms, YouTube, and Wikipedia—and how you can influence this by ensuring your brand appears in high-signal domains. He also shares concrete tactics: monitor hallucinated URLs, optimize key page types, close entity gaps, and anticipate fanout queries that LLMs might generate to answer complex prompts. Throughout, Law emphasizes that best practices can shift with model updates, so a skeptical, experimentation-friendly mindset is essential. The talk culminates in pragmatic advice for brands of all sizes to stay visible in the AI era through offsite visibility and high-quality, verifiable content.
Key Takeaways
- Branded offsite mentions on high-signal domains (Reddit, G2, YouTube, Forbes) correlate most with AI overview appearances, more than traditional on-site SEO metrics.
- Retrieval augmented generation (RAG) is central to LLMs; improving brand visibility in live indexes like Bing and Google search increases LLM citations.
- Create and maintain authoritative content—especially research reports and direct product comparisons—that LLMs frequently cite in outputs.
- Monitor and fix hallucinated URLs in analytics (e.g., the non-existent ahrefs.com/keywords) by redirecting or blocking those hits to preserve trust and attribution.
- BLUF-style (bottom line up front), declarative, concise writing helps LLMs pull accurate facts; simplify sentence structure and maximize entity richness to improve contextual understanding.
- Fanout queries explain why long prompts yield related but shorter subqueries; optimize content for likely fanout terms related to your products or topics.
- Video content and YouTube transcripts are valuable for LLM visibility; YouTube presence significantly increases brand mentions across AI outputs.
Who Is This For?
Content marketers, SEOs, and growth teams at B2B brands, charities, and small-to-mid sized companies who want to stay visible in AI-driven search and AI-assisted content. It’s especially valuable for teams looking to shift toward offsite authority and high-quality, research-backed content rather than solely chasing traditional rankings.
Notable Quotes
"AI is changing the fabric of how we do SEO and marketing."
—Ryan frames the broader shift toward AI-driven visibility.
"The single most important thing you can do to improve your visibility in LLM outputs is building offsite mentions."
—Core takeaway about offsite signals.
"Fan out queries are the shorter, related subtopics that LLMs search for to answer a long prompt."
—Explains why content should anticipate subtopics.
"If you can get your brand mentioned on YouTube, Reddit, and other cited domains, you dramatically increase the likelihood of LLMs mentioning you."
—Practical sites to target for offsite visibility.
"BLUF—opening with your biggest ideas and using declarative sentences—helps LLMs understand and cite your content."
—Writing style guidance for LLM-friendly content.
Questions This Video Answers
- How can I measure offsite mentions impact on AI overviews and LLM citations?
- Which domains matter most for LLM visibility and why Reddit, YouTube, and review sites?
- Should small brands gate content to block AI crawlers, or is offsite presence more effective for AI visibility?
- What is fanout query optimization and how can I create content to rank for those terms?
- Is llms.txt worth implementing in 2024-2025, and what are its limitations?
LLM visibility · Retrieval Augmented Generation (RAG) · Offsite SEO for AI · Branded mentions · Fanout queries · Entity gaps and knowledge graphs · Hallucinated URLs · YouTube and video SEO for LLMs · llms.txt (conceptual) · Content formats: research, product comparisons, blog guides
Full Transcript
Hello everyone. Welcome to today's webinar. We have a wonderfully packed audience today. Everyone can shout out where you guys are logging in from. Awesome. Yeah. Wow. What a wonderful spread of places that everyone's coming in from. Fantastic. So, we're just going to get started here. We'll wait a couple minutes as everyone joins in. This is actually probably the highest attendance we've gotten for webinars, so we're so thankful to everyone for joining us today from wherever you are. Interested to know about a very hot topic: how to be more visible to LLMs?
Be mentioned more by LLMs. There's too many people. I can't take the pressure. I have to bail immediately. No, no. All good. Yeah. All good. Okay, someone's already raised some hands. So, um I'm going to go ahead and start here. Uh and for people who raise their hands, we have a Q&A section. So, um if you have any questions that are urgent or seem right to answer in the middle of the presentation, we'll go ahead and answer those. But otherwise, we'll hold all questions to the end of the presentation.
Um, so welcome everyone to today's Ahrefs webinar. I'm your host, Constance, uh, product marketer here at Ahrefs, and today our webinar is about how to improve LLM visibility. Ryan is here to help make sense of this new era of brand discovery and will cover what you should be focusing your time and energy on if you want to come out ahead and be more visible to LLMs. So Ryan's our director of content marketing here at Ahrefs, and he and the blog team have been hard at work releasing study after study, sometimes hot takes as well, and helpful guides about how to orientate ourselves in this new marketing landscape.
Is search more important? Is AI generative search important, and how do you balance both? He's also recently been on the Ahrefs podcast showcasing the way that he uses AI to create high quality content. Um if you have any questions for Ryan, we do have, as mentioned, the Q&A section that you can put your questions in. Uh and most of them we will try to answer at the end of the presentation. Uh if there's anything I think is urgent, I will keep a lookout and we'll quickly answer that. But mainly all the questions will be answered at the end.
Okay. Uh without further ado, I'll hand the mic over to Ryan whenever you're ready. Yeah, lovely to see everyone. Thanks for coming. Uh obviously, yeah, AI is the well, it's hard to escape the topic of AI at the moment. It's changing the fabric of how we do SEO and marketing. Uh so one of the things we're doing on the blog is just trying to understand this, right? Put some practical advice together for how you can get your brands to be visible within LLM outputs. uh and that is exactly the point of the presentation I put together today.
How to improve LLM visibility. So let's have a look. So some caveats before we get started. Very important. Uh first and foremost, I am not a marketer. Uh no, I am a marketer. That's the important thing. I'm not a data scientist. Uh a lot of these topics kind of stray into the slightly technical realm of LLMs, model optimization, all this kind of thing. Uh although we have a data science team at Ahrefs that I get to bug with lots of annoying questions, fundamentally, I am still a marketer. So just keep that in mind. Uh I also do not have all the answers to LLM optimization.
Far from it. Uh I think nobody has all the answers, and you should probably be quite skeptical of people that do claim to have all the answers and present things with lots and lots of certainty. Particularly because so much of this stuff is changing, and changing very very quickly at the moment. Things that seem like best practices now can very quickly become not best practices in a couple of weeks, a couple of months, after a model update, all those kinds of things. But with those caveats out of the way, I still think there is a lot of very practical stuff we can do to improve how your brand, how your company, appears in LLM outputs, be it ChatGPT or AI Overviews or Copilot or Gemini, any of these models.
And the way I've been thinking about this for Ahrefs and for some of the customers we work with is a kind of first principles approach to LLM optimization. I think there are fundamentally three distinct ways we can improve how often our brands get mentioned in AI conversations, and it's worth considering all three of these separately. The first of which is increasing your visibility in training data. So obviously we know that these base models are all trained on vast corpora of internet data: the Common Crawl dataset, Wikipedia, Google Books, and just scraping the web.
If you can increase the frequency with which your brand appears within that training data, and if you can ensure it appears in useful contexts as well, you can theoretically improve your visibility in all outputs generally. And we know that these LLMs do periodically update their knowledge, so there's a chance every 6 months or so for you to improve your visibility within that data set. Uh GPT-5's knowledge cutoff was September 2024; Gemini and Claude Opus, January 2025. So they update these semi-regularly. Now the issue with that is it's something we don't have a lot of control over.
Uh we can't just change the data sources that these companies scrape and collect. We have limited ability to actually influence this. What we do have much more control over is this second idea, increasing our visibility in RAG. So, one of the ways LLMs overcome their fixed knowledge uh is they go out into the world and they actually find new sources of information and they synthesize that into their responses to users through a process known as retrieval augmented generation. Uh, and this is actually fantastic news for anyone that's ever done SEO or content marketing because this is a very familiar process.
This is basically just doing SEO. Uh all of these different LLMs use different search indexes to go out and retrieve information and bring it into their conversations. So, uh Bing is used by Copilot; they're both Microsoft products. Google Search is used by Gemini, AI Overviews, and AI Mode because they are Google products. Uh Claude uses the Brave Search index. ChatGPT and Meta now seem to use a hybrid of the Bing and Google search indices. And Perplexity has apparently built their own in-house index. But effectively, if you can make your brand and your content more visible in these traditional search indexes for the topics you care about, you will become more visible in uh LLM outputs generally, which is fantastic.
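The retrieval step described here is easy to picture in code. Below is a minimal Python sketch of the RAG shape, assuming a toy keyword-overlap retriever and an invented three-document index; a real assistant would query Bing, Google, or Brave and pass the retrieved results to the model rather than to a string template.

```python
# Toy sketch of retrieval augmented generation (RAG). The index contents
# and the keyword-overlap scoring are illustrative assumptions; the point
# is the pipeline shape: retrieve from an index, then ground the prompt
# in what was retrieved.

def retrieve(query, index, k=2):
    """Rank indexed documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(index, key=lambda doc: -len(words & set(doc.lower().split())))
    return scored[:k]

def build_grounded_prompt(query, index):
    """Stuff the top retrieved documents into the prompt as context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, index))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

index = [
    "Ahrefs Brand Radar tracks brand mentions in AI assistants.",
    "Wild camping means sleeping outdoors away from campsites.",
    "Fresh content is cited more often by AI assistants.",
]
prompt = build_grounded_prompt("what is wild camping", index)
```

The practical upshot matches the talk: whatever ranks well in the underlying index is what gets stuffed into the context window, so traditional search visibility directly feeds LLM visibility.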
And the third one which is worth covering I think just to understand the total like information horizon here. You can also do some black hat stuff, right? Uh this is what data scientists call adversarial examples. You can basically try and trick these models into recommending your brand in situations when they would not otherwise do it. And there are plenty of creative ways people are doing this. There's something called strategic text sequences, which is a kind of uh prompt injection attack. You can do entity stuffing, which is basically keyword stuffing. Uh filling all of your pages and uh websites with relevant words and topics that you'd like to appear for, even if it doesn't make sense to readers or users.
You can lie. You can make stuff up. There is no verification mechanism within LLM outputs. They generally take text at face value and you can spam. You can create thousands of pages often AI generated on any topic you like in an attempt to try and appear uh within those topics. Now, this is very short-sighted. This doesn't work particularly well and I think will not work very well in the future. But if you'd like to learn more about this and the limitations, we have a great article about that on the blog. We're going to focus on these first two today.
So, if I were to leave you with one single core takeaway from this, uh, this is probably the single most important thing you can do to improve your visibility in LLM outputs: building offsite mentions. I think we are entering the era of off-page SEO. Historically, we had quite a lot of control over how visible our website was uh in traditional search through just the pages we put on our website. They were very important. They are much less important when it comes to LLMs. Uh, one of the ways large language models understand your brand, your entities, what your company is about, and when it should recommend you and talk about you in outputs, is by understanding how many other places in its data set actually mention you, and mention you in the correct context, uh alongside different topics, different product names, that kind of thing.
If you can get your brand and your products mentioned many, many times in many places in relevant contexts, you dramatically increase the likelihood that the LLM will recommend you on those topics. And unlike traditional SEO, these mentions don't need to be backlinks. LLMs use the text on the page as their way of developing an understanding of your brand and how it functions. Uh there's a fantastic article by Gianluca Fiorelli that I really recommend that explains the technical detail of how this works. So that's the theory. Uh one of the things we try and do at Ahrefs is back up all of the theory with some actual real world research.
So we looked at the factors that correlate with appearances in AI overviews to see which ones had the greatest predictive power. And of all the things we studied from the number of pages on your website to the uh DR the domain rating of your website to the number of backlinks you had the thing that uh correlated most strongly was by far branded web mentions. So basically the amount of times your brand was mentioned on the internet in different relevant places that was a huge and very very strong correlation with the amount of times brands were referenced in AI overview outputs generally.
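To make that kind of study concrete, here is a small Python sketch of a correlation check between branded mentions and AI overview appearances. The per-brand numbers are made up for illustration and are not Ahrefs' actual dataset.

```python
# Sketch of a correlation analysis between branded web mentions and
# AI overview appearances. All data below is hypothetical.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-brand data: (branded mentions, AI overview appearances)
brands = [(120, 14), (45, 4), (300, 31), (80, 9), (15, 1), (210, 20)]
mentions = [m for m, _ in brands]
appearances = [a for _, a in brands]

r = pearson(mentions, appearances)  # close to 1.0 for this toy data
```

A real replication would also want to control for confounders (bigger brands have more of everything), which is why the talk compares mentions against other factors like domain rating and backlink counts.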
So the more times you can get yourself mentioned in relevant contexts in other places on the internet, the greater the likelihood you will be visible in LLM outputs. Now, something very practical you can do to help with that: uh you can try and get your brand mentioned on some of the most commonly cited domains on the internet. Different AI assistants do have slightly unique preferences in terms of the uh types of websites they tend to cite and include in their answers, but there are some very strong commonalities between all of them. User-generated content is an absolutely huge source of citations, particularly uh Reddit, Quora, and other forum sites on the internet.
If you can get your brand mentioned on those, you stand a very good chance at being included uh in AI conversations. Same thing goes for third party review sites. Although G2 has very famously, very recently lost a lot of its organic traffic, it is still one of the most highly cited domains within uh a bunch of different AI assistants like ChatGPT. Similarly for CNET and for Forbes, and YouTube as well. Uh YouTube video content gets cited disproportionately often relative to its presence on the internet. Uh so your ability to be mentioned and talked about on YouTube has a big impact on your visibility in LLMs. Uh interesting aside as well: Wikipedia is the inverse of this.
It actually does not get cited very often relative to how popular it is, probably because these models are actually trained on Wikipedia content. They don't feel the need to go out and find citations for topics that they already understand well because of Wikipedia. So we actually have something in Brand Radar, which is our AI visibility tool, that helps you find this very very quickly. Uh so you can use the top cited domains report, and for any topic that you are interested in you can immediately get a list of the websites that are most frequently mentioned.
These are all websites that are great candidates for outreach for sponsorship for getting to know people who work there, for guest posting, any creative way you can to get your brand talked about or mentioned on those websites. Uh so I do something called wild camping where I go and sleep out under the stars periodically to escape my work and my family. Uh and if we look at that topic within the cited domains report these are the websites that most often appear in AI overviews and you can see the same report for chat GPT for Gemini for Perplexity.
If I have a website in this niche, I want to go and get my brand talked about on YouTube, on Reddit. Uh maybe I want to guest post on a website called Hipcamp. Maybe I want to have an uh affiliate link on the Camping and Caravanning Club, because these are the websites that most commonly get mentioned in LLM outputs. This is a hugely powerful thing to do. So something else we can do: we can optimize our LLM preferred content. So LLMs definitely have their own favorite types of pages that they like to send referral traffic to.
Uh and some page types are just plain more likely to be cited by LLMs. The theory goes, then, that if we spend extra time and energy on those pages, uh updating them, improving them, making sure they're consistent with our brand messaging, maybe even creating more of these types of pages, we stand a much better chance of uh getting referral traffic in particular from large language models. Important caveat: obviously just looking at attributable traffic from LLMs is not the whole picture. The way most people use uh ChatGPT, for example, they're not clicking a bunch of links.
They are getting most of the information they need directly within the chat. But this is something we can see and we can optimize for. So I think it's definitely worth paying attention to this, especially because uh as our research has shown, AI traffic is growing very quickly at the moment, albeit from a small number to a slightly less small number. A bit of research we did: we actually analyzed the traffic received by 35,000 websites, particularly from ChatGPT, from Gemini, Perplexity, and other AI assistants like that. And we looked at the most common n-grams found in the page URLs, basically the word fragments.
Uh, and you can kind of look at and discover some very obvious patterns in the types of pages that LLMs like to send traffic to. Uh blog and guide were very common n-grams in many of these URLs. They are sending a lot of traffic to blog content, which is good. We already know about the value of creating educational content. Um interestingly as well, best, top, and versus were extremely common. Uh comparison content on websites gets cited and referenced an awful lot. Uh this is something I've been thinking about. We need to have more direct product comparison content on our website to increase the chances that we will get cited and referenced within LLM conversations.
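That n-gram analysis is straightforward to reproduce on your own referral data. Here's a small Python sketch that splits URL paths into word fragments and counts the most common ones; the URLs below are invented for illustration.

```python
# Sketch of the URL n-gram analysis: count word fragments in the paths
# of pages receiving AI referral traffic. Example URLs are made up.
from collections import Counter
import re

def url_ngrams(urls):
    """Count lowercase word fragments across URL paths."""
    counts = Counter()
    for url in urls:
        path = url.split("://", 1)[-1].split("/", 1)[-1]  # drop scheme + domain
        counts.update(re.findall(r"[a-z]+", path.lower()))
    return counts

urls = [
    "https://example.com/blog/best-seo-tools",
    "https://example.com/blog/top-keyword-research-guide",
    "https://example.com/compare/ahrefs-vs-semrush",
]
top = url_ngrams(urls).most_common(3)  # "blog" leads with 2 hits
```

Run against a real export of LLM-referred landing pages, fragments like "blog", "guide", "best", "top", and "vs" would surface exactly the patterns described above.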
And LLMs love core site pages as well. Things like our contact us page, our product page, uh about us page, all these kinds of things. And something very interesting: if you compare this distribution of traffic from LLM sources to the distribution of pages that receive traffic from traditional search, you very quickly see some big differences relative to traditional organic search. LLMs vastly prefer your core website pages. So they send much more traffic to the kind of core infrastructure pages of your website, much more than uh is found through organic search. Uh documents as well.
If you have any dusty uh PDFs indexed on your website, they get no search traffic whatsoever, but there's actually quite a high likelihood that they will be found and referenced and included in AI conversations. Uh LLMs love research as well. One of the common reasons LLMs go out and cite content and find pages to cite is they want evidence and research to back up their claims. Uh this is something we do a lot. We create lots of original research and we get cited very often for doing that. And video content as well. Uh, LLMs still routinely cite and send traffic to video content, probably because they can't yet create video content themselves.
And inversely, there are some pages which are great for organic search traffic and terrible for LLM traffic. Uh, interactive tools, they get about half the traffic you would expect uh from LLMs because they're just not useful within a chat interface. And the same thing goes for listing collections as well, like uh e-commerce navigation, breadcrumb pages, all these things that are very useful for helping searchers on Google find your products. Not very useful in LLMs, and they do not get much traffic as a result. Now you can actually try and optimize these pages. Uh, I've not included much detail here because I think a lot of this is very speculative, and I would be wary of people telling you that you can do very concrete things to your pages and guarantee that they will appear more often.
It does not work like that. But from what we have seen, these are some things which may well be helpful and probably don't have a downside. Probably the most concrete of these is updating your content, making it fresh. Uh, we did a big research study actually. We looked at uh 17 million citations and we found that ChatGPT, Copilot, Gemini, and Perplexity all prefer to cite content that is much newer than the content that is traditionally found in search results. I think the takeaway here is not that you should go about updating every page, you know, every few days or anything like that.
I think this is more a reflection of how retrieval augmented generation works. Uh RAG is more likely to be triggered for topics that the LLMs don't know anything about, and that is more likely to be content that is uh not included in their data set. So things that are fresher, more modern, more novel and more interesting. Uh but if you have pages that fit that criteria, it does seem that increasing their uh publishing frequency, their updating frequency, may help improve performance. I love this idea of BLUF as well: bottom line up front. This is something that's great for readers and is great for LLMs trying to understand your content.
Uh generally, open up with your biggest, most important ideas as quickly and as early in the uh document as you possibly can. Similarly, this idea about declarative sentences. Uh the way RAG works, LLMs are trying to find evidence. They're trying to find proof for the things they say, and they seem to prefer things that sound confident. Uh so basically try and write in very confident, declarative, opinionated sentences; that is more likely to be uh pulled into the LLM output than something that sounds a bit wishy-washy or a bit uncertain. Um also this idea of few subject-predicate hops. That's a very fancy, pretentious way of saying keep your sentence structure as simple as you can.
Don't uh start a paragraph about an object but then not mention that object until right at the end of the paragraph. It's hard for a reader to understand what you're writing about, and it's hard for an LLM to understand as well. Entity richness as well. This was an idea that I learned from uh Bernard Huang, actually, founder of Clearscope, at Ahrefs Evolve, one of the conferences we ran uh last year. Uh LLMs seem to prefer content that is entity rich. They look for text that includes lots of entities uh packed together into different paragraphs.
So mention different related products and topics as clearly and as often as you can in your writing. And lastly, this idea of global document context. If you have a very long document, it can be a good idea to periodically remind the reader and the AI what the document is about. Provide some base context about what the uh article, the PDF, the resource is actually about. That helps both of those parties understand what the page is actually about. Now, a much more straightforward and simple tip here: monitor your hallucinated URLs. So, when we looked at the traffic uh Ahrefs receives from all sources, and from LLMs in particular, we found that 3.6% of all the visits from LLMs went to URLs that did not exist.
Uh, a really common one was ahrefs.com/keywords. This is the type of page that you would maybe expect Ahrefs to have. We write about keywords a lot, and it's certainly the type of page that LLMs expect Ahrefs to have, but it does not exist. You can actually check and find these hallucinated URLs within your analytics and set up redirects for any of those that have repeated visits from LLMs. Um and there's a very straightforward process for this, actually. Um whatever website analytics tool you use, be it Google Analytics or our own Web Analytics tool, you can filter to look at just the visits from AI assistants.
If you use GA4, you'll need to use regex to filter traffic from sources like ChatGPT or Copilot. Uh Web Analytics has a dedicated built-in AI search filter. From there, just use ChatGPT, your LLM of choice, and ask for an Apps Script to return the HTTP status of any given URL uh in a Google Sheet. You can save that as an Apps Script, and you can then call that on your analytics data. And what you want to do at that point is filter to find any page that has a 404 status, which means it's not found.
It doesn't exist at the time of uh checking, and receives, you know, 10 visitors or something more than that. Generally, all of these pages, certainly from when I did this on the Ahrefs blog, are going to be hallucinated. These are all made-up URLs, and there were hundreds and hundreds of visits we were getting from people to these non-existent URLs. You could either create a 404 page that, you know, references this idea and links to some of your common popular content, your blog, that kind of thing. Or if a couple of pages really do get tons of visits, you can actually set up a redirect to the most relevant resource on your website.
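The filtering step in that workflow can be sketched in a few lines of Python. The analytics rows, the referrer regex, and the visit threshold below are illustrative assumptions; a real version would run against your GA4 or Web Analytics export, with statuses supplied by an HTTP check such as the Apps Script described above.

```python
# Sketch of the hallucinated-URL workflow on made-up analytics rows:
# keep AI-referred URLs that 404 and still receive repeated visits.
import re

# Hypothetical referrer pattern; extend with whichever assistants you see.
AI_REFERRER = re.compile(r"(chatgpt|openai|copilot|perplexity|gemini)", re.I)

def hallucinated_urls(rows, min_visits=10):
    """Return AI-referred URLs that return 404 and get repeated visits."""
    return [
        r["url"] for r in rows
        if AI_REFERRER.search(r["referrer"])
        and r["status"] == 404
        and r["visits"] >= min_visits
    ]

rows = [
    {"url": "/keywords", "referrer": "chatgpt.com", "status": 404, "visits": 120},
    {"url": "/blog/seo", "referrer": "chatgpt.com", "status": 200, "visits": 300},
    {"url": "/old-page", "referrer": "google.com", "status": 404, "visits": 50},
    {"url": "/pricing2", "referrer": "perplexity.ai", "status": 404, "visits": 3},
]
redirect_candidates = hallucinated_urls(rows)  # ["/keywords"]
```

Anything this filter surfaces is a candidate for a redirect or a helpful custom 404, exactly as the talk suggests.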
A great way of capturing some of the interest you're already getting from large language models. Uh you can also optimize novel training data. So this goes back to the very first point we talked about, improving your visibility within training data. Uh LLMs train on some types of data that traditionally have had no impact on SEO. They were things that, you know, I as an SEO never considered, never thought about, but they are now actually relevant and can improve uh your visibility as a brand within LLMs. These are things like GitHub. Uh, GitHub is almost guaranteed to be trained upon by these large language models.
Historically, we never cared about that from a search perspective, but now making sure that your GitHub is accurate and consistent with your brand and maybe references some of your products where it makes sense, that is actually a useful thing to do. Wikipedia will always get trained on if you have a Wikipedia page or your brands do or your partners, making sure that you're referenced on those in an accurate, useful way, very beneficial. even research papers and patents that you've published or books that your company has published. These are things that do not normally impact SEO but do impact your visibility with LLMs.
It can further their understanding of your brand because they are trained on and learn from this data. Viewing these data sources as something to be improved is actually a beneficial and worthwhile thing to do. Now, this is probably uh my favorite LLM tactic. This is something I think about a lot for Ahrefs. This is something I do a lot. Uh, plugging entity gaps. So, LLMs generally try and mention your brand based on their understanding of its entity relationships. Uh, in order to understand what a brand like Ahrefs is about, when it should be mentioned, what topics it's authoritative on, it goes back to what we talked about earlier.
It looks at how you are mentioned in other places on the internet and in its training data to get an understanding of that. So the words used near your brand actually influence the LLM's understanding of your brand. You may have seen these referred to as co-mentions. So a great example of this: uh the brand Patagonia, the outdoor lifestyle brand. Uh Patagonia is very closely associated with the product Nano Puff hoodie because lots of places on the internet mention these two entities together in the same place. And that product, the Nano Puff hoodie, is also very closely associated with entities like wild camping, backpacking, and lightweight, because these get mentioned in very similar contexts throughout the internet.
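The co-mention idea can be sketched as a simple count of entity pairs that share a sentence. The entity list and corpus below are invented for illustration; real knowledge graphs are built at web scale with proper named-entity recognition, but the underlying signal is the same pair count.

```python
# Sketch of co-mention counting: how often two entities appear in the
# same sentence across a corpus. Entities and sentences are made up.
from collections import Counter
from itertools import combinations

ENTITIES = ["patagonia", "nano puff", "wild camping", "backpacking"]

def co_mentions(sentences):
    """Count sorted entity pairs that co-occur within a sentence."""
    counts = Counter()
    for s in sentences:
        s = s.lower()
        present = [e for e in ENTITIES if e in s]
        for pair in combinations(sorted(present), 2):
            counts[pair] += 1
    return counts

corpus = [
    "The Patagonia Nano Puff is my go-to layer.",
    "For wild camping I always pack the Nano Puff.",
    "Patagonia makes great gear for backpacking and wild camping.",
]
pairs = co_mentions(corpus)
```

High pair counts are how "Patagonia" gets tied to "Nano Puff", and "Nano Puff" to "wild camping", in the model's picture of the brand.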
This is how you begin to build this kind of knowledge graph of understanding of what a brand and its products are actually about. Now, it's very possible and I'd say even likely to have a disconnect between the topics you would like to be visible for and the topics you are actually visible for, your LLM's actual understanding of your brand. Um, and a great example, a way you can very quickly see this in action is to look at how your competitors are mentioned and the context they are mentioned in. This is something I do a lot for our content strategy.
Uh our tool Brand Radar actually lets you look at uh conversations and outputs that reference your competitors but not you. And you can look and decide if this is a context you would like to be associated with. So a really cool example of this: I was looking at the brand Fjällräven. Uh again, one of my favorite outdoor brands. They make very expensive clothes, but they are pretty good quality. They have really good visibility in LLM outputs for school backpacks as a topic. They are associated with that. People tend to go to them to buy backpacks for their kids.
But Patagonia, one of its biggest competitors, is completely absent from that conversation. If I worked at Patagonia, and if I decided this was a topic I would like our brand to be associated with, we could plug these entity gaps by creating on-site and off-site content about school backpacks. Maybe we create a landing page about different backpacks Patagonia has that are school appropriate. Maybe we do some kind of co-marketing campaign, or reach out to our affiliate partners and get them to create promotions about school backpacks, back to school, that kind of thing. Then that is a very good way to almost guarantee you would appear in these conversations for that topic.
Another very straightforward one, uh avoid too much JavaScript if you can. So, lots of websites rely on JavaScript rendering for the prettier, fancier parts of their website. And although Google and other search engines are generally pretty good at uh dealing with this, AI crawlers are not as good. Uh in fact, many AI crawlers do not render JavaScript at all. So if you have very important parts of your website that are entirely JavaScript, there is a good chance that they will actually be effectively invisible to many large language models and will not get cited or referenced at all in AI conversations.
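A rough way to approximate what a non-rendering AI crawler sees is to strip the script blocks out of your raw HTML and check whether key content survives. The Python heuristic below is a sketch, not a real crawler: it only demonstrates that content injected purely by JavaScript never appears in the static HTML such crawlers fetch.

```python
# Heuristic check: does a phrase exist in the HTML without executing
# any JavaScript? Content injected by scripts will fail this check.
import re

def visible_without_js(html, phrase):
    """True if the phrase exists in the HTML outside <script> tags."""
    static_html = re.sub(r"<script\b.*?</script>", "", html, flags=re.S | re.I)
    return phrase in static_html

html = """
<h1>Pricing</h1>
<div id="plans"></div>
<script>
  document.getElementById("plans").innerHTML = "Pro plan: $99/month";
</script>
"""
in_static = visible_without_js(html, "Pricing")    # True: in the markup
js_only = visible_without_js(html, "Pro plan")     # False: injected by JS
```

If something important fails a check like this, server-side rendering or static fallbacks are the usual fix.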
Um there's a slight caveat to this. I do expect this to change at some point. Uh AI crawlers, AI bots have generally lagged a bit behind uh SEO bots and crawlers, but they are catching up extremely quickly. Uh this year alone we've seen a massive ramping up in the amount of AI bot activity on the internet. Uh back in May almost a quarter of all bot requests on the entire internet came from AI crawlers trying to gobble up the internet. So I suspect their crawlers will get more sophisticated and will be able to handle JavaScript eventually.
But for now, avoid putting important things solely in JavaScript, because they will not be seen. Now, maybe the coolest and trendiest part of this: analyzing fan-out queries. Fan-out queries are so hot right now. We did a bit of research recently: for a given prompt, we wanted to see where the articles cited in the response actually ranked within Google for that same term. What we found was that only 12% of links cited by ChatGPT, Gemini, and Copilot also ranked in Google's top 10 results for that prompt.
What gives? Why does that happen? It's because of fan-out queries. If you ask a very long, very complicated question of ChatGPT and it needs to do a RAG search, going out on the internet to find information to answer that question, it does not generally search for the question you asked it. Instead, it uses your question as fodder to generate what are called fan-out queries, which are often simpler, more short-tail related subtopics that it then runs searches for through the Google index and the Bing index, bringing information back. So if we can understand what fan-out queries might look like for the topics we care about, we could theoretically create content that ranks for those terms and dramatically improve our visibility for those topics.
Very interestingly, a few weeks ago there was a short window of time where you could actually see the fan-out queries generated by ChatGPT search in the web browser: they were shown in the JSON response, which you could see in the developer console. So if somebody asked "what are the best cheap sunglasses of 2025," ChatGPT was not searching for that question. It was actually searching for "men affordable sunglasses review 2025" and "best cheap men's sunglasses 2025." You can see that these are more product-focused, more specific, more bottom-of-the-funnel queries.
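For illustration, the relevant part of that developer-console payload looked something like the fragment below. The two queries are the real examples Law mentions, but the field names and structure here are a hypothetical reconstruction, not the actual payload, which varied and has since changed:

```json
{
  "original_prompt": "what are the best cheap sunglasses of 2025",
  "search_queries": [
    "men affordable sunglasses review 2025",
    "best cheap men's sunglasses 2025"
  ]
}
```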
Actually, some LLMs like Perplexity still show you their query fan-out process, and this is something we're working to add into our own tools. If you can work to understand these common formats, then you can create content structured in these ways and improve your visibility. One very important caveat with query fan-out at this point in time: nobody has clickstream data for LLMs. Nobody can see what people are actually searching. So every tool provider, every vendor, all of us are doing our best to estimate and synthesize what people might be searching for. It's important to bear in mind that these are not actual queries that people are searching for.
Anyone who claims to have that data does not have it at this moment in time. Lastly, avoiding spam. It's very tempting right now to just create a bunch of 20,000-word markdown documents on your website for basically every topic imaginable and hope they get crawled, indexed, and referenced by ChatGPT. And I am seeing some companies do this. It ranges from very long pages with loads of markdown text, no images, fragmented writing, and endless bullet points, to just dumping a bunch of AI outputs, and in some cases even trying to hide text in a very 2010-era-SEO kind of way.
I would caution against this for a few reasons. For one, Google is much less susceptible to this type of gaming. Even if ChatGPT et al. are more susceptible to it, Google is not; it has many years of practice at filtering out this kind of bad content, and Google is still by far the most dominant source of traffic for most websites. I also think other AI systems will have to become more sophisticated. They will get better at filtering this kind of thing out, because otherwise they will be ruined by SEOs like you and me gaming the system.
And even if you do manage to get these pages cited, as many companies have done, that is not the same as earning new business, or earning goodwill, trust, or affinity from the people who will eventually become your customers. What happens if somebody clicks that citation and finds your awfully formatted, AI-generated, imageless page? Nothing good, right? We should not forget the end goal of all this, which is earning customers through this visibility. So, a very quick recap: if you go away from this and want to do a few things to improve your LLM visibility, try this list.
Build mentions on your top cited domains, on user-generated content websites, and on review sites. Have a look at your LLM traffic, the traffic you receive from ChatGPT and other AI products, compare it with the traffic you receive from organic search, and see how they differ. Which pages are more important for LLMs? Pay more attention to those pages. Experiment with optimizing some of those really important pages: using BLUF (bottom line up front), updating them more regularly, including more entities, that kind of thing. Try to create more of the content that gets cited most often; research content and video content are particularly great for this.
Have a look to see if there are any regularly hallucinated URLs that are receiving lots of clicks, and redirect them to a more useful place. Look for any entity gaps in AI outputs, topics that you think you should be associated with but are not, and create content both off-site and on-site to plug those gaps. Don't rely too much on JavaScript for essential content. And have a go at approximating fan-out queries for topics you really care about: try to work out what kind of fan-out queries might be generated related to your products, and create content for those.
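The hallucinated-URL check in that list can be approximated by scanning your server logs for 404s whose referrer is an LLM product. A rough sketch; the log lines and simplified three-field format are made up for illustration, so adapt the parsing and the referrer list to your actual server logs:

```python
# Surface hallucinated URLs: find 404s whose referrer is an LLM product,
# then redirect the most frequent ones to a useful page.
from collections import Counter

# Simplified fake log lines: path, status code, referrer.
log_lines = [
    "/pricing-2023 404 https://chatgpt.com/",
    "/blog/llm-guide 404 https://www.perplexity.ai/",
    "/pricing-2023 404 https://chatgpt.com/",
    "/about 200 https://www.google.com/",
]

llm_referrers = ("chatgpt.com", "perplexity.ai", "gemini.google.com")

hallucinated = Counter()
for line in log_lines:
    path, status, referrer = line.split()
    if status == "404" and any(r in referrer for r in llm_referrers):
        hallucinated[path] += 1

# Most-hit hallucinated paths are your redirect candidates.
print(hallucinated.most_common())  # [('/pricing-2023', 2), ('/blog/llm-guide', 1)]
```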
And obviously, avoid spam and making the web a terrible place. Very quickly, one last thought to help you process and make sense of all of this. It's very easy to get overwhelmed by this stuff; I've felt it myself. There are so many things we should be doing, and people telling us we're losing out if we're not doing them already. A good way to filter through all of that is to remember that AI optimization tactics generally fall into one of two categories. Category A is things that are possibly useful in AI search and guaranteed to be useful in traditional search.
That means creating content on your website about relevant topics, getting mentioned on relevant third-party websites, and engaging in Reddit conversations about your brand: already very useful, AI aside, and probably very useful for AI too. The other category, Category B, is things that have possible utility in AI search but no utility in traditional search: things like publishing an llms.txt file or chunking your content in very technical ways. These are things that might help with AI search (but probably don't) and have absolutely no benefit to traditional search.
These are riskier things to be doing; there is not always an upside to them. Generally, if you have a choice between the two, pick things from Category A. You will not go wrong, and there is absolutely no downside to doing that. And that is it. Thank you for listening to me rant at you about LLMs for almost 40 minutes. Awesome. How's that, everyone? Yeah, in fact, I felt it was such an awesome webinar because plenty of people have asked their questions here. We've got a lineup of people asking their questions.
Let's go ahead and start. So now we're at the Q&A section. We have quite a number of people who have raised their hands, so what I'll do is first help the people who have asked their questions in the Q&A, then take a few people who have raised their hands, and then alternate, so we can clear the queue. Let's go with some of the initial questions. The first one, I think, is actually hard for me to find the answer to.
Someone asked, "What do you mean when you say entity?" You see that question here? Yeah, that's a fantastic question. A simple way of putting it: an entity is a thing, a discrete, self-contained thing. That could be a company like Patagonia. That could be a product like the Nano Puff hoodie I talked about. Anything on the internet that is likely to be described as a noun, anything that has a name attached to it, is one way of thinking about an entity. Another way is thinking about knowledge graphs. This is not anything I know too much about, but knowledge graphs are a very important part of how Google works and how LLMs work.
A knowledge graph is basically a spiderweb trying to connect things together, where every line between things represents a relationship; it's a way of grouping things that have relationships. So cat and dog are related entities. There are some knowledge graph tools out there that are free and easy to use: you can plug in your company or your brands and see the other related entities associated with them. That's a really good way of getting a more visceral sense of what an entity is, and of what these tools actually mean when we talk about entities.
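The spiderweb idea can be sketched as a set of (entity, relation, entity) triples. All of the entity and relation names below are illustrative, chosen to echo the brands discussed earlier, not pulled from any real knowledge graph:

```python
# A toy knowledge graph: entities are nodes, relations are labeled edges,
# and the whole graph is just a set of (subject, relation, object) triples.
triples = {
    ("Patagonia", "is_a", "outdoor clothing brand"),
    ("Nano Puff", "made_by", "Patagonia"),
    ("Fjällräven", "competes_with", "Patagonia"),
    ("Kånken", "made_by", "Fjällräven"),
    ("Kånken", "is_a", "school backpack"),
}

def related(entity):
    """Entities one hop away from `entity`, in either direction."""
    out = set()
    for subject, _, obj in triples:
        if subject == entity:
            out.add(obj)
        elif obj == entity:
            out.add(subject)
    return out

print(sorted(related("Patagonia")))  # ['Fjällräven', 'Nano Puff', 'outdoor clothing brand']
```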
But basically, anything that has a name attached to it is probably a good thing to think about as an entity. Okay, so I hope that helps to answer your question. It was an anonymous question, so I hope that was helpful. Actually, not an easy question to answer, because people just mention "entity" and don't really explain what one is, so that was honestly pretty awesome. Okay, another person asked, "Do LLMs use web search to find results internally?" Basically: do LLMs use the live web to generate results and responses?
Yeah, this is quite an interesting topic, and again, I'm not a data scientist. From what I understand, there are lots of different types of RAG, lots of different ways these models can do retrieval augmented generation. I think most of them have access to a cached version of whatever search index they use. So it sounds like they are not going out and looking at, say, the live Google results right now; they have a cached, historical version of the Google index, which they can then access and process themselves in a more efficient way.
That sometimes accounts for slight differences between what is currently appearing in Google and what they have access to in their search index; the two can differ. I know there are different search tools these LLMs use, all with slightly different methodologies. It gets quite complicated and quite opaque, and it's not my area of expertise, but generally they do all use some form of cached search index to find pages for RAG. Awesome. So, heading on to the next question: quite a number of people asked this one.
I think you covered it very briefly: what is llms.txt, and should I be worried? I love this question. There's a lot of hype around llms.txt because it is something very simple, something we can control. Optimizing for LLMs is scary and complicated, and we all want something simple; we all want to be told we can just add one text file to our site and it will improve our authority. "It does nothing" is the most honest and accurate way I can put it.
llms.txt is a proposed standard. It's basically a way of listing all the crucial resources on your website that you would like an LLM to access and know about, in a format that is friendly and accessible to LLM crawlers. It's a bit of a parallel to robots.txt, which we know about: a text file on your website that provides directions for search crawlers. But as I say, this is a proposed standard, an idea somebody came up with. Lots of people think it's a good idea.
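For reference, a file following the proposed llms.txt format is just markdown: an H1 with the site name, a blockquote summary, and sections of annotated links. Everything below is an invented example; check the current proposal before producing one:

```markdown
# Example Brand

> One-paragraph summary of what the company does and who it serves.

## Key resources

- [Product overview](https://example.com/product): what the product is and who it's for
- [Pricing](https://example.com/pricing): current plans and tiers
- [Docs](https://example.com/docs): full product documentation
```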
But as of the last time I looked into this, not a single major creator of any large language model has said they support it, that they prioritize accessing it, or that they use the information contained within it. And I've seen some research recently suggesting it very rarely gets crawled, and not many websites use it. The only thing it has going for it is that it's very easy to create; there are lots of free tools that will generate it for you. I personally don't think it's worth bothering with at this point in time, but maybe if OpenAI suddenly says, "Yes, make this.
We really care about it," that would be the sign that you should use it. Yeah, this was a very common question, asked multiple times in the Q&A, so I hope that was a helpful answer. We're now going to go to some raised hands; some of these people have had their hands up from the very beginning. Shahin, I'm going to allow you to talk, if you'd like to ask your question. Sure. First, thank you again for doing this.
I know there's not a real benefit for you guys, but a lot of benefit for us. With these large language models, there always seems to be a mystery around Alexa and Alexa search, same as with, say, Google search. First: is Alexa considered an LLM, or is it just a simple Q&A bot? And why is there not enough talk about how to get into an Alexa search? Thank you. It's a great question, and I don't think I have anything really helpful to say on it.
I've not even considered that point. Are we talking Alexa in terms of the Amazon voice search, the household devices people use around the world? "Hey Alexa, what's the top, I don't know, water filter company in San Jose, California?" Yeah, that's a great question. I have no idea what behind-the-scenes LLM data source process Alexa uses. Not sure about that at all. Okay, sounds like you turned somebody's Alexa on. That's pretty aggressive. Thank you. Yeah, sorry about that. But thanks, it's a good question to ask.
Actually, while you asked that question, I looked it up: there's an Alexa Plus that got released in February, and they say a powerful LLM infrastructure is now powering Alexa, so it's smarter and more capable. But yeah, I think we don't really have any idea, or the data, to look into how good it is, how many more people are using it, or how effectively it's suggesting brands. Sorry. No worries. Thank you.
Yeah. Okay, next person: I guess it would be Hussein. You're allowed to talk. Hello. Yes, you can ask your question. Okay, my question is: what's the single biggest change we can expect in SEO with this new technology? That's a great question. Long-term, I'd say a bit of a devaluing of on-site content. I briefly alluded to this in the presentation: in the past you could go a long way by just publishing a bunch of pages on your website, and you could rank for lots of terms and be highly visible for them just on the strength of having a fairly well-regarded domain and lots of relevant pages.
I think that's not the case today. The way these LLMs work is that they hedge, basically, for any question they answer: they try to find multiple sources of information. Something we see a lot is that for queries about Ahrefs, Ahrefs is not always one of the sources referenced, because the model would prefer to go out, get sources from other websites, and pull those together, since that is a safer way of answering the question. So I think off-site SEO, off-site GEO, whatever you want to call it, is going to be so much more important than it used to be.
Not only links and relevant pages, but making sure product comparison sites and forums are active, with happy conversations happening about you. I think that's going to be the most important thing you can do. In future, where can we focus: SEO, or LLM and other tactics? Oh, sorry, I didn't hear the first part of that question. Could you say it again? I'm just asking where I can focus in future: LLM tactics or SEO tactics, like the Google algorithms. Are you asking if you should focus on either LLM or SEO?
Yes, visibility. I'm just asking if I should focus on Google algorithms or LLM tactics. I think, for any company that is not a big established brand with millions of page views, search is still the most important thing to look at, for two reasons. One, it is still by far the biggest traffic source, the biggest source of people; LLM traffic is growing, but it is still relatively small compared to other traffic sources. And two, a lot of the things you do to improve search optimization, creating content, building links, keeping your website technically healthy and up to date, all have a massive crossover benefit for LLMs.
So you can do 80% of optimizing for LLMs by just doing good SEO at this point in time. I would definitely say yes, focus on SEO as a starting point, and worry about LLMs as you get a bit of extra headspace or want to be a bit experimental and try something new. Thank you. Thank you so much. No, a pleasure. All right, let's go ahead and answer some of these written Q&As. I see a number of questions related to RAG. For example, Michael's question: would RAG include implementing callouts for AI bots in your robots.txt file?
It's a good question. I'm going to defer back to what I said earlier: I'm not a data scientist, so please take all of these responses about technical things with a big pinch of salt. From what I understand, AI crawlers use robots.txt in much the same way that search engine crawlers do. Theoretically, they should respect the directives in there as to whether or not they access parts of your website, and whether or not they scrape or retrieve it for RAG.
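Those directives are ordinary robots.txt rules targeted at the AI crawlers' published user-agent tokens. A sketch of what a blanket block might look like; the tokens below are ones the vendors have documented, but compliance is voluntary and names change, so verify them before relying on this:

```text
# robots.txt — block common AI crawlers from the whole site.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```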
So I think you could stop RAG happening through robots.txt, as long as they respect it. I don't think you can do anything to encourage AI bots to access your website for RAG through robots.txt, but you can certainly block them and prevent them from doing it. I saw the Cloudflare CEO recently tweeted about Perplexity, how they seem not to be respecting robots.txt requests, and there was a bit of debate about whether they even should, because they are agents acting on users' behalf rather than just being crawlers.
It's a really interesting topic, but theoretically, you can stop RAG happening through robots.txt. Okay, I hope that was a helpful answer, because I do see multiple people asking about this with regards to RAG. There are also people asking questions about on-page optimization: structured data, HTML structure, meta tags, and things like that. Essentially: are they different from, or the same as, SEO? Do we have to do anything differently for LLM visibility? Yeah, this is a slightly contentious topic.
I don't think we do. A lot of people talk about structured data being really important for LLMs, and some people disagree with that. I think what generally happens when an LLM retrieves your page, whether for RAG or as part of the training data process, is that it pulls all of the page text into one place and treats it as a text source. It does not prioritize on-page elements the way SEO does; it is all pulled together and analyzed as one long block of text.
I generally think there is nothing unique you have to do to improve on-page optimization for LLMs, because a lot of what we do to improve structure and how websites present text, making it easier to access for people and search engines, does not apply to LLMs: they basically just pull it all in, chunk it, vectorize it, and use it as part of another process. I will say we just published an article that looks at this in far more detail than I could go into, about chunking and whether chunking is a good idea, because this is kind of the epitome of this conversation.
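The chunk-and-vectorize step Law describes can be sketched very naively: split a page's text into fixed-size, overlapping windows before embedding each one. Real pipelines are token-based and structure-aware, so treat this as a sketch of the idea, not how any particular LLM actually does it:

```python
# Naive chunking: fixed-size windows of words with some overlap, so that
# sentences spanning a boundary still appear whole in at least one chunk.
def chunk_words(text: str, size: int = 5, overlap: int = 2):
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = words[start:start + size]
        if chunk:
            chunks.append(" ".join(chunk))
        if start + size >= len(words):
            break
    return chunks

text = "one two three four five six seven eight nine"
print(chunk_words(text))
# ['one two three four five', 'four five six seven eight', 'seven eight nine']
```

Each chunk would then be embedded into a vector and indexed; retrieval matches a query vector against chunk vectors rather than whole pages.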
Lots of people think you should chunk your text to make it more accessible. We talked to our machine learning team about what that would actually mean, and it doesn't make a whole lot of sense for website owners, content writers, and SEOs to actually attempt that process. So we have a great article about that; I'd recommend checking it out. Otherwise, I don't think there's much more you can do. Give me one sec here. People are pouring out their questions; it's quite amazing. These are so technical. Give me an easy one, guys.
Come on, a nice easy question. Sorry, they continue to be difficult questions here. People have been asking about videos: do videos affect LLM visibility, and if so, are there particular types or formats of videos that would be more beneficial than others? I don't know about formats. Certainly, what we have seen is that YouTube is a very commonly cited website. Think about the first principles: why does an LLM recommend a website to be clicked on?
It's generally because it thinks the site contains something the LLM cannot deliver itself. Videos are a great example of that: a different way of presenting information. An hour-long webinar, a 10-minute video clip, or a TikTok short are a different experience from what an LLM can provide, so LLMs are incentivized to link out to them. We also know that they are trained on this content, and because YouTube has public transcripts available, I'm pretty sure YouTube and YouTube transcripts are an important data source used within LLMs as well.
So I think video certainly is a huge benefit. Not in the sense that particular formats will perform well within LLMs, but in the sense that being visible and referenced in many places on YouTube will give a big boost to how LLMs understand you and how likely they are to recommend you. Maybe I can tease this: as part of Brand Radar, we are in the process of adding a YouTube index. You will actually be able to search through a massive store of YouTube data to see how often your brand and your entities are mentioned, how often your competitors are mentioned, and track your ability to create better visibility on YouTube, and better visibility in LLMs as a result.
So if you're equipped to make video content, it's great for tons of reasons, and it is also very good for LLM visibility. Okay, so don't stop with the videos. We certainly have not stopped with ours; in fact, we have more and more video content being pushed out here at Ahrefs, mainly to serve you, the customer, or the marketer trying to figure out how to improve your brand's visibility and success. There is a person asking for more about what you mean by fan-out queries.
What is it actually, and how do we go about it? I know you touched on it in one of the slides; maybe you can give a quick overview. Yeah, sure. So if you ask ChatGPT a question, let's say: "What is the best wild camping location for me to go hiking next month? I want to stay somewhere that has woodland and a lake nearby." That's the type of really long, meandering, quite often completely unique question that gets asked of LLMs, because we use them in a very conversational way.
Now, at some point the LLM will need to do RAG for that query. If it doesn't have enough to answer the question within its data set, or it wants something fresh and relevant, it will go out to its search index, find content that is relevant, bring it back, and include it in the response. But how does it do that? That is the crux of this. My first thought was: does it just Google search for that exact, really long query? It does not, as it turns out. Instead, it takes your query, your really long conversational one.
It thinks about it and says, "What are some simpler related topics that will allow me to answer this question, which I can run searches for, gather information from, recombine, and turn into the final answer?" The topics it generates are the fan-out queries; you are fanning out from the original query. So that might be something like "best wild camping near Haddenham, UK 2025," which is where I live. That's a different question from what was asked, but the LLM believes the resources it will find by searching that fan-out will allow it to answer the original query in a much better way.
That, in a nutshell, is query fan-out. I think understanding how that process works is how we understand which keywords we should be optimizing for, because that is how these tools interface with search indexes like Google. It's a very interesting topic; I find it fascinating, and I'm trying to learn more about it myself at the moment. Okay, and we will quickly answer some of these raised hands. I will focus on people who have not been able to speak before. Give me one second here.
I will allow Schwet Ry; you can ask your question. Yeah. So, actually, I'm working on the AEO and GEO optimization of our content. How is that different from LLM optimization? What is the best process to optimize our content for LLMs? Is there any specific point to mention in the content, like any real-time data or something else? What was the first thing? There's an acronym; it sounded like you said you're doing some kind of optimization for your content, but I didn't quite hear what that was.
I'm working on the AEO and GEO content optimization. AEO, GEO. Ah, got you. Yes, they are kind of the same thing. Yeah. So, is there any specific process to optimize for LLMs? Yeah, it's a good question. There's always a fine line here; it's very easy to over-optimize and create slightly spammy content. But some things do seem to work, and I touched on some of these. FAQ sections seem to work very well, because it's likely that a lot of questions asked of LLMs, and a lot of fan-out queries, will be answering questions.
So if you can preempt common questions around your topic and include very simple question-answer couplets, that traditionally worked very well for SEO, and I think it still works very well for LLM optimization, because you are preempting the question and giving a very succinct, declarative, easy-to-access answer. That is a great thing to do, though obviously you can overdo it and include far too many of them. Otherwise, write very simply: this BLUF idea. Every time you have a new header or a new section, try to articulate your key point in the simplest, shortest, most direct way possible, right at the start of the paragraph. I think that is absolutely worth trying.
Yeah, good question. Beyond that, we're actually in the process of rewriting our guide to on-page SEO to make it more relevant for the AI era. I don't think many things have changed. As much as we would like to have small technical things we can optimize and see a big boost in visibility, I don't think it works that way, because of how LLMs interpret web pages. Thanks. I hope that helps answer your question. Fundamentally, we must also remember that there's one step where LLMs recommend and mention your brand, but ideally, after that, a human will interact with your brand and your website.
So we don't want to go too far; we have to make sure we cater to the humans who interact with your business. I hope that was helpful. We'll get one more person who has raised their hand: maybe Prashant, you're up. You may unmute yourself and ask your question. Yeah, can you hear me? Yeah, we can hear you. Okay, great, thank you for the opportunity. It's been a while since I raised my hand, and it started paining. Okay. Ryan, my question to you is about AI-generated content.
So, it's been a while now that there are a lot of tools through which people, and even brands, are generating content, right? So I wanted to understand: how do AI Overviews, on the back end, process this AI-generated content? Do they give it any citations? Do they reference AI-generated content while answering users' questions or queries? That is an awesome question. Obviously, I can't speak to what mechanically goes on inside AI Overviews, but I do have some theories. I briefly mentioned this, but it's something Bernard Huang at Clearscope showed me.
A lot of AI models do seem to have a preference for AI-generated text in their responses when they seek out content, for the simple reason that a lot of AI-generated text tends to be very entity-rich. They look for very clear sentences with lots of entities in them and very clear relationships between those entities, because fundamentally that is how these tools work: that is how they understand questions and answers and work out which answers should be paired with which questions. So I genuinely have a hunch that AI-generated content may be preferred by AI Overviews and other large language models, because it's easier for them to parse.
It's more entity-rich, and it's in a format they would expect, because it was created by these models; they're almost recognizing their own output. The caveat is that although lots of websites are publishing lots of very good AI content that is ranking very well, no problems there, there are also lots of websites publishing lots of AI-generated content that is not ranking. I have a hunch there is some kind of additional processing happening on Google's end: maybe if a website publishes a thousand pages in a day or a week, it triggers some additional algorithmic filter.
Maybe they do some AI detection, and if they see very high probabilities that all those pages were AI-generated, they choose not to rank them. So I think if you're able to get AI-generated content into a top spot, you know, top 10, which is where most of these AI Overviews pull their source information from, you've probably got a very good chance of being cited by them. Actually, I think Steve Toth made something called Snippet Brain. Originally it was for featured snippets: a way of reformatting your text to be more desirable and more likely to be quoted in featured snippets.
I suspect something similar will work quite well with AI Overviews too. Great, thank you, Ryan; it does answer my question. Yeah. Okay, up next we have one more hand here. Azim, you can go ahead and speak. Azim? Yes. How are you? We're doing good. Yes, I'm good. I have a question for you. Last time, I optimized my client's website so that when someone searches ChatGPT for "top ABC company," our website is ranked number one. But when my client searched on ChatGPT at the same time, it did not appear.
I also checked from different accounts, because sometimes the results differ by account. When I search from my side, the company shows, but when my client searches it does not appear in ChatGPT. What is this? Yeah, you've probably looked at this yourself, but there are a few things that could be going on. One is this idea of temperature. If you ask the same question to the same LLM, you will quite often get different responses back. It's very possible that you could ask the same question and get a different set of brands mentioned, even though it's the exact same question, because there is an element of randomness in LLM outputs.
It's what's referred to as temperature, and you can change the temperature on the back end to make things more creative and different with each response, or more similar with each response. So even the exact same query asked a hundred times is not going to get one answer; you're probably going to get 50 or 70 or even a hundred minutely different answers. Maybe they'll include your brand, maybe they won't. You've also got things like personalization and memory. You've probably already accounted for this, but each ChatGPT instance is personalized to the user.
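As an aside, the temperature effect Ryan describes can be sketched with a small self-contained example (this is an illustration of the sampling math, not any vendor's actual API): dividing the model's raw next-token scores by a temperature before applying softmax shows why a low temperature gives near-repeatable answers while a high one lets other brands surface.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-token scores into sampling probabilities.
    Lower temperature sharpens the distribution (more repeatable
    answers); higher temperature flattens it (more varied answers)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three brands the model could mention next.
logits = [3.0, 2.0, 1.0]

cold = softmax_with_temperature(logits, 0.2)  # near-deterministic sampling
hot = softmax_with_temperature(logits, 2.0)   # noticeably flatter distribution

print(cold)  # the top brand takes almost all of the probability mass
print(hot)   # the other brands now have a realistic chance of appearing
```

This is why the exact same prompt can mention your brand in one run and omit it in the next: the answer is sampled from a distribution, not looked up.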
It can memorize things you've done in the past. It can memorize companies you've talked about and be more likely to reference those in future conversations you have, and it's not always clear when it's doing that. So beyond that, I don't know why that would happen, but there are definitely some mechanisms that would make it happen. And it can be quite frustrating when you're trying to prove that you've optimized and made an improvement to a company's visibility. Yes, maybe it will take some time, because we did a lot of work on many things in this scenario.
So, thank you very much for answering my question. Oh, pleasure. Okay, we still have a ton of questions, and it's quite amazing that you guys have so many questions regarding LLM visibility. It definitely is a very hot topic. We will not be able to answer everyone's questions here, but I have grouped some questions together in hopes that we can answer as many as we can; we'll answer maybe two or three more max. I hope that is okay with you guys. There were a lot of questions asking, I'm a B2B, I'm a charity, I'm a small brand, I'm a local player.
What can we realistically do to show up in LLMs when we're trying to compete against the giants, the big brands? I think the advice is probably the same as I'd give for SEO: you have to be very specific and pick one small area in which to compete and focus all of your limited resources there. So I think even a very small, very unknown brand could still develop a lot of topical authority and get referenced a lot for one topic if they write loads about it,
if every reference to them on the internet refers back to that topic, that subtopic, that niche idea. So I would pick one thing you want to be known for. Great example of this: I'm a trustee at a charity in the UK. Very tiny, we've got no resources, and we want to develop SEO and rank in LLMs and all that kind of thing. Of all the things we could write about, I've picked one topic, which is DBS checking for charities. It's the most niche, obscure topic you could imagine, but all of our resources are going to go into creating content on that topic,
and into answering questions on Reddit and other forums about that topic, with a view to hopefully appearing for that very small but very relevant subtopic. That's very much what I would do in that situation: pick one area you care a lot about that is very relevant to your product, create content on it, engage in conversations about it in relevant places on the internet, and focus all your energy in one place. Awesome. So someone is asking about social networks, reviews, posting comments; I think there's a general question here about off-page mentions.
Some people have asked about straight-up PR, with OpenAI, the Wall Street Journal, different news websites. I think the question is, how much of an impact does that have compared to normal SEO? And for people who are venturing into PR for the first time, how can they go about it? Because presumably it's going to get a lot more competitive as people start to realize the power of off-page. What are your general thoughts on that, if you can share?
I think it's very easy to spend a lot of money on PR and get very few results back from it. So it's a hard thing to do well, especially for a small company. I think there are lots of free things you can do that would serve much of the same purpose and that you would not have to pay lots of money for. Reddit is a really great one. I know lots of people are trying to do Reddit marketing, and they want to talk about their products, and they're responding to every relevant question with a link to their product.
Terrible idea. That will not work; that will get you banned from a bunch of subreddits. A much better thing to do is just find communities, forums, spaces where your target customers talk about the things your business helps with, and engage in those conversations. You don't even have to talk about your product in the short run; view it almost as a kind of customer research exercise. Share helpful content, build a good reputation in that community, and the byproduct is that eventually, as you have more of these conversations, that will ladder its way up into better visibility in LLMs.
As you get more well-known. You could start your own subreddit, which is what Patrick on our team did a long time ago, way before LLMs. He created a space for tech SEO discussions that has knock-on benefits today, because that's a community of people who generally feel favorably towards him and towards Ahrefs. That's the way of approaching this, I think: not paying money and getting some links in places, but asking where can we go and engage with actual potential customers in the communities that they value, and as a result of doing that, benefit from better LLM visibility as well. Awesome. Just going through the list here to see if we've covered the gist of what people are asking for. Small businesses, charities, I hope this answered those. In that same line, while we're talking about PR: what kind of content, top of funnel, middle of funnel, bottom of funnel, would matter more in regards to these off-page mentions? That's someone's question. Yeah, good question.
A couple of ideas related to this. I think getting links and mentions in irrelevant places is not going to help much with LLMs. In the past you could get a backlink from some completely random news company with a high DR and it would actually have a very decent impact on your visibility. I don't think it works the same way anymore, because the way LLMs work is they're trying to understand entity relationships. They're looking at their understanding of the website and the context within which your brand is mentioned.
So if you build links and mentions on irrelevant websites that have no clear topical focus and are not related to the things your company does, I suspect that's not going to have much of an impact on the LLM's understanding of your brand. Much better, I think, to actually try and build links and mentions on smaller websites that are more relevant to your niche, to your industry, that kind of thing. In terms of the funnel stages, something I've been thinking a lot about for Ahrefs is that we need to do more bottom-of-funnel content, more direct product comparisons.
It's something we've historically shied away from because, you know, we trust people to make their own decisions. We do the best job we can, we educate and we help, and we let other people find their own way to us. But it does seem that versus pages, competitive comparison pages, and top-product listicles get cited and referenced an awful lot in AI assistants. So I think trying to get yourself included in top 10 product roundups on a bunch of different websites is probably a very useful thing to do at this point in time. Okay, I hope that was helpful for the people that asked questions about kinds of content.
Sorry, I think, Hussein, you're not muted. Okay, there are still quite a number of questions here, but we are out of time. I have a last question about how we should change our content strategy to make it more suitable for LLMs. Sorry, is this Hussein? Yes. I'm just asking how we should change our content strategy to make it more suitable for LLMs. Yeah, it's a very good question. I'm obviously thinking about that a lot because I run the content strategy for the Ahrefs blog. You can kind of see how we're thinking about this by looking at our blog.
We do a lot more original research, trying to become the source of information instead of covering traditional SEO topics that may have been covered a thousand times before. That's a big change we're making. LLMs like to go out and find original information to corroborate their claims, and we're getting a lot of citations and visibility from that research, as well as it being good for social media and newsletters and all that kind of thing. We are also trying to do more direct competitor content, like comparisons with similar tools, just to make sure we're included in these lists and associated with these other brands.
And we're creating content on topics that may not have much search volume now but that we think are very important to LLMs and may have more visibility in the future. So we're writing a lot more about LLM visibility, co-mentions, and citations, all things that don't have much volume now but that we think will be important. So we're taking speculative bets on the content that people will be asking questions about in the future as well. Thank you. Thank you so much for giving this wonderful opportunity. Our pleasure.
I think I may have to write some blog posts on the back of these questions; there are some really good questions in here. I'm trying to see which questions cover areas we've not answered just yet. A wonderful number of questions. I think we have covered the equivalent of 20 questions already, which is amazing. Okay, let's allow some people to ask their questions in person. Ranjita, I think you have had your hand raised for some time.
If you are here, you can ask your question, Ranjita. Okay, maybe we will let Shahin ask his question first. So Shahin, you can go ahead and ask a question. Thank you so much. You actually answered one of the two questions I have, about the value of backlinks right now, whether that's still viable, and the DR. And then, ten years ago, all over the internet there were these "give us some money and we'll post all your links" services that would submit your relevant information to these backlink directories and so forth. Is there still a reputable company that you can submit your information to, and they post for you or submit to these directories?
Yeah, I can't recommend one individually. These services do still exist. Please do, because that's what I'm looking for. Yeah, if I had one to recommend, I would; unfortunately I don't. Okay, thank you so much again and I'll say goodbye here. Thanks again, good luck. Thank you, Shahin. Ranjita, we will move on; maybe you can ask a question later when you're back, if we have time, because we are running over an hour here. There are some people that asked questions about... I think we will make this the last question.
For the rest of them, don't worry: if you've asked your questions, I will answer them after the webinar is over, offline with your team. So if we don't get to you today, I'm sorry, but there were a lot of really, really good questions today; it's just a matter of how many we're able to get to in today's webinar. There were questions about the whole idea that we should block our content from LLMs crawling it. Should we keep it gated? How would you recommend handling this problem of AI exploiting the content you create, versus having visibility for your brand?
I think it depends a lot on your business model. There are some companies where having an AI crawl their pages, take their information, and present it in a chat interface instead of on the website could actually destroy the business, right? We're talking media publications and some companies that create original data and research. If you're one of those companies, yes, I think you have to try a different approach to this. I've seen some cool experiments. Actually, Louise on our blog team wrote a post about this, about how some companies are evolving their SEO strategy in the wake of AI.
A company called GetLatka, by Nathan Latka, is a kind of SaaS data and research website. He made the decision to completely block all search crawlers from his website, because the information they were extracting is stuff he charges money for; if he lets them do that, then he has no business. Even HubSpot ran an experiment recently where they chose to include some gates on their content. So for some particularly important blog posts, you scroll down and an old-fashioned pop-up appears and says, you know, log in or create an account, give your email to read the rest of this.
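For sites that decide to block AI crawlers outright, as described above, the usual mechanism is robots.txt. A minimal sketch follows; the user-agent tokens shown are ones these companies have publicly documented (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Google's AI training, CCBot for Common Crawl), but crawler names change, so verify the current lists before relying on this.

```
# Block common AI training / answer-engine crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Regular search crawling stays allowed
User-agent: *
Allow: /
```

Note that robots.txt is advisory: it only works against crawlers that choose to honor it, which is why some publishers pair it with hard gates like the login walls mentioned above.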
I think we're going to see a resurgence of some slightly old-school content-gating tactics, because if you have a ton of value to offer and you don't want LLMs to scrape it, that's basically the only way around it. Otherwise, if it's not information that is critical to your business, I would be very reluctant to gate it and bar LLMs, because I think a lot of interest is going to come from people using these tools. What we're seeing now is the tip of the iceberg; I think more and more people will use AI in some form to access your website and your content.
And Ranjita, it looks like you are back, so I will ask you to unmute and see if that works, and you can ask a question. Are you able to... it looks like you're unmuted already. You may have a setting in your Zoom that requires you to connect your mic. Okay. I think, Ranjita, it would be best if you follow up with your question later on. There'll be a survey at the end of this webinar when you close it, and it would be helpful to ask your question there; we'll pick it up and help answer it if we don't get to it today.
Let's go ahead and end the webinar; it looks like most of us are heading out. Thank you so much, everyone, for your questions and your attendance here today. We're so happy that you joined us for today's webinar, and we hope you found it useful. Please help fill out the survey at the end of this webinar; it helps us figure out what you guys like and what we can improve for future webinars. And that's it.
And for those who have asked questions we have not gotten to, don't worry, we will help get to them after this webinar; we'll answer those questions offline. So don't worry, we've got you. And the really good questions will probably become blog posts, so thanks for filling out my content calendar. Yeah, it has been very useful in that regard as well. Okay, awesome. Take care; we will see you guys in the next one.
See you. Bye. Bye. Bye.