Privacy in the AI Age: What's Really Changing in 2026 (with Cloudflare's CPO)

Cloudflare | 00:41:35 | Mar 26, 2026
The chapter highlights recent Cloudflare blog posts on improving global upload performance with R2 local uploads and on building vertical micro frontends on Cloudflare's platform, including service bindings for multi-project routing, with a tease for a Molt Worker AI agent episode.

Cloudflare’s privacy chief maps 2026 AI risks and guardrails, stressing data minimization, governance, and human-in-the-loop responsibility.

Summary

Cloudflare’s Chief Privacy Officer, Emily Hancock, explains how privacy programs have evolved with AI over the past eight years and why data governance remains foundational. Hancock highlights that Cloudflare has always prioritized minimal personal data, clear retention policies, and transparent practices, now extended to AI-enabled tooling. The discussion covers how AI increases the need for responsible data use, privacy by design, and robust incident response. Hancock details government data requests, warrant canaries, and Cloudflare’s stance on due process and notice when possible. The episode also dives into digital sovereignty, data localization options, and certifications that ease international data transfers. Finally, the conversation touches on practical AI usage and risk mitigation, from internal development policies to customer-facing AI safeguards and the importance of a human-in-the-loop approach. The overall message: protect privacy by design, stay adaptable to regulatory climates, and keep a human responsible for AI outputs. Cloudflare’s trust hub and AI principles anchor these commitments for 2026 and beyond.

Key Takeaways

  • Cloudflare minimizes personal data collection on its network, uses strict retention limits, and maintains transparency about data handling.
  • The company enforces privacy-by-design in product teams and uses a product counseling function to embed protections in engineering.
  • Warrant canaries in Cloudflare’s transparency report list actions the company has never taken in response to government data requests; removing one signals it was compelled to act.
  • Data localization options (e.g., EU data localization and metadata boundaries) give customers control over where data is inspected and stored.
  • Internal AI use is restricted to approved tools; customer data is not used to train Cloudflare’s AI systems, backed by AI principles and a public trust hub.
  • Privacy officers now juggle GDPR-era duties with responsible AI governance, cybersecurity coordination, and vendor risk management.

Who Is This For?

Tech leaders, privacy officers, and developers at organizations deploying AI in regulated environments who want a practical view of how to balance AI innovation with privacy and sovereignty commitments.

Notable Quotes

"We don't collect a lot of personal data on our network... and we design our products to collect the minimum amount of data that we need."
Hancock explains Cloudflare’s data minimization principle and its impact on privacy posture.
"We train teams to review how AI is used, ensure customer data isn’t used to train models, and push back when governing bodies demand overbroad access."
Illustrates Cloudflare’s stance on AI data use and government data requests.
"A human in the loop is incredibly important... AI is like an intern that you still need to check for accuracy and bias."
Highlights the need for oversight when deploying AI solutions.
"Warrant canaries in our transparency report tell you what we won’t do, and if we must act, we remove that promise to signal a data request occurred."
Describes a concrete transparency mechanism for government data requests.

Questions This Video Answers

  • How does Cloudflare handle government data requests and transparency?
  • What is privacy by design, and how does Cloudflare implement it for AI tools?
  • What data localization options does Cloudflare offer for EU data transfers?
  • How can I safely use AI tools without leaking personal data?
  • What are warrant canaries, and how do they work in practice with tech companies?
Cloudflare, Privacy by Design, Chief Privacy Officer, Data Protection Officer, AI and Privacy, Data Localization, Transparency Report, Warrant Canaries, Digital Sovereignty, AI Governance
Full Transcript
A lot of things that we've dealt with around the privacy of personal data have been applied to AI, and so in some ways we've always been an AI company. It's just not the ChatGPT-style generative AI that everybody's thinking of now. We've always had machine learning. We've always been doing a lot of things to train and detect against threats. All of that is still happening. As these AI capabilities have expanded, we've been scaling up our instructions, our principles, our data use requirements and controls. All of those things have been scaling to deal with AI. Hello everyone and welcome to This Week in Net; this is the February 6th, 2026 edition. We're already in the second month of 2026. This week we're going to talk about privacy in 2026, but also 2025, and understand a bit of the evolution of the past few years, with AI at the helm for sure, but also digital sovereignty. Many things we're going to discuss. Last week was Privacy Week, with Privacy Day, so we're actually using that as a way to jump into privacy. And for that, I have Emily Hancock, our Chief Privacy Officer. As usual in our show, when we need anything about privacy, we go to Emily, of course. I'm your host, João Tomé, based in Lisbon, Portugal. A very rainy Lisbon, Portugal, still with a new storm coming. Actually, if you check the Cloudflare Radar site, and also our Twitter/X account, you can see how, for almost seven days now, Portugal has been impacted by the storm, with power outages in at least two regions. Let's do a small detour through our blog. Earlier this week, we had a blog post about improving global upload performance with R2 local uploads. That's about our storage product. R2 now reduces request duration for uploads by up to 75% with local uploads: it writes object data to a nearby location and asynchronously copies it to your bucket, all while the data is available immediately.
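The local-upload behavior described above, acknowledging the upload at a nearby location and copying to the home bucket in the background, can be sketched as a toy model. This is an illustration of the general pattern only, not Cloudflare's implementation; the class and its field names are all hypothetical.

```javascript
// Toy model of a "local upload" flow: a put() returns as soon as the
// object is written to a nearby store; replication to the bucket's home
// region happens later, asynchronously. All names are hypothetical.
class LocalUploadBucket {
  constructor() {
    this.localStore = new Map(); // nearby location: fast writes
    this.homeBucket = new Map(); // the bucket's configured region
    this.pending = [];           // queue of keys awaiting replication
  }

  // Returns quickly after the local write; the data is readable immediately.
  put(key, value) {
    this.localStore.set(key, value);
    this.pending.push(key);
    return { ok: true };
  }

  // Reads fall back to the local copy until replication completes.
  get(key) {
    return this.homeBucket.get(key) ?? this.localStore.get(key);
  }

  // Background job: copy queued objects to the home bucket.
  replicate() {
    for (const key of this.pending.splice(0)) {
      this.homeBucket.set(key, this.localStore.get(key));
    }
  }
}
```

The point of the pattern is that the client-visible latency is the nearby write, while the cross-region copy is taken off the request path.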
This one is for the builders out there. Also, from the Cloudflare Workers developer platform perspective, we had a blog post on Friday about building vertical micro frontends on Cloudflare's platform. This is all about deploying multiple Workers under a single domain, with the ability to make them feel like single-page applications. In this blog we take a look at how service bindings enable URL path routing to multiple projects. And next week we are aiming to have an episode about Molt Worker, which is a self-hosted personal AI agent, or even assistant, without the need of an Apple Mac Mini. Without further ado, here's my conversation with Emily Hancock. Hello, Emily. How are you? Hi, good to see you. Yeah, very exciting coming off of Privacy Week last week. That's true. That's true. We actually did an internal conversation, so this is a replay in a sense. For those who don't know, where are you based? I don't think you're there now. No, I'm not there currently, but I'm usually based in San Francisco. Exactly. This is something that comes up frequently; just to give folks an example, we had a few episodes with you already in the past. One, for example, in 2024 about privacy, Europe's GDPR, the Digital Services Act; we covered that in an episode in February of 2024, actually, and people can search for that one. But also last year we did one with Rory from our compliance team about very specific privacy certifications, and why certifications and compliance are relevant for everyone, even those who don't think they're relevant for them. So we've already discussed some topics here. The idea here is to have a general perspective of 2026 and this AI age as well. Let's start with this perspective of trying to make people understand a bit of what a chief privacy officer really does. You're the first chief privacy officer of Cloudflare, right? Yeah, I am. I am. Well, it's interesting.
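The service-binding routing mentioned above can be sketched as a small router Worker that dispatches by URL path to other Workers exposed as bindings. This is a minimal sketch under assumptions: the binding names (`DOCS`, `SHOP`) and paths are invented for illustration, and in a real project the bindings would be declared in the project's wrangler configuration.

```javascript
// Minimal path router in the shape of a Workers fetch handler.
// `env.DOCS` and `env.SHOP` stand in for service bindings to other
// Workers; both names are hypothetical.
async function route(request, env) {
  const { pathname } = new URL(request.url);
  if (pathname.startsWith("/docs")) return env.DOCS.fetch(request);
  if (pathname.startsWith("/shop")) return env.SHOP.fetch(request);
  return new Response("Not found", { status: 404 });
}
```

Because every request enters through the one router, the separate projects appear to the end user as a single application on a single domain.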
I mean, I started at Cloudflare, it'll be eight years in April, and coming in I actually started as a data protection officer, which is a title I still hold in addition to chief privacy officer, because at the time it was really about focusing on GDPR. I started about a month before GDPR, the European General Data Protection Regulation for those who may not know, went into effect, and at the time a lot of companies like ours were putting in place sort of a lead privacy role. GDPR actually requires certain companies, if they meet certain standards, to have a data protection officer. So a number of companies, whether they met those standards or not, were establishing this idea of a chief privacy officer or data protection officer, somebody who was kind of head of the privacy organization. And so when I started at Cloudflare, the idea was to have somebody who would build up a privacy program. And that means everything from making sure you have a good privacy policy in place, making sure that you have the ability to respond to data subject access requests, and documenting a lot of your privacy practices with privacy impact assessments. And then under the GDPR there's a specific type of impact assessment you're supposed to do, called a data protection impact assessment, which is basically the idea of figuring out, for the products or services you're offering, what the privacy impact is on the individuals using those products and services. So if you have to give some personal information, or information is collected from you, in order to use those products and services, the idea is you're supposed to weigh whether there's any kind of privacy harm when you collect that information, or whether any privacy harm might be outweighed by the benefit. Right? If you're going to buy some shoes online, you want those shoes.
So you're going to have to give some personal information in order to be able to purchase the shoes and have them shipped to you. That's a pretty good trade-off. An example in the other direction: there was a famous case where there was a flashlight app that you could put on your phone, and it was collecting, I think, location information. There's no need to collect location information when all you're doing is trying to use your phone as a flashlight. So that would be an example of a case where the impact on privacy is greater than it should be for the purposes of the data collection. That was kind of what the big focus was eight years ago, and what a lot of chief privacy officers were really focusing on was standing up programs that would do that kind of documentation. Also training, making sure people knew how to handle personal data. And what's been really interesting is that in the last eight years, the chief privacy officer role has really evolved into something kind of different. Not everywhere, but in a lot of places, those same muscle-memory things that we developed as privacy officers, doing the data protection impact assessments, writing privacy policies, training people, talking to product teams about how to use personal data appropriately, privacy-by-design principles, figuring out how to retain data, understanding where your data is, all of these kinds of things were started by data protection and privacy legislation, and that's what privacy officers focused on. But all of those things have become increasingly important for AI, with all the different ways that we're using data now to do more machine learning, for generative AI, for all the AI-enabled tooling that companies are building.
All of these things really rely, I think, on principles and foundations that are data protection principles and foundations, because what customers are worried about, what regulators are worried about, and what companies are worried about is: is our data going to go into this AI machine in the sky, wherever that is, and is that going to be used in a way that invades someone's privacy? Is my data going to get sucked up into this machine so I can never get it out? A lot of those privacy concerns are also present with AI. So what I'm finding now, when I talk to a lot of chief privacy officers, is that we all have this kind of expanded function where we're thinking about governance not just in strict GDPR terms or strict data protection law terms, because there are data protection laws in, I think, something like 200 countries, and in the United States almost every state has some kind of privacy law. That focus is still there and is still important, and it's still a thing that we continue to focus on. But a number of us are also figuring out how to do responsible AI. We're writing policies around how to use data with AI. We're doing training; this is partly required by the AI Act, which says you have to do training to make sure there's AI literacy. So a lot of those same documentation steps, those same responsible data use steps, the impact of using AI on personal data, all of those things are also relevant for this sort of new technology we're dealing with. And then a lot of privacy officers now, like myself, are also dealing with cybersecurity, and that means: what are we doing with personal data in the event of a breach? What are our processes for notifying people? What are our processes for making sure breaches don't happen? Really, that's what you want. And so a lot of people are also doing that.
So the chief privacy officer role now, compared to what it was eight years ago, is kind of this bigger, sprawling thing around data governance generally, and it touches on a bunch of different disciplines. And so we're also seeing people with titles like digital responsibility officer, or trust officer, and so these titles are changing a little bit. But privacy officer is still all about data, and in some cases it's not even just about personal data anymore, because we've got the EU Data Act, which regulates non-personal data. So yeah, it ends up being this big job dealing with a lot of different things, but at its core it's really all about making sure data is being used responsibly, no matter what the technology is. You mentioned a lot there. One of the things I'm always surprised by is how people, whether a builder or the average user, many times don't understand how important privacy is in what they do. They only think about it when something goes wrong. And you mentioned something that we covered in previous episodes as well: what you do after a breach, after an issue, after a problem, the resilience, the steps you have in place, will determine the impact of an issue or a problem. Right? So having procedures and steps in place: first, trying to avoid bad things happening, but also, if something bad happens, trying to limit it where it happens so it doesn't spread to other places, and also having good measures in place, a plan B, a plan C, what we do if this happens, what we do if that happens. That entails a lot of discussions with different teams, right, even at Cloudflare? Yes. Yeah, we work really closely with the security team, the IT team, somewhat the people team, the customer support team. So all of these different teams, and sales, really anybody who's touching customer data.
I will say one of the things that we're really fortunate about at Cloudflare is that our mission is to help build a better internet, and I think so many of our employees who join really firmly believe in that mission, even if maybe they never articulated it that way in their head; when they join, they're here because they believe in that mission. And so we actually have a really privacy-focused culture that has a lot to do just with the mindset of the people coming in, and not because it's imposed top-down. I mean, it comes from both directions, right? The leadership of the company is very concerned about making sure we're protecting data. We're a cybersecurity company, after all, so that's at our core. But I think people who join Cloudflare are also really, really careful, and sometimes I've run into engineers who have locked data down more than they really need to in some cases. So I think we have that culture, and we're really fortunate here that we don't have a lot of people who are trying to push the envelope in big ways, or who are careless with data. I think we have a very security- and privacy-focused culture, and that helps make my job easier in that respect. And then, working a lot with the security team, we've developed a number of playbooks, because you always want to sort of prepare for the worst and then do everything you can to make sure the worst doesn't happen. And so we do figure out: how are we looking at vendors? How are we trying to be sure that the vendors aren't going to be a source of vulnerability for us? How are we going to protect against outside bad actors? What if there are bad actors inside? How are we going to protect against that?
What kind of controls and alerts and auditing, all that detection capability. We talk a lot with the security team about how to figure out what the right signals and detections are that we want to have in place, and then, should something happen, how are we going to lock things down, how are we going to protect against it? It's kind of like having fire doors in a building. Let's say the fire is in one area; that fire door sort of slows the spread, or maybe it allows you to contain the damage in one area and keep everything else safe. And so there's a lot of work to figure out how to put those kinds of protections in place too. Makes sense. One of the things we typically mention as well is the privacy-by-design perspective in terms of building something and putting something out there, the data protection perspectives. What can we say about how Cloudflare actually deals with the amount of traffic metadata every day? How do we try to do this, really? Yeah, there's a lot of traffic that crosses our pipes. I think the big thing is, first of all, it goes back to what Cloudflare is about and how we're different, maybe, from some of these other big technology companies, which is that we are a cybersecurity company. Our business strategy is not about tracking and monetizing end-user data on our network in order to profile them for ads. So that's kind of the first principle: we know where our lines are in terms of what we will and won't do with data. And then, we don't collect a lot of personal data. Now, we have storage products, and I don't know what people put in their storage products, so maybe there's personal data there. We don't touch that.
But in terms of the data that crosses our network, when we sit as a reverse proxy, or when we are dealing with a corporate network through our Cloudflare One corporate network protections, there's not a lot of personal data. There are IP addresses, which in some cases, I know, some regulators believe are personal data, but in many cases we have nothing to tie the IP address to, so it's kind of a meaningless piece of data for us. Not meaningless, I shouldn't say that; it helps us understand where in the world somebody is, or what kind of ISP they're using, but we can't say, oh, that's Emily's IP address. We don't have that kind of capability. And that's one of the big privacy-by-design principles too: you want to make sure you're collecting the minimum amount of personal data necessary, and we try to design our products to collect the minimum amount of data that we need. We don't have a lot of personal data. And then the other big principle is to make sure that your retention is only as long as you really need it for business purposes. So we've put a lot of effort into making sure that that's true, and not just holding vast amounts of data for as long as possible. Obviously, there's then the security life cycle, making sure that we've secured the data along the way, and then transparency, trying to be as open as we can with our customers and with our end users about what data it is that we have, what we're doing with it, and how long we keep it. And so those are really the principles that we try to embed everywhere.
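The two principles Hancock describes here, collect the minimum and keep it only as long as needed, translate directly into engineering practice. A toy illustration of both, not Cloudflare's actual pipeline: the field names, the IP-truncation choice, and the 24-hour window are all invented for the example.

```javascript
// Hypothetical retention window for log entries: 24 hours.
const RETENTION_MS = 24 * 60 * 60 * 1000;

// Minimization: keep only the fields needed for the stated purpose,
// and truncate the IPv4 address so it no longer identifies a host.
function minimizeLogEntry(raw) {
  return {
    timestamp: raw.timestamp,
    status: raw.status,
    ipPrefix: raw.clientIp.split(".").slice(0, 3).join(".") + ".0",
  };
}

// Retention: drop entries older than the configured window.
function enforceRetention(entries, now) {
  return entries.filter((e) => now - e.timestamp < RETENTION_MS);
}
```

The design point is that minimization happens at ingestion, before the data is stored, so downstream systems never hold the full identifier in the first place, and retention is enforced mechanically rather than by policy document alone.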
Yeah, and that's why we have a product counseling team that works a lot with our product engineering teams to embed those privacy-by-design principles. And as a former tech journalist, one of the things I was able to see up close in many situations was this tension: it's helpful to have data, for sure, for many reasons, for AI, for all sorts of purposes. It's very helpful for the person, the actual user, for the internet, for us to block attacks. But it's also important for that data to be protected, right? So others can't see it, or the data is encrypted; in many, many situations the encryption is really important. And again, even at Cloudflare scale, having the data also helps, like blocking DDoS attacks, knowing where the users are, monitoring what is happening, and diverting attacks in many situations. So having data is actually helpful for cybersecurity for many reasons. Having the least amount of data possible, and not using that data for any advertisement or sales or something like that, is also relevant in this situation, right? Yeah. I mean, yeah. So bot management is a really good example of that. I think with bot management, our products and services there are designed to protect against bad bots. And now, in the AI age, there are good bots; there are a lot more good bots now with AI agents. But in general, it's this idea of making sure that you allow in the human traffic and you block the bad. And you don't need a lot of personal data for that. There are a lot of signals about whether the traffic is valid traffic, or human traffic versus bad bot traffic. And those signals don't have to say, again, that it's Emily, but there are signals of humanness that we can detect. So we don't need the personal data to do a lot of that work.
There are signals that we need about different kinds of devices and patterns, but it's not personal data. And we don't even have the ability to track that to a person, even if we wanted to try. We don't have the data sets and that kind of thing, and we don't want those kinds of data sets. So yeah, a lot of those capabilities don't require that kind of identification of a specific person. It makes sense. One of the things that comes to mind with that in perspective is something that many people are concerned about these days, with geopolitical situations: government requests. There are governments known for aggressive surveillance. So in what way can we explain a bit of our process there, in terms of government requests, legal gray areas, and also the history of Cloudflare there? Yeah, I mean, we have a pretty strong history there, actually. So we have a transparency report; that's kind of the first thing. We do have a transparency report where we talk about our approach to government requests for data, and we report on the number and type of requests that we've gotten from governments. And I think the first principle is that we require due process. There has to be some kind of legal process in order for us to even respond. Now, there are some small caveats for emergencies, but again, we are looking for indicators that due process and emergency disclosure procedures are being followed, and we verify that it's a law enforcement agency looking for that kind of information. But we do require due process. And then, unless we're required by law not to give notice, we always give notice to whoever's data is requested. So we notify people, we let them know that the information has been requested, and we give people the opportunity to object to that if they want to.
And our privacy commitments about not disclosing customer data extend to government requests. So if we get a government request that is overbroad, or that we think violates the law or presents a conflict of law, we will push back on that. There was a case, and now I'm going to blank on the year, but many years ago we fought a national security letter from the United States government. We pushed back on that. It was classified at the time; we couldn't talk about it when we did it, but the order to seal it has expired, so we can talk about it now. So we have a history of pushing back against these kinds of requests where they're overbroad or they seek information where we believe there is a conflict of law, and then we provide notice. And the other thing is, in our transparency report we have what are called warrant canaries. A warrant canary is this idea that you say: here's the list of things that we will never do. And then, if you do get some kind of government request that would force you to do one of those things, and you've exhausted your efforts to push back and you still are required to do the thing, then you take down that promise. In that way, you're not really disclosing that you had to do the thing, but you are removing the promise, so then people kind of know. For example, one of the warrant canaries we have is that we have never installed any law enforcement software or equipment on our network, and we've never provided law enforcement a feed of content transiting our network.
So we've got, I think, five or six warrant canaries in our transparency report, and I think that's a really important part of our mission too: if it's a lawful request and we need to provide information, we will, but when we think it's overbroad or raises a conflict of law, we absolutely push back. And I think the warrant canaries are pretty strong in terms of the things that we won't do and that we will fight. Of course. Makes perfect sense, and it's part of Cloudflare by now, over the years, for sure. One of the things that comes up more now than five or ten years ago, potentially, is the digital sovereignty perspective. It's not only the European Union right now that is trying to have more data sovereignty specifically; there are also other countries and other regulations. What is the 2026 perspective here on this? And we have some tools there as well. Yeah, sovereignty is tricky. It's very, very tricky. I mean, that has evolved too. The focus for a while was really on data localization: how do we keep our data in a certain country? That's still the focus; there's still a lot of that. But this idea of sovereignty now is not only, can we keep data in a certain country, but also, we want to protect it from outsiders in a different way. So it's an evolving conversation about what it means to have sovereignty, and what it means to make sure that the data in a certain geographic location is immune from another government, perhaps. Years ago, when we were very GDPR-focused: the GDPR has rules around whether data can leave the EU and go to the United States, for example, to be processed, and some other countries have these kinds of rules too. Some of them have it for all kinds of data; some are very sector-specific, like Australia, which has a law about healthcare data but not other data.
So a lot of these laws had rules that said: if you're going to transfer data, you have to meet certain contractual commitments, and you have to really promise that when you take data out of the home jurisdiction, that data is going to be protected with the equivalent types of protection under law that are guaranteed to it in the home jurisdiction. And so we put in place standard contractual clauses under the EU GDPR. We have a number of certifications; as you mentioned, we did the Global Cross-Border Privacy Rules certification, and that allows for transfers from countries that are part of that consortium, which is a lot of Asian countries. We have the EU-US Data Privacy Framework certification, which allows for transfers from the EU, and also the Swiss-US one. So those transfers are allowed. We've taken a lot of steps in those areas, and yet there are still countries that don't have those kinds of frameworks, India for example, where there are also some localization requirements. And there are still customers who, regardless of these kinds of certifications, regardless of the frameworks that allow for data transfers to be considered adequate, feel very strongly that they want their data inspected locally. They don't want it inspected outside of Europe, for example. And so we offer a Data Localization Suite that gives customers the ability to have data inspected in a jurisdiction of their choosing, which means that the inspection and the decryption of the packets only happen in their preferred jurisdiction. Otherwise, when it's traveling the world, the packets are encrypted. We also have a customer metadata boundary. Right now it's only in Europe, but if you want your customer log data, the end-user data that's identifiable to you as a customer, that data can be stored in the EU and not leave.
We have not been able to offer that in jurisdictions other than the EU and the US currently. And so I think our road map is figuring out how we expand those capabilities, because as much as we think that location is not a proxy for privacy, and just because there's a law in a certain jurisdiction does not make the data more private or safe in that jurisdiction (we believe very strongly that it's the technology that should secure the data, not the law), we get that customers have legal obligations. There are public sector customers, governments, who have their own rules about data needing to stay in certain jurisdictions. And so we understand that, and we want to make sure that we can meet those customers' needs. Also, there's a lot of work being done to figure out how we can bring more of these offerings to our customers, to give them the flexibility to use our global network but also to have data stay in certain jurisdictions in ways that meet their legal obligations. So I think more to come on that. There's not a specific thing I can talk about right now, but there's a lot of work being done to figure out what kind of options we can give. It's definitely a trend currently for different governments; it's being discussed for sure. But you raised a very important question there, which is that it's more important to have data secure, in terms of privacy, and not vulnerable, because even if the data is in your country, if it's not well protected, attackers can still go there and get the data. So it's also about the protection in place, sometimes even more important than data localization, although localization is definitely a trend. One of the things that we haven't discussed yet, but I would say it's on everyone's mind, is AI and privacy. And of course, we're not only a company about Zero Trust and corporate networks, or a reverse proxy, or a CDN.
We're also a company of builders, of developers, with Workers, and even inside the company we're constantly building and shipping things. What would you say is the Cloudflare approach to this AI age, given that we're a very innovative company in terms of how we want to use AI tools to make the best of what we launch? What is the balance between shipping with AI and also having privacy in mind? Yeah, it's on everyone's minds right now, and there are a few different things. Internally, a couple of things we're really aware of and talking to teams about are proper data use and lots of conversations about what data can and can't go into AI systems. We have certain approved systems that are allowed for use within the company, and you can't use unapproved tools. That's very important for internal development. We have rules around customer data, and you cannot use customer data for certain things. For example, we're not training on customer content; that's one of our AI principles. On our public-facing trust hub, we have a bunch of FAQs about how we're using AI and handling it responsibly, and we also have our AI principles, where we make a number of commitments to our customers. We do a lot of training internally to make sure people are aware of the rules, and we have access controls set up to protect data that way. At the same time, you have to allow for innovation, and we've got this incredible platform that allows for a lot of innovation and lets our customers use AI. So we've tried to develop products for our customers to be able to use AI more safely.
So we have a Cloudflare security suite for AI with a whole bunch of different offerings that can help protect against certain types of personal data going into AI prompts. Sovereignty is an issue with AI as well, so we try to allow inference closer to the jurisdictions where customers are. Our public policy team wrote a great blog post a few months ago about AI sovereignty and sovereignty for an AI stack, so I encourage people to check that out. Going back to the beginning of our conversation, a lot of the things we dealt with around the privacy of personal data have been applied to AI, so in some ways we were already prepared for AI, and as these AI capabilities have expanded, we've been scaling up how we deal with it. One of the things I always find interesting is something you actually said: if you use an AI tool, if you build something with AI, make sure that you're the human in the loop, that you're responsible for what comes out, right? That perspective, that you can do great things with AI but you're the human who will be responsible for what comes out, is important. Yeah. I think that's the thing we all have to remember about AI right now: it's kind of like having an intern. An intern who can be really, really helpful, but you need to check their work. So yes, having a human in the loop is incredibly important, and we emphasize that for our internal use, for anything you're going to put out. Our engineers have to check the code.
If they're using AI for code, they have to check it and make sure it's accurate. And it's the same thing we all have to do a little bit in our own lives now: you have to ask, is this an AI hallucination? Is this an AI deepfake? We're still at a point where you can't quite trust everything the AI gives you, because AI is designed to give you an answer. It's designed to kind of bluff its way through. I shouldn't say bluff, because I don't think AI can think about bluffing yet, but it's designed to give you an answer. It's not designed to say "I don't know," or "I can't help you," or "I can't find it." It's going to try to figure out an answer and give it to you, whether that answer is correct or not. So all of us in our daily lives are figuring this out from time to time, and remembering that there's still this need to work out what's true and what's AI. And in terms of what we're building, it's really important that we have people checking for bias, checking for accuracy, and reviewing all that AI intern output. Makes sense. One thing I also think is relevant here is the perspective of points of failure when you do something with AI. As you've mentioned in the past, if you're using a dataset with personal information, maybe that's not the use case where you should trust AI the most. The point of failure is bigger if you're using real personal data. So good judgment is usually helpful in the AI age too, right? Yeah, I think that's kind of AI 101: you don't put your personal data into the AI machine. That's something I talk to my family and friends about, all of those things.
You know, that giant medical report you got? Maybe AI can help you understand a couple of terms, but maybe don't put the entire thing, with all of your personal data, into ChatGPT or whatever it is. So always be careful. And like I said, we also have a data loss prevention type of tool that can help block personal data from going into prompts. If somebody does put in some personal data, it'll catch that and say, no, you can't submit that. I hope we're going to see more and more of these tools get rolled out for public consumption, because it's hard to think about how to guard your personal data. You can also turn off memory on some of these AI tools you're using, so they don't remember the data and it won't go into a training set. I always tell family and friends to make sure to do that and to check the privacy settings on whatever AI tools they're using, so they can protect themselves that way too. Thinking of 2026 and what's coming in the next few months: from the privacy perspective, what are the main things on your mind, in terms of regulation, general concerns, challenges, opportunities? Yeah. I think that's just going to continue to be a theme, because everybody's trying to maximize their use of AI right now. So that continues to be something I think about a lot, and that we think about with our internal teams, with our security team. That's a big deal. And I think we're going to continue to see different bodies, different governments think about how to regulate AI. You've got the AI Act in Europe on the one hand; in the United States, you've got a very anti-regulatory approach.
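The prompt-scanning idea described above can be illustrated with a tiny pre-submission gate: scan the prompt for patterns that look like personal data and refuse to send it if anything matches. This is a toy regex sketch, not Cloudflare's actual DLP product; real DLP engines use far richer detection than a few regular expressions:

```python
import re

# Toy illustration of a DLP-style gate in front of an AI prompt.
# Not Cloudflare's product; real detectors are much more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US-style SSN
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the kinds of personal data detected in the prompt, if any."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def submit(prompt: str) -> str:
    findings = check_prompt(prompt)
    if findings:
        # Block submission and tell the user what tripped the filter.
        return "blocked: prompt contains " + ", ".join(findings)
    return "submitted"

print(submit("Summarize this report for me"))           # submitted
print(submit("My SSN is 123-45-6789, what do I owe?"))  # blocked
```

The same shape applies whether the gate runs in a browser extension, a corporate proxy, or the AI tool itself: inspect before the data leaves, not after.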
Asia is taking a bit more of an approach of "we're going to regulate, but our regulations are going to boost AI use." It's kind of all over the map. So I think that's going to be a continued area of focus: watching what's going to happen and trying to remain flexible in your approach as a company, because you have to balance something like the AI Act in Europe against deregulation in the United States. How do you exist in both places? I think that's going to continue to be an issue. You mentioned digital sovereignty; absolutely, that's going to continue to be a conversation in 2026. The geopolitics right now, as a lot of people know, are a little bit interesting and colorful. So figuring out how to respond, and how to help companies adapt and be resilient in the strange place we're in right now with geopolitics, is going to be really important. And again, security is always an issue. AI is making the bad guys more powerful. Just as AI is giving the good guys more power to do good things, it's giving the bad guys more power to do bad things. Unfortunately, I think we're going to see an increase in deepfakes, and an increase in AI coding that allows attackers to develop their malware more quickly. All the things you can imagine the bad guys doing, it's ramped up. Everything feels like it's happening faster and more intensely with AI. And another thing we're continuing to see, which got talked about a lot last year and which I don't think is going to go away, is the question of insider threat: making sure your corporate networks are protected not only from the outside but also from the inside, and using tools to help alert and detect if you've got employees doing things they shouldn't be.
And I think that's where a lot of our Zero Trust software comes in. It helps with both, right? It used to be the castle-and-moat design, and that was great if you trusted everybody inside the castle and just tried to keep people out. But now there may be people inside the castle who aren't so trustworthy either. So Zero Trust is really key, and it's going to be a key focus of 2026 as we continue to guard against threats that, unfortunately, come from everywhere. And with AI, that's even more the case. It's even easier to attack effectively, because AI helps there as well. But it helps us fight back too; it can help the good guys. So it's a cycle: we're improving with AI, but the attackers are also improving with AI. It's really a cat-and-mouse game. Why not end with the perspective of a wish list? If you had a wish list for privacy and the internet for 2026, or maybe even further out, related to AI or not, what would it be? What would you like privacy to achieve? Man, that's a hard one. I think my wish list would be much better education: helping people really understand how they can protect their own data as individuals. So now I'm thinking of us as individuals and consumers, and empowering people with more information and better choices. That's a big wish list. And building tools where, well, we all hate cookie banners, so if there were easier ways to stop tracking of people on the internet who don't want to be tracked, that would be on the wish list. And then things we can do to detect deepfakes and hallucinations. That's not always about personal data, but deepfakes are.
So, better tools to protect our own digital identities against deepfakes, and having control over what happens to your identity. That's what I'd fix if I could wave a magic wand. It's more consumer-oriented; in terms of the tools we have for protecting our customers, I think we already have a lot of great tools, and I'm not sure I have a wish list there. It's more on the consumer side that I feel technology is outpacing what we as individuals can do. It's very hard for individuals to keep up with privacy settings and trying to control their own data. So my wish list would be tools to make that easier. Makes perfect sense. It's a great wish list. Let's hope there's more education, for sure. That's a great one. This was great, Emily. And that's our time. All right. Great to talk to you, as always. As always. And that's a wrap.
