Prevent data exposure in AI

Cloudflare | 00:06:12 | Mar 26, 2026

Cloudflare shows how to protect sensitive data when employees use AI tools, balancing productivity with robust data-loss protections.

Summary

Cloudflare’s overview makes a clear case: blanket bans on GenAI are losing value as AI drives real productivity gains, but data exposure remains a top risk. The speaker details a practical, context-aware data protection strategy that goes beyond simple allow/block decisions. Cloudflare’s SASE solution uses AI-powered DLP to detect sensitive data in prompts and logs, covering PII, source code, customer data, financials, and credentials across major AI platforms like Google Gemini, ChatGPT, Claude, and Perplexity. A key feature is distinguishing between content and intent, and enabling AI context analysis to reduce false positives for data at rest. The demo walks through DLP profiles, AI context analysis, and integration with secure web gateway policies to enforce blocking, logging, and user coaching when risky prompts are detected. Viewers see how on-ramp protection stops sensitive data before it leaves Cloudflare’s network, with real-time coaching prompts and a detailed prompt log for audits. The video also acknowledges that some data oversharing is inevitable and highlights the need for a balanced governance approach that preserves AI-enabled productivity. Finally, Cloudflare emphasizes broad coverage, including API and event-hook integrations for GenAI tools, to secure AI interactions end-to-end across human-to-AI and machine-to-machine flows.

Key Takeaways

  • AI-driven DLP detects and blocks prompts containing sensitive data (PII, source code, customer data, financials, credentials) before it leaves the network.
  • AI context analysis adjusts detection confidence by considering surrounding context, reducing false positives for data at rest.
  • DLP profiles are linked to specific CASB integrations (for OpenAI and Anthropic) and are applied through secure web gateway policies for data in motion.
  • Content versus intent detection lets teams tailor protections; you can disable content filtering and rely on intent-only checks if appropriate.
  • When a risky prompt is detected, Cloudflare blocks the request, logs the DLP payload, and returns a coaching prompt to users.
  • The on-ramp approach stops sensitive data at the Cloudflare edge, preventing transmission to AI services like Gemini.
  • Logs provide a complete view of AI-related DLP violations and user prompts for post-incident analysis and policy refinement.
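The enforcement rule described in these takeaways (block + log + coach when a DLP profile matches AI-app traffic) can be sketched as a payload builder for a gateway rule. This is an illustrative sketch only: the field names loosely follow Cloudflare's Gateway rules API shape, but the selector names in the `traffic` expression (`dlp.profiles`, `app.name`) and the settings keys are assumptions, not authoritative — consult Cloudflare's API documentation for the real schema.

```python
# Sketch: build the JSON body for a secure web gateway HTTP policy that
# blocks, logs the DLP payload, and coaches the user when a DLP profile
# matches traffic to an AI application. Field and selector names are
# illustrative assumptions, not Cloudflare's documented schema.

def build_dlp_block_rule(dlp_profile_id: str, app_name: str = "Google Gemini") -> dict:
    return {
        "name": f"Block sensitive data in {app_name} prompts",
        "action": "block",
        "filters": ["http"],  # apply to HTTP traffic inspected by the gateway
        # Hypothetical wirefilter-style expression: match when the request
        # hits the DLP profile AND targets the named AI application.
        "traffic": (
            f'any(dlp.profiles[*] in {{"{dlp_profile_id}"}}) '
            f'and app.name == "{app_name}"'
        ),
        "rule_settings": {
            "block_page_enabled": True,  # surfaces the user-coaching message
            "block_reason": "Sensitive data is not permitted in AI prompts.",
        },
    }

rule = build_dlp_block_rule("11111111-2222-3333-4444-555555555555")
```

A payload like this would then be POSTed to the account's gateway rules endpoint; the same profile can be reused across rules for ChatGPT, Claude, and other AI apps.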

Who Is This For?

Security engineers and IT leaders responsible for enabling AI in the enterprise while protecting sensitive data. Ideal for teams evaluating Cloudflare’s data protection stance when adopting GenAI tools.

Notable Quotes

"When a user inputs a prompt, Cloudflare detects, blocks, and logs that prompt with sensitive data."
Opening summary of the core capability: prompt-level detection and response.
"We leverage multiple language models to understand not just the content but also the intent of prompts."
Explains the AI-powered approach to nuanced prompt analysis.
"If traffic matches one of the DLP profiles and if it's inside a Gemini prompt or upload, then we're going to block the request, log the DLP payload, and record the prompt for future reference."
Demonstrates the enforcement workflow at the edge.
"The reality is that your workforce will likely overshare data with AI."
Gives a reality check on user behavior and the need for guardrails.
"Coming back briefly to the DLP profile, I'm going to disable the content filter on this page and simply leave intent."
Shows how to tune between content and intent protections during the demo.

Questions This Video Answers

  • How does Cloudflare's AI security protect sensitive data when employees use ChatGPT or Gemini?
  • What is AI context analysis in Cloudflare DLP and how does it reduce false positives?
  • How do DLP profiles integrate with secure web gateway policies for AI prompts?
  • Can you log and coach users about AI data usage without blocking legitimate productivity?
  • What options exist for protecting data in API integrations with GenAI tools?
Full Transcript
In spring 2023, an employee at a household-name electronics company leaked confidential information by using ChatGPT to review internal code and docs. That incident led to a ban on GenAI tools at that company, and many enterprises followed their example. A few years on, a blanket ban on GenAI apps may feel too extreme because the productivity and competitive advantages of using AI are too big to ignore. But every business is still worried about making headlines by exposing sensitive data with AI. Today, teams need a data protection strategy that moves beyond binary decisions to block or allow and instead extends visibility and controls over user interactions based on context. Cloudflare helps you do just that, safeguarding data flows when your users interact with AI tools so you can adopt new tools with confidence. When a user inputs a prompt, Cloudflare detects, blocks, and logs that prompt with sensitive data. We do this with data loss prevention, or DLP, detections for PII, source code, customer data, financial information, credentials, and more. We recognize that traditional DLPs use simple keyword matching or regular expressions, which is insufficient for the nuance of AI traffic. That's why our DLP uses AI for AI security. We leverage multiple language models to understand not just the content but also the intent of prompts. This enables our AI-powered prompt protection, which includes systematic detections, topic classification, and security guardrails for major platforms like Google Gemini, ChatGPT, Claude, and Perplexity. So, if the topic of a prompt is intended to request PII or malicious code, or attempts to get around an AI model's policies, Cloudflare will block and log that, too. Let's take a look at these data protections and topical guardrails in action. In this video, we're going to cover how Cloudflare's SASE solution protects sensitive data in your organization now that AI is a part of your employees' workflows.
We'll approach it from three places: content detection, intent detection, and full visibility into what prompts people are sending. Let's start in DLP profiles. These are the detection rules we're going to apply to both data in motion and data at rest across your organization. If we look at a profile targeting personal information, we can see a number of predefined detections, like bank account numbers and Social Security numbers. But moving over to another profile specifically designed for AI prompts, you'll notice it distinguishes between content and intent, and we'll dig into that difference in a minute. Below that, we're going to turn on AI context analysis. This lets the DLP engine adjust its confidence in a detection based on the surrounding context, so ultimately you get fewer false positives for data at rest. Let's look at our CASB integrations for enterprise AI services like OpenAI and Anthropic. We can see some of the major detections here are linked to the DLP profiles you apply to the integration. On the next page, we can see we've already found a lot of sensitive information that hasn't been properly stored, and I can click into one of the findings for details. Now, if you're going to secure data in motion, you'll need to apply these DLP profiles to a secure web gateway policy, as we're doing here. According to our expressions, if traffic matches one of the DLP profiles and if it's inside a Gemini prompt or upload, then we're going to block the request, log the DLP payload, and record the prompt for future reference. We'll also give the user a helpful prompt to coach them about proper AI usage policies. Now, I'm going to upload a bunch of customer data to the AI and ask it to parse it for me. Gemini is going to think for a while, but eventually it returns a network error.
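The "AI context analysis" idea mentioned here can be illustrated with a toy detector. This is a conceptual sketch of the general technique (pattern match plus context-based confidence scoring), not Cloudflare's engine; the keywords, window size, and confidence values are invented for illustration.

```python
import re

# Conceptual sketch: a naive detector flags any SSN-shaped number; a
# context-aware pass then raises or lowers confidence based on nearby
# keywords, which is how context analysis cuts false positives.

SSN_PATTERN = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")
CONTEXT_HINTS = ("ssn", "social security", "taxpayer")  # illustrative hints

def detect_ssn(text: str, use_context: bool = True):
    """Return (match, confidence) pairs for SSN-shaped strings in text."""
    findings = []
    for match in SSN_PATTERN.finditer(text):
        confidence = 0.5  # base confidence from the pattern alone
        if use_context:
            # Look at a small window of text around the match.
            window = text[max(0, match.start() - 40): match.end() + 40].lower()
            if any(hint in window for hint in CONTEXT_HINTS):
                confidence = 0.95  # supporting context raises confidence
            else:
                confidence = 0.2   # bare number: likely an order or tracking ID
        findings.append((match.group(), confidence))
    return findings
```

With context enabled, "SSN: 123-45-6789" scores high while "Tracking number 123-45-6789" scores low, so a policy thresholding on confidence fires far less often on harmless matches.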
Now, this is because my traffic is being on-ramped to Cloudflare's network before it reaches Gemini, which applies the security profiles I set against my outbound traffic and stops the message entirely before it leaves the Cloudflare network. You can also see a helpful coaching message appear at the bottom of the screen from our device client. Coming back briefly to the DLP profile, I'm going to disable the content filter on this page and simply leave intent. This means the AI won't do anything about sensitive information unless it detects the prompt is inappropriate, risky, or malicious. Coming back to the AI, we can see it parses the customer data and flags it for duplicates. However, when I ask the AI to tell me specifics about a high-value customer, the prompt is considered to have malicious intent and is stopped the same way as before. Finally, in logs, I can see a list of all the AI-related DLP violations that have occurred in my network. After decrypting the log, I can see what my users have been asking and whether or not that poses any risk to my organization. The reality is that your workforce will likely overshare data with AI. In fact, a recent survey found that 93% of employees admit to putting information into AI tools without approval. Security teams must strike a balance between enforcing controls and unlocking productivity with AI usage. This demo focused on how Cloudflare can help you navigate that balance, primarily with inline data protections. But those data protections also extend to out-of-band integrations via APIs or event hooks with leading GenAI tools like ChatGPT, Gemini, and Claude. And with Cloudflare's broader AI security suite, you can secure the life cycle of AI communication across both human-to-AI and machine-to-machine interactions. To learn more about how Cloudflare secures AI adoption, check out the rest of our demo videos or try it for yourself by creating a free account on cloudflare.com.
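The log review step at the end of the demo amounts to filtering exported gateway records for blocked, DLP-matched AI traffic. A minimal triage sketch, assuming a hypothetical record schema (the field names `action`, `dlp_profiles`, `application`, and `prompt` are illustrative; real exports will use Cloudflare's own log schema):

```python
# Sketch of post-incident triage over exported gateway logs. The record
# shape below is hypothetical, invented for illustration only.

AI_APPS = {"ChatGPT", "Google Gemini", "Claude", "Perplexity"}

def ai_dlp_violations(records):
    """Return blocked AI-app requests that matched at least one DLP profile."""
    return [
        r for r in records
        if r.get("action") == "block"
        and r.get("dlp_profiles")            # at least one profile matched
        and r.get("application") in AI_APPS  # destination was an AI tool
    ]

logs = [
    {"action": "block", "dlp_profiles": ["PII"], "application": "Google Gemini",
     "prompt": "Summarise this customer list..."},
    {"action": "allow", "dlp_profiles": [], "application": "ChatGPT",
     "prompt": "Explain binary search."},
]
violations = ai_dlp_violations(logs)  # only the blocked Gemini prompt remains
```

Reviewing the surviving records' prompts against policy is then a human step: deciding which violations reflect real risk versus detection rules that need tuning.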
