Build reports using Natural Language with Dynamic Workers

Cloudflare Developers | 00:07:04 | Apr 4, 2026
The video frames a common data-privacy challenge: you have data and want AI insights in plain English, without necessarily sending sensitive data to an external model.

Cloudflare’s Dynamic Workers let you build natural-language data reports without exposing data or deploying code, using edge sandboxing, TypeScript bindings, and pay-as-you-go compute.

Summary

Cloudflare Developers showcases a groundbreaking approach to querying data with plain English by using Dynamic Workers. Rather than sending sensitive data to a model, the AI writes JavaScript that runs in a sandboxed worker, with access strictly governed by a TypeScript interface. The demo emphasizes no deployment steps: the generated code is handed directly to the worker, executes at the edge, and returns live visualizations. Data bindings define what the AI can access, ensuring credentials and sources stay hidden. The worker runs in a restricted, sandboxed environment with no outbound network access, producing a chart and a sharable, live report URL. Since the code is generated once and cached, the underlying data can refresh without redeploys, keeping reports up to date. Cloudflare contrasts dynamic workers with traditional static deployments, highlighting cost efficiency: you pay only when compute happens. The talk ends with practical notes—how bindings act as keys, how to test locally, and where to find the agent and worker docs for deeper exploration.
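As a rough sketch of the pattern the summary describes (the names `DataBinding`, `getAIData`, and the `Env` shape here are illustrative assumptions, not the demo's actual API), the AI-generated worker might look something like this:

```typescript
// Hypothetical sketch of an AI-generated worker. `DataBinding` and
// `getAIData` are illustrative names, not the demo's actual API.
interface DataBinding {
  // The only call the generated code is allowed to make; the data source
  // and its credentials stay hidden behind the binding.
  getAIData(): Promise<Array<Record<string, unknown>>>;
}

interface Env {
  data: DataBinding;
}

const worker = {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // The sandboxed worker reads rows only through the binding; with no
    // outbound network access, all it can do is transform and return them.
    const rows = await env.data.getAIData();
    return new Response(JSON.stringify({ count: rows.length }), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

The key point of the shape: the generated code is written against `env.data`, and the host decides at bind time what that object actually is.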

Key Takeaways

  • AI writes a complete JavaScript worker based on a plain-English prompt, without exposing the entire dataset to the model.
  • Data bindings restrict the AI to a defined interface in TypeScript, so the model calls only sanctioned methods and never sees behind them.
  • The worker runs in a sandbox with no network access and no external API calls, ensuring data never leaves the environment.
  • Saved reports produce unique, live URLs that fetch fresh data on every load, thanks to edge execution on Cloudflare’s planetary network.
  • Code is generated once, cached at the edge, and the same static code can be reused while the data it reads updates automatically.
  • You only pay when compute happens (LLM calls and worker execution); idle code incurs no costs.
  • Dynamic workers delegate all route planning and execution to the AI within strict, predefined bindings, enabling flexible, on-demand logic.
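One way the "bindings as a contract" takeaway can be realized (a generic sketch — the interface text and prompt wording are assumptions, not the demo's actual code) is to embed the TypeScript interface in the model's instructions, so the model only ever writes calls to sanctioned methods:

```typescript
// Illustrative sketch: embedding the binding contract in the model prompt.
// The interface text and prompt wording are assumptions, not the demo's code.
const contract = `
interface DataBinding {
  getAIData(): Promise<Array<Record<string, unknown>>>;
}`;

function buildPrompt(question: string): string {
  return [
    "Write a JavaScript Cloudflare Worker that answers the question below.",
    "You may read data ONLY via env.data, which implements this interface:",
    contract.trim(),
    "The worker runs in a sandbox with no outbound network access.",
    `Question: ${question}`,
  ].join("\n\n");
}
```

The model sees only the shape of the interface, never the data or the implementation behind it.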

Who Is This For?

Essential viewing for developers building data-reporting tools who want to keep data on-premises, and for teams curious about edge-native AI where bindings and sandboxed workers enable safe, scalable AI-generated logic.

Notable Quotes

"The model's generating JavaScript, but we describe the contract in TypeScript so it knows exactly what to call and when to call back."
Shows the role of TypeScript interfaces as the contract between the AI and the worker.
"The worker existed for exactly as long as it needed to. And if I need it later, I can click out and I can see it."
Emphasizes the no-deploy, ephemeral, on-demand worker lifecycle.
"A dynamic worker is kind of like ride share... Every ride's a little bit different."
Offers intuition for how dynamic, AI-driven execution adapts per request.
"The model used getAIData exactly as described... it has no idea what's behind it."
Highlights the binding-based data-access model and data privacy.
"You only pay for the ride. I mean, or if you're using our models... if you're idle, you're idle."
Captures the pay-as-you-go economics of dynamic workers.

Questions This Video Answers

  • How do Cloudflare Dynamic Workers keep data from leaving my network when using AI to generate reports?
  • What are data bindings in Cloudflare Workers and how do they secure data access?
  • Can I generate live, edge-rendered reports with unique sharable URLs using AI-generated code?
  • What is the difference between a static vs dynamic worker in Cloudflare, and when should I use each?
  • How do I get started with building AI agents on Cloudflare and testing locally?
Tags: Cloudflare Workers, Dynamic Workers, AI-generated code, Data bindings, Edge computing, Sandboxed execution, TypeScript interfaces, Planetary network, Serverless computing, Agent architectures
Full Transcript
So, here's a problem. Tell me if this resonates. You've got data: survey results, customer records, internal metrics. And you want to ask questions of it in plain English. You want to generate a report. And to do that, normally with AI, you've got to send the data to the model, all of it. Maybe that's fine, but maybe it's sensitive. Maybe you don't want it leaving your infrastructure at all. What if AI could write code to analyze that data without ever seeing the data itself? That is what just happened. When I typed that prompt, it went to Workers AI, Cloudflare's built-in inference engine. The model didn't answer in words, right? It wrote a complete JavaScript worker, every line of it. But notice it's already calling a data binding that we haven't talked about yet. We'll come back to that.

Most of the time when AI generates code, you paste it somewhere, right? You deploy it, you wait. We didn't do any of that. This code string was handed directly to the worker. No deploy step, no waiting. The worker existed for exactly as long as it needed to. And if I need it later, I can click out and I can see it. And you can see how fast that's running.

And the way we give it to the AI is: before that model writes a single line of code, we tell it what it has access to. Not the data itself, just the shape of the interface; in fact, a TypeScript interface here. It's a definition. The model's generating JavaScript, but we describe the contract in TypeScript so it knows exactly what to call and when to call back. LLMs are pretty cool like that, right? So the model used getAIData exactly as described. It wrote code to call a method it was told existed, but it has no idea what's behind it, right? It's like when you hand your keys to a valet: you don't give them your whole key chain, your house keys, your office keys. You hand over one key, right? That's all this code gets: one key, getAIData. It can call that method, only that method. It can't see how it works. It doesn't know where the data lives or what the credentials behind it are.

And the thing is, it's not just the data access that we can control. This worker has no access to the internet either. It can't make outbound requests. It can't call an external API. It can't exfiltrate anything. You compose exactly what you want it to do before it runs: what data it can see, what it can reach. The model wrote the code, but we decide the rules here.

Let's run another one. So, here's another demo; we'll just kick that off. The model never saw this data, right? It's going to write code that fetches through a controlled interface. It runs in a sandboxed environment with no network access, and it hands back a chart. That's the whole trick. And now I can save this report, right? I can click save, it pops up down here, and I can open it. And again, it's super fast, right? It's on the edge. It's not a cached screenshot; that's the worker responding live. The same code, the same binding, the same data, but it's got this unique URL. It's a real sharable link, right? Anyone with that link gets a live report. Not a PDF, not a static export, a worker that runs every time. And it's fast, right? Because it's running on Cloudflare's planetary network, close to wherever your users are.

And here's the part that makes it interesting at scale. The worker's code is frozen, right? It's generated once and it's cached. But the data isn't. Every time someone loads this report, the worker calls getAIData fresh. If the underlying data set updates, new survey results, new countries, corrected figures, every saved report picks that up automatically. No redeploy, no rebuild. The code is static; the data, live. Each one of these is a saved worker, right? The same code. And I can also delete them, right? Gone. No tear-down script, no containers to stop. It's not a server sitting there, right? You're only paying for when you actually use it. That worker existed, it did its job, and now it costs nothing because it's literally gone.

And in case you missed it, a Cloudflare Worker is a small piece of JavaScript, right? Or anything that compiles to JavaScript. It runs on Cloudflare's planetary network, not on your server. It starts in milliseconds because it uses the same V8 engine as your browser. It's not a container. It's not a VM. It's serverless, right? So you only pay for what you use. No requests, no costs, right? It scales up, it scales down.

And let's try this. I worked with AI a little bit on this analogy. A regular worker is like a bus, right? It runs the same route every time. You decided where the stops are when you deployed it. It's the same road, the same behavior, every request. A dynamic worker is kind of like ride share, right? You don't decide the route in advance. You're not even driving, right? You describe where you need it to go and it figures it out on the spot. Every ride's a little bit different. It's got a different destination, a different route, and sometimes a completely different kind of vehicle. And you only pay for the ride. I don't know if that's the greatest analogy, right? Because for the bus you pay one thing, and for the car you don't.

So check this out; I really want you to hone in on this. You only pay when the compute's happening, right? So if there's an LLM call happening somewhere else, you're not paying for that (unless, I mean, you're using our models). When you're idle, you're idle. You're really idle. The thing is alive this whole time, but you're only billed when the compute happens. And that saves you a ton of money.

Now, sure, if you're running code that you didn't write, code that an AI generated on the fly, the obvious question is this: how do you keep that safe? And that's what bindings are for. You compose exactly what the worker can see before it runs. Not raw credentials, but an RPC interface. The worker calls a method; it never sees what's behind it. That's why AI-generated code could run and read real data from 65,000 survey respondents without the model ever touching it. The model didn't see it. The data never left the binding.

All right. So, workers run at the edge. It's super fast, no server required. Dynamic workers take that further: you hand them code at runtime, in a sealed sandbox that you control. Bindings are the keys that you hand over, and you decide what that code can touch. And the use cases are pretty broad, right? AI agents that write and execute their own code. Platforms that run logic your users provide. Ideas that go from prompt to running app in seconds.

The code for this demo is linked below. If you're testing it out, deploy it yourself; it's one command, right? If we come here, there's a deploy to Cloudflare button. Boom, you got it. You can play with it a little bit. If you want to go deeper on building agents on Cloudflare, the agent docs are linked here, too. And if you're brand new to Workers, I think you should start there first. I hope this quick demo inspired you to play with dynamic workers. Let me know in the comments what you're thinking about building. Thanks so much for hanging out, and we'll see you real soon.
