FLUX.2[dev] is AWESOME on Cloudflare Workers AI

Cloudflare Developers | 00:07:54 | Mar 26, 2026
Chapters: 9
Announces the Flux 2 model integration in Workers AI in collaboration with Black Forest Labs.

Flux 2 dev dramatically boosts photorealism and character consistency on Cloudflare Workers AI, with multi-reference editing and multi-image inputs that enable chained generations.

Summary

Cloudflare Developers showcases Flux 2 dev on Workers AI, built in collaboration with Black Forest Labs, and demonstrates how Flux 2 builds on Flux Schnell to produce more photorealistic images and stronger character consistency. The presenter highlights Flux 2’s ability to keep characters visually identical across images via multi-reference editing, and introduces multipart form submissions that let multiple input images influence a single generation. A hands-on multimodal playground walkthrough compares Flux 1 and Flux 2, revealing noticeably sharper detail and textures in Flux 2 outputs. The video also explains how to feed image references into Flux 2 and then refine results with an additional text prompt, effectively chaining two model runs. Viewers are directed to a changelog and developer docs with curl commands, Workers AI bindings, and code samples to get started quickly, including with multi-reference images. The overall message is that Flux 2 on Workers AI is a powerful, accessible path to higher-fidelity generations for projects like comics or branded visuals, with practical steps to experiment right away.

Key Takeaways

  • Flux 2 dev on Workers AI delivers significantly higher photorealism than Flux 1 Schnell, with more detailed textures and lighting.
  • Character consistency is enabled by multi-reference editing, allowing the same characters to stay visually identical across multiple images.
  • Multipart form submissions let multiple input images feed into a single Flux 2 generation, expanding the creative toolkit for complex prompts.
  • A multimodal playground demonstrates side-by-side Flux 1 vs Flux 2 outputs, with Flux 2 producing noticeably sharper, more detailed renders (at the cost of a longer generation time).
  • Chaining a Flux 2 output into a second Flux 2 run as an image reference, plus an additional text prompt, enables iterative refinement and niche styling (e.g., a zebra-pattern statue) across two sequential runs.
  • Cloudflare provides direct curl examples and a Workers AI binding to simplify integration, plus code samples for multi-reference image workflows.
  • You can download generated images from the playground, and Flux 2’s improvements are positioned as ready-to-try for projects like comics or branded visuals.
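
The takeaways above mention direct curl calls and a Workers AI binding. As a rough, hypothetical sketch (the model identifier and input field names below are illustrative assumptions, not confirmed by the video; check the linked docs for the real ones), a Worker might build and send a Flux 2 request like this:

```typescript
// Hypothetical sketch of calling Flux 2 via the Workers AI binding.
// The model ID "@cf/black-forest-labs/flux-2-dev" and the "images" field
// are assumptions for illustration; consult the Workers AI docs for
// the actual identifiers.

type AiBinding = {
  run(model: string, inputs: Record<string, unknown>): Promise<unknown>;
};

// Builds the input payload for a text-to-image generation request,
// optionally attaching reference images for multi-reference editing.
export function buildFluxInput(
  prompt: string,
  imageUrls: string[] = []
): Record<string, unknown> {
  const input: Record<string, unknown> = { prompt };
  if (imageUrls.length > 0) {
    // Reference images keep characters consistent across generations
    // (field name assumed).
    input.images = imageUrls;
  }
  return input;
}

// Intended to be called from a Worker's fetch handler, where `ai` is env.AI.
export async function generateImage(
  ai: AiBinding,
  prompt: string,
  refs: string[] = []
) {
  return ai.run("@cf/black-forest-labs/flux-2-dev", buildFluxInput(prompt, refs));
}
```

In a real Worker you would pass `env.AI` as the binding; the stubbed interface here just makes the request-building logic easy to reason about in isolation.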

Who Is This For?

Essential viewing for developers building image-generation features on Cloudflare Workers AI or teams exploring Flux 2 for photorealistic art, character-driven visuals, or multi-input prompts. It’s especially helpful for those experimenting with comics, branding assets, or any project needing consistent characters across frames.

Notable Quotes

"Hey everybody. So today we released a new Flux model to Workers AI along with Black Forest Labs."
Announces the Flux 2 dev release and the collaboration with Black Forest Labs.
"Flux 2 allows us to take all of that stuff and make it even better."
Frame reference to Flux 2’s improvements over Flux Schnell.
"The final thing I’ll mention is when it comes to the sort of Workers AI specific implementation we have the support for multipart form submissions to the API which means that we can pass multiple images into the prompt for the Flux 2 image generation."
Introduces multipart form submissions to the API.
"So this is input that we can pass in. And what I want to do here is first I’m going to kind of simplify this just a little bit."
Prepares the audience for a hands-on demo with inputs.
"What’s cool here is this actually gets fed into the Llama 3.2 model that we have running on workers AI to generate a sort of new fleshed out version of the prompt."
Describes the integration path via Llama 3.2 for prompt refinement.
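
The playground flow the quotes describe, a vanilla prompt rewritten by a text model and then fed to the image model, can be sketched as a small two-stage pipeline. The model IDs and response field names below are assumptions for illustration, not taken from official docs:

```typescript
// Hypothetical two-stage pipeline mirroring the playground demo: a Llama 3.2
// text model fleshes out the user's basic prompt, then the enhanced prompt
// drives Flux 2 image generation. Model IDs and the `response` field name
// are assumptions.

type Runner = (
  model: string,
  inputs: Record<string, unknown>
) => Promise<{ response?: string }>;

export async function enhanceAndGenerate(run: Runner, basicPrompt: string) {
  // Stage 1: ask the text model to rewrite the prompt, as the demo does.
  const enhanced = await run("@cf/meta/llama-3.2-3b-instruct", {
    prompt: `${basicPrompt}\n\nEnhance the above prompt so I can feed it to an image generation model. Reply with just the enhanced prompt.`,
  });

  // Stage 2: feed the enhanced prompt to the image model.
  return run("@cf/black-forest-labs/flux-2-dev", { prompt: enhanced.response });
}
```

Injecting the `run` function (rather than calling `env.AI.run` directly) keeps the chaining logic testable outside a Worker.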

Questions This Video Answers

  • how to use Flux 2 dev with Cloudflare Workers AI step by step
  • what is multi-reference editing in Flux 2 and how does it keep characters consistent
  • how to use multipart form submissions in Cloudflare Workers AI
  • how does Flux 2 compare to Flux Schnell in image generation quality
  • where to find Flux 2 dev docs and curl examples for Workers AI
Tags: Flux 2 dev, Flux Schnell, Cloudflare Workers AI, multi-reference editing, multipart form submissions, Llama 3.2 model, vanilla prompt vs enhanced prompt, character consistency, photorealism, multimodal playground
Full Transcript
Hey everybody. So today we released a new Flux model to Workers AI along with Black Forest Labs. We're super excited to announce Flux 2 dev is now available in Workers AI. And in this video, I want to show you a little bit about how to use it and give you some resources to go and start adding it into your own projects. Uh before we do that though, let's take a look at what is new in Flux 2 dev. So for a while now, we've hosted the Flux Schnell model in Workers AI. It's one of our most popular models in the catalog. It's super super photorealistic. It is really really capable. It's like my favorite model that we have for doing image generation. And so adding Flux 2 allows us to take all of that stuff and make it even better. Uh in particular, it's really good at how we kind of describe here. Uh understanding the physical world, which is kind of a crazy idea. Uh it's really good at generating all sorts of things that are photorealistic. Hands, faces, fabrics, logos, all sorts of stuff that is really powerful. Um you can see some examples here like really incredible imagery generated uh just out of the box with the Flux 2 model. Um the other thing that's really interesting about it is the ability to do this character consistency. Um so you can actually give it images as part of the input for a prompt to allow it to sort of keep consistent characters throughout multiple images. So, if you're trying to design like a comic book or something like that and you want characters to look the same through every single iteration of an image, uh you can do that by passing this multi-reference editing feature, uh which we'll actually look at here in just a little bit. 
Um the final thing I'll mention is when it comes to the sort of Workers AI specific implementation um we have the support for multipart form submissions to the API which means that we can pass multiple images into the prompt for the Flux 2 image generation and make use of all of them uh inside of what's generated by the model. Um so let's actually take a look. Let's do something fun here and generate something. Um this is our multimodal playground which I will um share a link to in uh all the places I'm sharing this video. You can jump and kind of explore it yourself. Um this is what we have uh as the image arena example. So normally this actually goes and um kind of generates a prompt and compares a bunch of different models that we have um to kind of see what they look like. You can see honestly the evolution of different models as we've gotten more advanced and take a look at like how these models have gotten better at generating images. But I've kind of changed it here just a little bit so that we can actually see a comparison of the Flux 1 model which is what we've had up until today as well as the new Flux 2 model. Um, and the way that this works here is that we have this sort of basic prompt which is the Alps at sunset. That's our default prompt here. And then we also have this additional text that says enhance the above prompt so I can feed it to an image generation model, reply with just the enhanced prompt. And what's cool here is this actually gets fed into the Llama 3.2 model that we have running on Workers AI to generate a sort of new fleshed out version of the prompt that then gets passed simultaneously to both the Flux 1 Schnell model and the Flux 2 dev model. So let's go ahead and run that and you can see that it's already uh taken that prompt. It's it's sort of refined it here and generated. So here's the Flux 1 model. It took about 2 seconds. Now the Flux 2 model is going to take a little bit longer. 
It is much more computationally intensive. Um, but what's cool here is we'll be able to compare between the two and see the difference. Uh, ideally looking significantly better, right? So now if we zoom in here, you can see this is already really impressive, right? This is like a I mean to me, if someone posted this on Instagram, I would probably believe this is real. Um, the Flux 2 model though is even more spectacular, right? So there's all this level of detail here. You have all this sort of multi-stage different parts of the image. It's really impressive and and really powerful. So you can already see between Flux 1 and Flux 2, there's been a huge jump in quality. The image that came out of Flux 2, I would say, looks much better. It's much more photorealistic. But what I want to show now is you can actually pass images into Flux 2 and use it to both refine images or use them as reference points uh in whatever you're building. So you can see we have this uh kind of collection of four images here. So this is input that we can pass in. And what I want to do here is first I'm going to kind of simplify this just a little bit. I'm going to just so we can kind of fit it all onto one screen. So, I'm going to delete these um you know, these kind of prompt refining tools. And I'm just going to use a simple prompt here. Uh I'm going to do something like maybe uh a woodworker working on a wooden statue of a corgi. And what I'm going to do is uh connect this to both of these models. So, this is the flux one model and then the flux 2 model here. And let's just see what it looks like if I run that basic prompt here. Again, I removed that sort of prompt refinement stage. Um, you can see this is flux one. It's okay, right? The corgi is missing an ear, which is funny. Um, everything about it's like pretty kind of cartoony and there's like this interesting sort of it's like this AI to it, right? Which is which is sort of funny. 
Now, if I come down here, this looks much more photorealistic. Um, you can see there's like kind of this wooden texture here, which is super cool. Um, there's like all this level of detail here. This is like a significantly better image than uh what we have in the first model. But now what is very cool about this is I can refine this and actually pass this as sort of an additional stage to a second uh instance of the flux dev uh model. So what I'm going to do is connect this here. So pass this image in as one of the references inside of this model. And then I'm going to grab another text input here and I'm going to say maybe something like uh change the color uh of the statue to be zebra pattern. And then I'm going to take that and pass that in as the text prompt here. So essentially I'm taking the output of this first run, chaining it into the second flux 2 run, and then adding an additional text prompt to sort of customize this even further. So I'll go ahead and run that. Once again, we'll see a new version here. Good news, this one has two ears, which is great. Um, still has that I'm trying to figure out what this is. You got some interesting hair going on. Uh, but you know, still has that super stylized look. Um, and now if we come down here, we can see once again this is really, really cool looking. Uh, and this time the, uh, the woodworker has like a wooden tool that he's working with, which is super cool. And now what's going to happen is that's going to get passed as a reference into the second instance of the model additionally with this uh, text prompt to refine the image. And so what we should see here when it completes is that it's taken it and actually sort of refined it. And and look how similar those two are, right? So the vast majority of this image has stayed the same. You still have the woodworker. He's still working on stuff, but we said change the color of the statue so it has the zebra pattern. And it's really cool looking actually. 
Very, very impressive. Um very impressive output here. So if you're playing with the multimodal stuff, you can at any point download an image here. You can click this and download it. Um you can grab all this stuff. This is uh fully available for people to play with. So I'll put a link in the description. You can check out the multimodal playground. Um the last thing I want you to check out is the change log which I will also link. Uh this is our announcement in our uh developer docs for this new model including instructions um on how to get started using it inside of workers AI. So you can do direct to curl here. You can use the workers AI binding if you're building inside of Cloudflare workers. And we have all these code samples so you can get up and running very very quickly including multi-reference images. So sometimes working with images can be a little tricky. Um we have some code here that you can just do immediately. Um grab this code, plug in some image URLs and get started playing with multi-reference uh images. And then of course as I mentioned at the beginning, check out our blog post if you want to learn more about kind of how we got this running, how we've worked with Black Forest Labs on this feature. Uh we're super excited. This is a really powerful model. It's a lot of fun to play with. So, let me know what you think. Give it a shot.
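
The chained refinement shown in the demo, where the output of one Flux 2 run becomes an image reference for a second run with a new instruction, can be sketched as follows. The model ID and the `image`/`images` field names are illustrative assumptions:

```typescript
// Hypothetical sketch of the demo's chained workflow: run Flux 2 once to
// generate a base image, then pass that image back as a reference for a
// second run with a refinement prompt (the zebra-pattern example from the
// video). Model ID and field names are assumptions.

type RunFn = (
  model: string,
  inputs: Record<string, unknown>
) => Promise<{ image?: string }>;

const FLUX2_MODEL = "@cf/black-forest-labs/flux-2-dev"; // placeholder model ID

export async function chainRefine(
  run: RunFn,
  firstPrompt: string,
  refinement: string
) {
  // Run 1: generate the base image from the initial prompt.
  const first = await run(FLUX2_MODEL, { prompt: firstPrompt });

  // Run 2: reuse the generated image as a reference and apply the new prompt.
  return run(FLUX2_MODEL, { prompt: refinement, images: [first.image] });
}
```

Usage would mirror the video's example, e.g. `chainRefine(run, "a woodworker carving a wooden statue of a corgi", "change the color of the statue to be zebra pattern")`, with `run` wrapping `env.AI.run` inside a Worker.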
