OpenClaw tricks: change models, SSH & more
The video introduces practical tips to optimize a remote OpenClaw/Worker instance on Cloudflare, covering access methods, cost-saving strategies, and general setup improvements.
Practical OpenClaw optimization tips for Cloudflare Workers: cost-saving instance tweaks, model switching, SSH access, and easy collaboration.
Summary
Confidence from Cloudflare Developers walks through actionable ways to optimize OpenClaw on Cloudflare Workers. He starts by showing how to trim costs with container-based instances, highlighting that you don’t need top-tier horsepower, and walking through standard vs. custom instance types. He then demonstrates how to switch the model your OpenClaw instance runs on, including using Cloudflare Workers AI — which offers a free allocation of 10,000 neurons per day — and integrating various providers via the AI Gateway. The video also covers securely SSH-ing into the OpenClaw container, including enabling SSH in wrangler.jsonc and deploying with npm run deploy. Confidence explains how to pick the right model provider (OpenAI, Anthropic, Replicate, or the Cloudflare AI Gateway) and where to configure keys. He adds a quick tour of the Cloudflare AI Gateway UI to show supported providers and model IDs like GLM 4.7. Finally, he shows how to add team members via Cloudflare Access policies, making it easy to share a running OpenClaw setup with others. The overall message is that you can run OpenClaw efficiently on Cloudflare, with flexible hardware, cost controls, and accessible collaboration features. Confidence also references his prior videos for setup and secure self-hosting tips, linking to a broader workflow.
Key Takeaways
- You can reduce Cloudflare Worker/OpenClaw costs by selecting a standard or custom instance type instead of the highest tier, and by enabling sleep after inactivity (for example, 3 hours).
- Custom instance types let you define CPU cores, RAM (in MB), and disk (SSD) to tailor OpenClaw to your needs.
- Use npx wrangler secret put SANDBOX_SLEEP_AFTER to automatically put an OpenClaw container to sleep after inactivity, lowering ongoing costs.
- OpenClaw can run models from multiple providers, including OpenAI and Anthropic, as well as Cloudflare Workers AI via the AI Gateway, which is free up to 10,000 neurons per day.
- The Cloudflare AI Gateway provides a centralized way to select and switch models without changing code, with visible model IDs such as GLM 4.7 and GPT-OSS.
- SSH into the OpenClaw container is supported via npx wrangler containers ssh with the container ID, enabling direct Linux shell access for on-the-fly tweaks.
- You can add team members to your OpenClaw instance by configuring Cloudflare Access policies and inviting emails, making collaboration simple.
Who Is This For?
Developers running OpenClaw on Cloudflare Workers who want to optimize costs, experiment with different AI models, and collaborate securely with teammates.
Notable Quotes
"The beautiful thing about instances is that you can also set a custom instance type to specifically request the exact kind of hardware you want."
—Demonstrates flexibility in hardware provisioning for cost and performance control.
"You can actually tweak things a bit so that you don't end up spending a lot on your OpenClaw instance."
—Introduces the core premise of cost optimization early in the video.
"We support like literally every provider out there, including Replicate, and you can use models on Cloudflare Workers AI."
—Highlights the breadth of model providers and the AI gateway integration.
"This is running now. So I can try to interact with my instance and say hello."
—Shows a live verification that the model switching via Workers AI works.
"And you can SSH into your container instance and do whatever you need to inside the Linux environment."
—Presents secure, hands-on access to the container for advanced tweaks.
Questions This Video Answers
- How do I reduce costs when running OpenClaw on Cloudflare Workers?
- Can I use Cloudflare Workers AI gateway to run GLM 4.7 in OpenClaw?
- How do I enable SSH access to a Cloudflare Workers container with Wrangler?
- What are the steps to add teammates to my OpenClaw instance using Cloudflare Access?
- What model providers are supported for OpenClaw on Cloudflare Workers AI gateway?
Cloudflare Workers · OpenClaw · Wrangler · Cloudflare AI Gateway · SSH access · Containerized workloads · Cost optimization · Model switching · Cloudflare Access · Collaboration
Full Transcript
Hello everyone, welcome back to the channel. My name is Confidence and I am a developer advocate at Cloudflare, and it's great to have you all back here. In the previous videos, we showed you a detailed guide on how to set up Moltworker, or OpenClaw, on Cloudflare Workers. I'll have a link in the top corner here and also in the description. And in the last video we made on this topic, we also showed you how to securely access your OpenClaw instance if you choose to self-host it on a device you own, like a Raspberry Pi or Mac mini, and how to access it through Workers VPC.
I'm also going to leave a link in the top corner and in the description so you can go check that out. But in this video specifically, I'm going to be sharing tips and tricks to optimize your remote Moltworker or OpenClaw instance that's hosted on Cloudflare Workers. I'm going to show you everything you need to know, including how to access your instance via SSH, how to save costs, and all of the goodies in between. So let's get started. The first tip I'm going to show you is how to save costs by optimizing your instance.
And that's because Moltworker, or OpenClaw, on Cloudflare runs inside of a container sandbox. Containers are actually really cool because they enable you to run Linux containers that can access up to 6 TB of RAM, 1,500 vCPUs, and 30 TB of SSD. But the truth is, you don't need all of that horsepower to run OpenClaw on Cloudflare. So you can actually tweak things a bit so that you don't end up spending a lot on your instance. And one cool thing about containers on Cloudflare is that you're only billed when your instance is running.
So I'm just going to go to the browser real quick and show you some of the instance types on Cloudflare. I'm going to open this in a new tab — this is the developer docs — and you can see all of the standard instance types you have access to. There's lite, there's basic, and you'd need at least standard-1 to run OpenClaw on Cloudflare, but you can go up to standard-4, or in fact set a custom instance type. So let me go to my Moltworker (OpenClaw) instance and let's get this configured.
I'm just going to open up the wrangler.jsonc file in my editor. Down here, scroll to the bottom where it says instance type. You can change that from standard-4 to anything you want — I'm going to set this to standard-2. Let's give this a save, and we can deploy the application with npm run deploy. That's going to get the new instance configuration deployed to Cloudflare. The beautiful thing about instances is that you can also set a custom instance type to specifically request the exact kind of hardware you want to run your instance on.
So I'll also show you how to do that real quick. Okay, that's done. Let's open up the wrangler.jsonc file again, and I'm going to give this a custom instance type. I'll delete the instance type string — a custom instance type is an object instead. Here I can specify the number of CPU cores I want to give this instance, the amount of RAM in megabytes (this is going to be 4 GB), and also the disk space, which is SSD. So you can specify an exact custom instance type if you want to.
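Putting both options from this section together, the relevant part of wrangler.jsonc looks roughly like the sketch below. The field names and nesting here follow the Cloudflare Containers docs as I understand them, and the values are illustrative — check the docs for the exact schema your Wrangler version expects:

```jsonc
{
  "containers": [
    {
      // Option 1: a named tier — "lite", "basic", or "standard-1" through "standard-4"
      "instance_type": "standard-2"

      // Option 2: replace the string above with a custom object instead:
      // "instance_type": {
      //   "vcpu": 2,           // number of CPU cores
      //   "memory_mib": 4096,  // RAM in MiB (4 GB)
      //   "disk_mb": 8192      // SSD disk space in MB
      // }
    }
  ]
}
```

After saving either variant, a redeploy (npm run deploy) applies the new hardware configuration.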
But we'll just be using standard-2 for this, because I think that's generous enough. One other thing you can do to optimize your instance cost is to set it to go to sleep after it's been inactive for a while. You do that by setting an environment variable with the command npx wrangler secret put SANDBOX_SLEEP_AFTER. So I'm going to hit enter, and I want my instance to go to sleep after 3 hours of inactivity, so the value is 3h. You're not going to see what I typed in, but that's 3h, which is 3 hours.
When I hit enter, that's going to get the environment variable configured and deployed to my Cloudflare Worker, and then applied to my container instance. And of course, you can run npm run deploy to make sure the change has been applied. Moving on to the second trick: another cool thing you can do is tweak the model your OpenClaw instance runs on. By default, OpenClaw on Cloudflare Workers runs on Anthropic's Claude 4.5, a cheaper model compared to Opus. But you can actually use whatever model you want from whatever provider you want, including OpenAI, and even Cloudflare Workers AI.
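The sleep-after step can be sketched as the two commands below. The SANDBOX_SLEEP_AFTER spelling and the 3h value are taken from what's shown in the video — verify the exact variable name against the Moltworker setup guide:

```shell
# Set the inactivity timeout as a Worker secret.
# Wrangler prompts for the value (hidden input); enter "3h" for 3 hours.
npx wrangler secret put SANDBOX_SLEEP_AFTER

# Redeploy so the new setting is applied to the container instance.
npm run deploy
```

Because billing only accrues while the container is running, an idle instance that sleeps after a few hours stops costing you money until the next request wakes it.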
I'm going to quickly show you how to get that done, and that's by updating the API key of the provider you want to use. Anthropic and OpenAI are supported directly by configuring their respective environment variables, but you can use any provider you want through the Cloudflare AI Gateway — I'll show you that in a bit. So, if you want to use OpenAI, for instance, go ahead and set the OPENAI_API_KEY environment variable, or you can configure it through the Cloudflare AI Gateway. If we switch over to my dashboard, I'll show you all the providers supported.
So, this is my dashboard. I'm going to go to AI, then AI Gateway. I already have my AI gateway set up, which was done using the guide in the previous video — again, I'll link to it in the description below. This is my AI gateway for this video, and look at all the providers that are supported. We support literally every provider out there, including Replicate — shout out to Replicate; you can use models from Replicate too. But I think a really cool thing you can also do is use models on Cloudflare Workers AI.
And these models are very generous — we give 10,000 neurons per day free of charge to any user, so you don't even have to spend money to use these models with OpenClaw. To see how that's done, go to Workers AI and then to Models, and you can see all of the models that are supported. I think GLM 4.7 is actually really great, and this is the model ID. So I'm going to copy that, and what I want to do is point my Moltworker or OpenClaw instance at this model.
To do that, I'm going to update the right secret. For the value, what I want to set is the name of the model prefixed by the Workers AI provider. I'll set that here and show you what I entered in a bit. So I have that set up right now, and here's what I entered in that field: it's workers-ai followed by the model ID, which in this case is GLM 4.7 Flash.
You get the model ID from the path right here, where we copied it from the model page. It's the same for any model you choose, whether that's GLM or maybe you prefer OpenAI's GPT-OSS — this is where you grab the model ID; you just prefix it with the Workers AI provider and enter that into the AI gateway model environment variable. I already have that entered and deployed. Now, let's go check my instance — I'm just going to go to the dashboard.
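As a sketch, the model switch is another secret update. The audio doesn't make the exact variable name clear, so CF_AI_GATEWAY_MODEL below is a hypothetical placeholder — check the setup guide from the previous video for the real name. The value format (provider prefix, then the model ID copied from the Workers AI model page) is what's shown on screen:

```shell
# Hypothetical variable name — verify against your Moltworker/OpenClaw setup guide.
# When prompted, enter the value as: workers-ai/<model-id-from-the-model-page>
npx wrangler secret put CF_AI_GATEWAY_MODEL

# Redeploy so the instance picks up the new model.
npm run deploy
```

Any Workers AI model works the same way — swap in a different model ID after the workers-ai prefix.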
Let's refresh this, because we've made a deployment. And this is running now, so I can try to interact with my instance and say hello. You can see we have a response coming back — but this is using the model on Workers AI, which is GLM 4.7 Flash. How do you know? Head back to the gateway you configured for your Moltworker or OpenClaw instance, and if you go to Logs, you should see that the request that just came in was run on the Cloudflare Workers AI model, which is completely free to use.
You get 10,000 neurons per day, and that should cover your usage without you having to pay for a premium model hosted somewhere else. The third tip I'm going to share is how to securely SSH into your container instance. We're actually running a container in the background, and if we go to Compute, then Containers, this is the container where my OpenClaw instance is running. So how do I access and manage it? The control UI is really great — you can do a lot here to customize your instance — but in certain situations you just want shell access into the instance so you can make changes in a Linux environment.
So, how do you do that? I'm going to show you how to set that up right now. Let's head back to the Wrangler configuration for my Moltworker — that's the wrangler.jsonc file. What I want to do here is set up SSH access by entering this configuration value. All right, so this is the value I've just entered: we've enabled SSH via Wrangler. The next thing we want to do is set up the authorized keys that can access this instance. You want to give this a name.
This can be any name you like to identify the SSH key, and then you also want to add your public SSH key. You can grab that from your home directory — if you list the files in ~/.ssh, it's the .pub file; that's where you grab the value from and paste it in here. I'm going to configure mine off screen so you don't see my public SSH key. And when you have that done, feel free to go ahead and run the npm run deploy command.
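The SSH setup described here might look roughly like the fragment below in wrangler.jsonc. The field names are hypothetical — the video doesn't show the exact schema, so treat this as a shape sketch and check the Cloudflare Sandbox/Moltworker docs for the real keys:

```jsonc
{
  // Hypothetical field names — consult the docs for the exact SSH schema.
  "ssh": {
    "enabled": true,
    "authorized_keys": [
      {
        "name": "my-laptop",  // any label you like, to identify this key
        // Paste the full contents of your public key file (~/.ssh/*.pub):
        "public_key": "ssh-ed25519 AAAA... you@example.com"
      }
    ]
  }
}
```

Only the public key goes into the configuration; the private key stays on your machine and is used by your SSH client when you connect.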
That's going to get the configuration deployed so you can SSH into your instance. All right, I already have mine set up off screen with my public SSH key, so I'm just going to hit deploy, and that's going to get my SSH configuration deployed so I can SSH into the container instance and make changes to it. All right, that's deployed. If I head over to the dashboard — I'm going to close these tabs I'm not using anymore — we can take a look at the container.
Yeah, it's running. I think I have everything running without needing to restart the container, which is cool. The next thing I need to do is actually SSH into this instance. This is the command you need to SSH into your container instance: npx wrangler containers ssh, followed by the ID of the container you want to SSH into. To grab that ID, I'll head back to the dashboard, back to Instances, click on the one instance I have running, and then grab the container ID, which is this ID over here.
I have that copied, I'll paste it in right here, hit enter, and I should be able to SSH into that instance. And you can see we're in. I'm just going to do a cat /etc/issue — you can see we're running Ubuntu 22.04. And if we do a quick ls, you can see this is the skills directory of the Moltworker or OpenClaw instance, and we can go in and manage more stuff here. This is really cool. I'm going to exit out of this. And that's how you SSH into your OpenClaw instance on Cloudflare.
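The whole SSH session from this section condenses to a few commands. The container ID placeholder below must be replaced with the ID copied from the dashboard (Compute → Containers → your instance):

```shell
# Open a shell inside the running container (ID comes from the dashboard):
npx wrangler containers ssh <container-id>

# Once connected, you have a normal Linux shell inside the sandbox:
cat /etc/issue   # shows the distro, Ubuntu 22.04 in the video
ls               # lists the instance's files, e.g. the skills directory

# Leave the session when you're done:
exit
```

This is handy for one-off tweaks that the control UI doesn't expose, since you're working directly in the container's filesystem.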
The last tip I'm going to show you is how to add more members to your OpenClaw or Moltworker instance. In some cases you want to share your instance with family members, friends, or colleagues — so how do you actually get them added? To do that, head back to the dashboard, go to Workers & Pages, and select your Moltworker instance. Go over to Settings, go to the URL where you have Cloudflare Access configured, and click on Manage Cloudflare Access. That's going to take you to the Zero Trust dashboard, where you have that Access application configured.
You can see all of the rules here. What we want to do is go to the Policies tab, where you can add a new policy or edit the one you already have — in this case, the sandbox production policy. I'm just going to edit this, and here I can add more emails if I want to. I can add a second email of mine, for instance, and keep adding more, and that's going to get those emails added. Of course, I can save this, and anyone who has access to an email in the list can access my instance, which is really cool.
Awesome — so that's it for this video. Moltworker, or OpenClaw, on Cloudflare is really cool because you get to run your instance without buying new hardware. I hope these tips and tricks have been useful to help you optimize your instance and show you a few things you may not already know. If you're already using OpenClaw, please let me know in the comments below — I'd love to see what automations you're using it for. And with that, I'll catch you in the next video. Don't forget to like, share, and subscribe.
And I'll see you in the next one. Take care.