Architecting the Perfect Workflow: Integrating Multiple AI Providers for Devs

Tony Xhepa | 00:19:25 | May 11, 2026

Tony Xhepa walks through stitching together Claude Code, Codex, opencode, and the Pi agent with OMX local models and OpenRouter to run free-model pipelines for dev workflows.

Summary

Tony Xhepa walks developers through a practical setup for multi-provider AI workflows. He demonstrates using the Claude Code, Codex, opencode, and Pi agents with OMX local models and free OpenRouter models, highlighting how to configure API keys, base URLs, and model profiles. The walkthrough covers switching between providers (OMX, OpenRouter, opencode Zen) and selecting free models like MiniMax, Nemotron, and Ring, with explicit steps to install the CLI tools and edit settings.json.

Tony also shows how to align Codex and Claude Code with his OMX setup, including how to point them at the OMX base URL and API key, and how to launch and verify different models. He walks through editing the ~/.config files and the model management in opencode to include additional local providers, then demonstrates switching Codex profiles (Qwen 27B, OpenRouter Ring) and testing each assistant's capabilities. The video includes practical notes on using VS Code to edit configuration files, managing API keys, and checking which skills each tool exposes (opencode, Claude Code, Codex). Finally, Tony summarizes how to mix local models with cloud providers, using OpenRouter and opencode Zen alongside OMX to tailor prompts, costs, and latency for Laravel-related workflows and general development tasks. This is a hands-on guide for devs who want a flexible, multi-provider AI stack without paying for every model, with concrete commands and file edits to copy and adapt.
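Based on the steps described, the Claude Code settings.json involved likely looks something like the sketch below. The env-variable names follow Claude Code's documented override convention; the API key and model ID are placeholders, not the actual values from the video:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://opencode.ai/zen",
    "ANTHROPIC_API_KEY": "YOUR_OPENCODE_ZEN_KEY",
    "ANTHROPIC_MODEL": "minimax-m2.5-free"
  }
}
```

Swapping the base URL and key for those of any other OpenAI- or Anthropic-compatible endpoint is what lets the same CLI talk to a different provider without touching the codebase.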

Key Takeaways

  • Install and configure the Claude Code CLI and opencode Zen with API keys from opencode and OpenRouter, then set up OMX as a local model provider.
  • Point Claude Code at OMX's OpenAI-compatible base URL and swap in models such as Qwen3 35B or MiniMax M2.5.
  • Create and edit OMX and OpenRouter blocks in the Claude Code and Codex configs to switch between providers without changing the codebase.
  • Use profiles (qwen-27, openrouter-ring) in Codex to quickly swap models and observe differences in responses and costs.
  • Edit the ~/.config files and settings.json to point at the OMX and OpenRouter endpoints, then verify by prompting the agent with "which skills do you have?" to confirm which provider is active.
  • Add opencode models alongside OMX by creating opencode.json and models.json entries, then relaunch the agent to test multi-provider prompts.
  • Verify that each tool (opencode, Claude Code, Codex) exposes its own set of skills (e.g., Laravel Fortify, graphify, Inertia) for targeted tasks.
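The takeaways above assume the provider keys are available as environment variables. A minimal sketch (the values are placeholders, and the OMX variable name is an assumption, since the video does not show the exact export lines):

```shell
# Placeholder keys -- substitute the real values from each provider's dashboard
export OPENROUTER_API_KEY="sk-or-v1-example"   # created at openrouter.ai
export OMX_API_KEY="8383"                      # the local OMX server key shown in the video
```

Putting these in your shell profile (e.g. ~/.zshrc) keeps the keys out of the config files themselves.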

Who Is This For?

Essential viewing for developers who want to orchestrate AI across multiple providers (local OMX models, OpenRouter, and opencode Zen) for cost-conscious, flexible workflows. Great for Laravel-focused teams and any devs experimenting with multi-provider AI stacks.

Notable Quotes

"Hello friends, Tony here. Welcome. In this video, I'm going to show you how you can use Claude Code, Codex, and the opencode and Pi agents with free models: using OMX, using OpenRouter, using opencode API keys."
Opening setup overview and the providers involved.
"So first we're going to see OMX, which I'm using for local models."
Introducing OMX as the local-model provider.
"And now, if you want to change this so that Claude Code defaults to a free model: let's see, I am in the root directory, and here, let's say cd .claude, okay."
Demonstrating how to switch default model sources via config.
"So, in the .claude directory, omx-settings.json. And if I hit enter now, as you can see, we have the model using this one."
Showing how to point Claude Code to a specific OMX model.
"Now let's create another profile with you. So I'm going to copy this and just paste it right here. Instead of qwen-27, I'm going to say openrouter-ring."
Creating a new Open Router profile and switching to Ring model.

Questions This Video Answers

  • How do I set up OMX with Claude Code and OpenRouter for free AI models?
  • What are the steps to switch AI model providers in Codex without changing the codebase?
  • Which free models can OMX serve locally, and how do I select them?
  • How can I mix local OMX models with cloud providers like OpenRouter and opencode Zen in a single workflow?
  • How do I configure API keys and base URLs for multiple AI providers in a Laravel-focused development setup?
Tags: AI workflows, multi-provider orchestration, OMX local models, OpenRouter, opencode Zen, Codex, Claude Code, API keys, model profiles, Laravel AI skills
Full Transcript
Hello friends, Tony here. Welcome. In this video, I'm going to show you how you can use Claude Code, Codex, and the opencode and Pi agents with free models: using OMX, using OpenRouter, using opencode API keys. So first we're going to see OMX, which I'm using for local models, and OpenRouter for free models. You can go to openrouter.ai, create an account, and create an API key. The same thing for opencode: go to opencode and then go to Zen. So download and install opencode, then go to Zen here, log in, and then you can create an API key. And as you can see here, we have some models. For example, we have this MiniMax, which is free; we have Nemotron 3 Super, which is also free; and also this Ring 2.6, which is free. There are other models too, but those are paid, so for now I'm going to show you only the free ones. First we need to install Claude Code — you can go to Claude Code and install the CLI — and also Codex. And then after that, if I come here — I am in the Laravel example project — and if I say, for example, pi here, by default I'm using this Qwen3 35 billion. Also, if I open opencode, I'm using this MiniMax M2.5. Now, if I open Claude Code, by default I have this MiniMax M2.5 free — I'm using opencode Zen here. And now, if you want to change this so that Claude Code defaults to a free model: let's see, I am in the root directory, and here, let's say cd .claude, okay. If I say ls -la here, I have some folders and files that belong to Claude Code, and you can see I have settings.json and some other files. I'm going to open this with VS Code. So here is the .claude directory — we have backups, cache, downloads and so on — but we have this settings.json file. Here is what we have by default: a JSON object, and inside I have the env object with the ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY, and ANTHROPIC_MODEL, plus a setting to enable search set to true.
For the base URL I'm using opencode.ai/zen here, and also the Anthropic API key. So you can go to Zen and create an API key — here I have this opencode API key — so copy that and paste it here. For the base URL you use opencode.ai/zen, and for the Anthropic model, go to the Zen models; I have chosen this MiniMax M2.5. I also have this omx-settings.json here. If we want to work with Claude Code and with local models, I have created this one for OMX. So it's the same shape as settings.json, but this is omx-settings.json, and here I have defined the base URL for OMX, which in my case — if I go to the dashboard, here are the API endpoints — for the OpenAI API we have this one, and for the Claude API we have this one, without the /v1. Okay, here it is. The API key is 8383 in my case — you can find it in the OMX settings, and here we have the API key; you can also change the port here if you want. The next thing is the model, and here I have downloaded some models: Qwen3 27 billion MLX, also a 4-bit quantized one, which is the default here, and also this one that I just downloaded. I have chosen to work with this one here — or, let's copy this one and paste it just to show you. So I'm going to paste this one here — or let's leave it, and I'll show you how you can launch Claude Code with the OMX settings. So let's go to this one, okay, and I'm going to say claude --settings, and here I have pointed it at the OMX settings JSON — so, in the .claude directory, omx-settings.json. And if I hit enter now, as you can see, here is the model — using this one. I'm going to stop this, and now let's copy this model and paste it here. I'm going to save. Now let's come here and do the same thing: hit enter. As you can see, we now have Qwen3 35 billion A3 billion UD MLX 4-bit. Okay. And just to prove to you that this is working, I'm going to ask: which skills do you have?
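The omx-settings.json variant described here would look roughly like the sketch below. The localhost URL, port, and model ID are guesses based on what is read out on screen, not verified values:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:8383",
    "ANTHROPIC_API_KEY": "8383",
    "ANTHROPIC_MODEL": "qwen3-35b-a3b-ud-mlx-4bit"
  }
}
```

It is then launched with something like `claude --settings ~/.claude/omx-settings.json`, which appears to be the command used in the video, leaving the default settings.json untouched.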
So: Laravel, system, Tailwind CSS, Claude Code configuration, workflow, loop, schedule, Claude API, graphify, init, review, security review, and simplify. Okay, now let me show you Codex. If I open Codex, as you can see, by default I have this Qwen3 35 billion, Claude 4.7 Opus reasoning. And now let's come here, cd back, and I'm going to cd .codex, because we also have Codex. If I say just ls -la, yeah, here are the files and folders for Codex specifically. And if I open this with VS Code — now, Codex is not the same as Claude Code; it doesn't work with JSON. Here we have a config.toml file, and in it we have specified the model, which in my case is this Qwen3, and the model provider — I have specified OMX, which is the default one. I also have the model providers here: OpenRouter, with name OpenRouter, base URL openrouter.ai/api/v1, and env key OPENROUTER_API_KEY — I have exported the OpenRouter API key in my shell config — and then we have the OMX provider. Okay, we use OMX here: model providers, omx, name OMX, and the base URL is this one. Now, I told you, if we go here we have the API endpoints for the OpenAI API — we have this one, which is the same as for Claude Code, but just adding /v1 at the end — and the key is the OMX API key; I have exported this one too, which is the 8383 one. But now let me just come here — you can see this is by default using Qwen3, but if you want something else, I have added the fast and cheap coding profiles down there, okay? So you can specify profiles here, and for the model I have chosen another model, which is Qwen3 again, but now 27 billion, 4-bit quantized; the model provider is OMX, but you can say, for example, model provider openrouter and choose something else. I'm going to show you right now. First let's come here and say codex --profile — maybe to zoom it here — qwen-27.
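Putting the Codex pieces just described together, the ~/.codex/config.toml can be sketched as follows. The field names match Codex's documented TOML schema, but the model IDs, local port, and profile names are illustrative guesses:

```toml
model = "qwen3-35b"          # default local model (placeholder ID)
model_provider = "omx"

[model_providers.omx]
name = "OMX"
base_url = "http://localhost:8383/v1"   # local OpenAI-compatible endpoint (assumed port)
env_key = "OMX_API_KEY"

[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"

[profiles.qwen-27]
model = "qwen3-27b"
model_provider = "omx"

[profiles.openrouter-ring]
model = "ring-2.6-free"      # placeholder ID for the free Ring model
model_provider = "openrouter"
```

With profiles defined, `codex --profile openrouter-ring` switches model and provider in one flag, with no edits to the project itself.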
So qwen-27: we have profiles, and the name qwen-27 here. And as you can see, we now have the model Qwen3 27 billion. Now let's create another profile with you. So I'm going to copy this and just paste it right here. Instead of qwen-27, I'm going to say openrouter- — and let me just go to OpenRouter, because I'm going to choose, yeah, this one, which is the Ring 2.6. So I'm going to say openrouter-ring, and let's copy this model. So the model is going to be this one, and the model provider is not OMX but OpenRouter. So I'm going to copy that, scroll down, and say the model provider for this one is openrouter. I'm going to save. Now let's come here, close this, and say codex --profile — not qwen-27, but here we have openrouter-ring — so let's say openrouter-ring, hit enter, and now we have this Ring 2.6 free. Let's say hi here, and yeah: "Hey there, I'm ready to help. What are we working on?" So if I ask: which skills do you have? — because here we are on Codex, we are not going to have the same skills as Claude Code — and maybe let's just zoom out a little bit. Okay, so here are the skills: OpenAI docs, Imagen, plugin creator, skill creator, skill installer, Fortify development (which is for Laravel), Inertia, Laravel best practices, testing, Tailwind, and Wayfinder. So from here to here we have Codex skills, then we have the Fortify and Laravel skills, then we have browser use — these are also Codex skills. Let me show you also the Pi agent: by default I have Qwen3 35 billion, Claude 4.7 Opus reasoning. And let's come here, cd, and for Pi we have the .pi directory. In .pi, let's just open this with VS Code — here we have the agent directory — I'm going to zoom it — and inside here I have created this models.json file.
So models.json: under providers I have right now only OMX; the base URL is the same OpenAI-compatible one, as you can see — the OpenAI completions API — with API key 8383, and models — here I have added three models, so we can change them. We have these three Qwen3 variants: this is the 35 billion, this is the 27 billion MLX, and this is another 27 billion quant. Okay. So we can say, for example, /model here, and because I have also added Cloudflare Workers, I have more models here. But if I just search for OMX, we can see Qwen: we have this Qwen3 27 billion MLX and the 35 billion. Okay, the last thing I'm going to show you: you can also do the same on Pi as for this one, to use opencode, but I'm going to skip it because it's the same — you just add the API keys and the base URL for OpenRouter instead. Now, if I open right here — let's see, if I say opencode — for opencode, I don't think I have MLX here. So if I say /models — yeah, we have only the MiniMax, and only the opencode API key. But let's change that. Let's close, come here, and go to ~/.config. Here we have opencode; open the opencode folder with VS Code. And here we need to create a file which is going to be named opencode.json. If you want, you can go to the opencode documentation: here we have providers, and you have Ollama, llama.cpp, Hugging Face — you have all of them — opencode Zen here, OpenRouter if you want. But let's use Ollama here — there's also Llama cloud, but I'm going to use Ollama — and I'm going to copy this. So copy this, come here, paste it. Okay, now I'm going to stop OMX, close this one, quit, and open Ollama. Let's see what model I downloaded — yeah, here is this Granite 4.1. Okay. So we can go to this one, and here is Granite. We can open it like this.
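The Pi agent's models.json is not fully shown on screen, so this sketch only guesses at its shape from what is read aloud — a provider entry with a base URL, an API key, and a model list. Treat every field name below as an assumption:

```json
{
  "providers": {
    "omx": {
      "baseUrl": "http://localhost:8383/v1",
      "apiKey": "8383",
      "models": [
        "qwen3-35b-a3b",
        "qwen3-27b-mlx",
        "qwen3-27b-4bit"
      ]
    }
  }
}
```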
So, for example, for opencode we can say opencode --model like this, and then specify granite 4.1, 3 billion or 30 billion. But instead of that, we can create this opencode.json file and specify: ollama, npm @ai-sdk/openai-compatible, name local, options — add the base URL to be this one — models — now add this model, which in my case is Granite 4.1, and also give it a name like this. Okay. Now let's come here and just open opencode, go to models, and if we search for granite, as you can see, here is the Ollama local one. Now we can do the same thing for OMX if we want. So let's just copy this, add a comma here, paste it, change from ollama to omx, with the same npm package; the name is going to be OMX. The base URL is going to be what we have — in my case it's on port 8383 — and for this one we also need the API key, which in my case is 8383. And models — let's launch — let's close this one, quit Ollama, open OMX, start the server, come here, go to the models manager, and I'm going to copy this one and add it here. Okay. And now let's just close and open opencode again and change the model. So /models, and search for qwen: we have Qwen3, and we have this one, which is from OMX. And let's just ask: which skills do you have? Here now we have different skills from Claude Code and Codex, and here are the skills for opencode. You can see we have only Fortify — well, we have graphify, which I have installed (I have it for the Pi agent, for Claude Code, for opencode, and for Codex) — and then we have Fortify, Inertia, Laravel best practices, Tailwind, and Wayfinder. So for opencode, only that one is extra beyond the Laravel project skills. Okay friends, that's it — that's what I wanted to show you. I think I have covered how we can set up Codex, Claude Code, opencode, and the Pi agent with different providers: with local models, with Zen from opencode, and with OpenRouter. If you like such videos, don't forget to subscribe, like, and share with your friends. All the best.
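To recap the final step, the ~/.config/opencode/opencode.json assembled in the video might look like the sketch below, following the provider format from the opencode documentation (an @ai-sdk/openai-compatible npm package plus a baseURL). The Ollama port is that tool's standard default; the OMX port, key, and model IDs are placeholders:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": {
        "granite-4.1": { "name": "Granite 4.1" }
      }
    },
    "omx": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "OMX (local)",
      "options": {
        "baseURL": "http://localhost:8383/v1",
        "apiKey": "8383"
      },
      "models": {
        "qwen3-27b": { "name": "Qwen3 27B" }
      }
    }
  }
}
```

Once both entries are in place, the /models picker in opencode lists models from every configured provider side by side.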
