OpenCode + Gemma 4 31b = Full Apps INSTANTLY (100% FREE)

Income stream surfers | 00:12:25 | Apr 11, 2026
The speaker discusses the impressive build quality of Gemma 4 31B, suggesting that pairing it with other strong models could enable virtually free, high-quality website development.

Gemma 4 31B on Open Router delivers near-Opus quality for free, with strong UI, fast prompts, and local-run potential on Ollama.

Summary

Income stream surfers presents a hands-on look at Gemma 4 31B, a free, 31-billion-parameter open-source model that he argues beats many proprietary options at practical web-building tasks. The host suggests that pairing Gemma with models like Opus or Gemini can yield impressive, website-ready results without a cloud bill. Open Code and Open Router are shown as the free access pathways to Gemma 4 31B, with performance claims including fast builds and high-quality UI/UX and SEO. The video stresses that Gemma can also run locally on capable hardware (e.g., a PC with 128 GB of RAM or a workstation with an RTX card) via Ollama. A real-world test demonstrates building a Next.js project from a prompt, iterating on headers, footers, and content until a detailed, production-ready site emerges. The host also vents about the model's absence from Google's Antigravity, but remains enthusiastic about open-source progress and the potential to create SaaS apps or Astro sites at zero cost. Throughout, praise for Gemma's capabilities is balanced with practical caveats about incomplete translation coverage and the model's likely training provenance. A quick sponsor shout-out for Harbor SEO.ai closes the video, tying the excitement to a concrete toolkit for SEO-enhanced web projects.

Key Takeaways

  • Gemma 4 31B is claimed to outperform Opus in quality, UI/UX, and SEO within Open Router, and is available for free as of the video.
  • Refreshing Open Code's model list surfaces Gemma 4 31B via Open Router, enabling zero-cost coding workflows and local experimentation.
  • A practical test shows Gemma initializing a new Next.js project, then iterating on structure (header/footer) and content to achieve a detailed build with few dead links.
  • Local running is feasible on machines with substantial RAM (e.g., 128 GB) or GPU-equipped PCs; Ollama provides a path to offline usage for Gemma 4 31B in many setups.

Who Is This For?

Essential viewing for developers curious about free, open-source LLMs for web development and SaaS prototyping, especially those considering Gemma 4 31B as a local-first alternative to cloud models.

Notable Quotes

"Look at the quality of this build, guys. This is a 31 billion open source Apache license model, and it's building at the same level as much better models."
Initial praise of Gemma 4 31B’s quality and licensing.
"This model is currently free on Open Router, and I'm going to show you just how good it is in this video."
States the free access pathway to Gemma 4 31B.
"If you have a decent computer, you can run this on your local system and use it inside Ollama."
Shows local deployment viability with Ollama.
"The technical build here is perfect. There's pretty much not a single dead link anywhere."
Assessment of the produced website quality.
"I could easily run this model for free on Ollama on that computer."
Reiterates local-run feasibility on powerful hardware.

Questions This Video Answers

  • How does Gemma 4 31B compare to Opus 4.6 for web app development?
  • What are the steps to run Gemma 4 31B locally with Ollama and Open Code?
  • Is Gemma 4 31B truly free to use on Open Router, and what are the caveats?
  • Can you build a Next.js site from prompts using Gemma 4 31B and Open Code?
  • What are the hardware requirements to run Gemma 4 31B locally?
Gemma 4 31B, Open Code, Open Router, Ollama, local AI deployment, Opus comparison, Next.js project build, AI web development, SaaS prototyping, Open-source AI
Full Transcript
Look at the quality of this build, guys. This is a 31 billion open source Apache license model, and it's building at the same level as much better models. Now, I have a theory. If you pair this with some other really good model, like Opus or one of the better Gemini models, you will be able to build incredible websites pretty much for free. This model is currently free on Open Router, and I'm going to show you just how good it is in this video. Look at the quality of this. This is better than Opus. I'm not even kidding. The UI/UX of this is better than Opus. The SEO here is better than Opus. Do you know how insane that is for a 31 billion parameter model? This model has no business being this good. Let's jump into things. Now, this model has not been getting that much hype, and I'm not really sure why. People: Gemma 4 31B, a 262,000-token context, completely free on Open Router as of today. I will say something off the bat: I can almost guarantee this model is better than GLM 5.1, for example. It's better than Alibaba, in my opinion, and it's completely free. Not only that, but this free version right here is actually pretty fast on Open Router, in Open Code. I just built the entire website you saw before in 10 to 15 minutes with this model. Now, one thing I have to call out, though, from Google. I have to say I'm super disappointed by this. It's been almost a week since this model was released. I've been checking every single day to see if the model is available here inside Antigravity, and it's still not available. They have GPT-OSS right here. They have Opus 4.6, Sonnet 4.6, Gemini 3 Flash, Gemini 3.1 Pro. Where the hell is Gemma? Why is Gemma not inside here? Surely it makes sense for Google to have their cheapest model inside Antigravity. They literally have GPT open source. Where the hell is Gemma? Sort it out, Google. Honestly, I don't understand what the hell they're playing at. With that being said, guys, let's jump into things. 
So, first of all, you want to make sure that you're on the latest version of Open Code. So, I'm just going to write Open Code. So, it's this command right here: opencode models --refresh. Then, you want to run that. And then, what you can do is you can actually look for Gemma like this, and just press enter a few times, and make sure that you have Open Router Gemma 4 31B IT free. This model right here is the model that you need in order to use this entire coding system completely for free, guys. There's no cost here. Zero. Absolutely crazy stuff that this is even a thing. Not only that, but if you have a decent computer, you can run this on your local system and use it inside Ollama. Now, unfortunately, my Mac is not powerful enough. The next Mac that I buy, I promise you guys, I'll get one that is powerful enough, right? But, this one here is only 24 gig of memory. So, unfortunately, I cannot run the new Gemma models on it. The next time I get a Mac, probably in about 6 months, I would guess, I will make sure that I get one that is powerful enough to run these models locally. This is a huge step forward for open source, guys. I'll tell you why: because previously, these models were just nothing. They were complete trash, and only things like GLM, and also Qwen 3.6, were even close to being at, I'm just going to say, Opus level. It's cliche, I understand, but close to Opus kind of level. They're not Opus level, obviously, but they're close to it. They're as close as open source models get. The problem with these models is just how large they are. This model here, for example, is 397 billion parameters, right? Which means you need 397 gigabytes of memory, as far as I understand it. I'm not an expert on these things, but I'm pretty sure that's what it means, right? Which is absolutely crazy. But, as of today, you can run Gemma 4 31B if you just have a normal computer, right? So, a lot of people will have this kind of memory, right? 
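The speaker's rule of thumb ("397 billion parameters means 397 gigabytes of memory") holds roughly at 8-bit precision, since weight memory scales as parameter count times bytes per parameter; quantizing to 4-bit halves it, which is why a 31B model can fit on consumer hardware. A quick sketch of the arithmetic (the function name is illustrative, and this ignores KV cache and runtime overhead):

```python
# Rough memory estimate for an LLM's weights alone (no KV cache, no runtime overhead).
# Assumption: weight memory ~= parameter count x bytes per parameter.
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight footprint in GB at a given quantization level."""
    bytes_per_param = bits_per_param / 8
    return params_billion * bytes_per_param  # 1B params at 1 byte each ~= 1 GB

# The ~397B model mentioned in the video, stored at 8-bit: ~397 GB.
print(weight_memory_gb(397, 8))  # 397.0
# A 31B model quantized to 4-bit: ~15.5 GB of weights.
print(weight_memory_gb(31, 4))   # 15.5
```

Actual requirements run higher in practice, since the context cache, the runtime, and the OS all share the same memory, which is consistent with the speaker's 24 GB Mac struggling with it.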
Um, my computer upstairs, for example, has 128 gigabytes of memory. I could easily run this model for free on Ollama on that computer, right? Now, I don't, because I'm using my Mac all the time, but just so you know, if you have a, I don't know, RTX 4090, probably even a 3060 or 3090, something like that, you will most likely be able to run this model completely for free, right? So, all you do is you get Ollama, you get the model, and then you connect Open Code and Ollama together, and you can run this for free. Now, this is the prompt that I did. This is just my normal school community prompt. I'll be doing more tests, more in-depth tests. I'll build a SaaS with this. I'll build an Astro website with this. But, this is always the first test I do. If a model cannot pass this test properly, in my opinion, there's no point continuing to test it. However, it managed to smash it. Now, one thing I have to say is it did stop immediately, and it said, "I can't even find my Next app. Where is it?" This is a test to see how good models are at just continuing with things, right? Because the prompt says, "You are inside a new Next.js project." But, there is no Next.js project, right? So, this is a test to see how good a model is at dealing with this problem, and whether it will just continue or if it will ask for help. So, Gemma asked for help. It said, "Look, there is no project. What the hell do you want to do?" So, I just said, "Start from scratch." And I said, "Don't use superpowers." I don't really want it to use, um, skills. I just wanted to do a base test. So, I said, "Don't use superpowers. Just get on with it." And then, bang. Look at this. It just started building, started building, started building. Really, really nice build originally. Really good technical build, which is one of the main things I have to say. And then, it said, "I've successfully initialized." And I didn't like this word, initialized. That means it's half-assed it, basically, right? 
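The "connect Open Code and Ollama together" step isn't shown on screen. As a sketch of one common approach (not taken from the video): Ollama exposes an OpenAI-compatible API on localhost port 11434, and OpenCode can be pointed at it through a custom provider entry in its JSON config. The provider key, model tag, and exact field names below are assumptions based on OpenCode's documented config format and may differ in your version:

```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "gemma-31b": {
          "name": "Gemma 31B (hypothetical local tag)"
        }
      }
    }
  }
}
```

After pulling the model with Ollama and restarting Open Code, the local model should then appear in the model picker alongside the Open Router entries.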
Which is normal. And I said, "Start the site." And then, there was a little problem where it was going to /en/en/en, which it fixed. And I said, "Needs a header and a footer, and also just, like, 100 times more detail, then we're done. Thanks." Really simple prompt again. Bang. Header. Bang. Footer. Okay, let's make some updates. Let's make it more detailed, etc., etc. Look at this. Bang. Bang. Bang. You could say again, like, "Keep making this more detailed. Especially, make the home page wow people, right?" And then, it's just going to go. It's going to continue. It's going to keep improving. So, I will say with this model, it's not very good at just running with it and building the entire thing from A to Z completely from scratch without you prompting it a second time. But, that's fine. I don't mind that. If you have to prompt it two times, three times, it's free. Like, what do we care about bloody prompting it a couple of times, guys? Let's be honest. So, this is what I came up with, like I showed you at the beginning of the video. The technical build is perfect. There's pretty much not a single dead link anywhere, which is really rare for a model. Not only that, it's done things like... Look at this. You never see little animations like this, right? If you gave this the ability to find images online or generate its own images with Nano Banana Pro, which is really easy to do, you just add it to a skill, this would look even better. So, imagine there were actual images here, which I'll do a test on that as well. Uh, let's just see if the Italian works properly. So, let's go here. Let's go here. Let's go here. Okay, Italian works perfectly. Oh, this is in English though, actually, which is a bit of a problem. That's okay. All this means is that it didn't translate all of the content, right? It just translated some of the content. Let's see how the Italian is. So, "noleggio auto lusso per matrimonio" (luxury wedding car rental) in this place. Fine. 
Not bad at all. I would say this is minus points for not translating everything. I want to be completely open and transparent with people. Normally, I would expect this to have translated everything, but it didn't. This is interesting. This is a Claude thing. Claude does this all the time. So, you know, maybe they trained this on... Wow, look at that. Maybe they trained this on Claude's outputs, similar to how the Chinese open source models did. But, look at this quality. Jesus. And the fact that it's managed to put Salerno down here, which, if you don't know, Salerno is like the beginning of the Costiera Amalfitana, the Amalfi Coast, in Campania, right? So, it's really interesting that it's got that kind of detail, that level of detail. It's got nice animations. Let's see how this looks on mobile. Let's see if it's completely messed up on mobile. It's actually not completely messed up on mobile. It's actually fine on mobile as well. Look at that. It's added this thing at the very top. Now, this, I'll tell you right now, this is a hallmark of Claude, right? When I see this, I think "coded by Claude," right? So, that's very interesting that it's put that. My guess is Google has trained this model on the output of Sonnet 4.6 and Opus 4.6. That's just my opinion. It's not an accusation, not legal advice, all that good stuff, but in my opinion, it looks like they have actually just done the thing that the Chinese models did as well, which is train their models on the output of Opus 4.6. I don't really care though, cuz at the end of the day, this model is small enough to run on a bloody computer, a normal computer, probably your computer (go and look right now at how much memory you've got), and it builds as well as Opus. The technical build here is perfect. I need to see how this builds a SaaS and whether it's possible, but I believe it would be. 
Just before we end the video, guys, just a quick shout out from our sponsor: me. This is Harbor SEO.ai. It's a tool that I've been working on for about 2 years now, but in the last 3 months we've shipped more updates than were shipped in the first 2 years of this product. For example, you can now scan your website for site health and technical SEO issues and then fix them automatically with Harbor AI, which will literally inject metadata into your website and one-click fix your entire website metadata. Not only that, we recently updated the writer to increase its quality. Now it uses Sonnet 4.6. And the pricing, guys, it's... Oh my god, look at that. 431 clicks. What the hell? Crazy. That's gone up like 150 in just a few days. So 107 pages published, 37,000 impressions, 431 clicks. This is getting real clicks for real businesses and helping people make sales every single day. Now the pricing is $29 a month for 25 articles a month. It's more than fair. Unlimited articles, unlimited research runs; $49 a month for 50 articles a month with unlimited sites; or $99 a month for 100 articles a month. You will not get better than this product right here. I'm telling you. This is good for a SaaS project. This is good for any website, any business. It can scrape anything and write context-aware content for your business. It doesn't matter what CMS you're on. It doesn't matter where you're hosted. This thing will write amazing content for you, and we have huge ideas coming, such as Agency, which is coming very, very soon, which will basically allow you to run an entire agency in a box: write content for people, get backlinks, create pages for people, and also content clusters, topical authority, spying on their competitors, and much more, for $29 a month or 29 euros a month. There's a link to Harbor in the description and in the pinned comment. Thank you so much for watching, guys. 
If you are watching all the way to the end of this video, you're an absolute legend and I'll see you very very soon with some more content. Peace out.
