OPENAI COOKED? GPT 5.5 JUST DROPPED (ChatGPT's Mythos)
Discusses the shift from raw intelligence to a more usable and personable model, highlighting the new focus and blog context.
GPT-5.5 arrives with stronger personality and usability, but access and pricing steal the spotlight in early impressions.
Summary
Income Stream Surfers presents a first-hand take on the GPT-5.5 drop, weighing the hype against practical access and pricing realities. The creator references watching a Matt Berman stream and notes that GPT-5.5 isn't just about raw smarts: it emphasizes a more polished personality and a smoother workflow on your computer. OpenAI's blog post is parsed to highlight "our strongest set of safeguards to date" and claims that GPT-5.5 understands user intent faster and can carry more of the work itself. The discussion then pivots to real-world constraints: access remains limited for many creators, especially outside the US, and the rollout to Plus/Pro/Enterprise users in Codex and ChatGPT surfaces ongoing availability issues. The coder-focused angle is front and center, with agentic coding and Codex tests showing GPT-5.5 purportedly outperforming 5.4 in some contexts, plus the possibility of faster, more token-efficient results. Pricing is laid bare: API rates start at $5 per 1M input tokens and $30 per 1M output tokens, with a higher-accuracy Pro tier at $30 per 1M input and $180 per 1M output tokens, highlighting a potential paywall even as usage limits try to stay generous. The host plugs Harbor SEO.ai as a sponsor and teases a live test to see whether GPT-5.5 can beat Opus 4.7 on Harbor Build, a practical benchmark viewers will be watching closely. Overall, the video blends news, skepticism about access, and real-world testing plans, ending with a nudge to upgrade and a promise of a follow-up once access finally arrives.
Key Takeaways
- GPT-5.5 is marketed as the most intuitive model yet, with faster intent understanding and heavier autonomous work handling.
- Pricing for the API is $5 per 1M input tokens and $30 per 1M output tokens, with a higher-accuracy Pro tier at $30 per 1M input and $180 per 1M output tokens.
- Codex tests suggest 5.5 is more token-efficient than 5.4 in internal benchmarks, pointing to potentially lower token costs long-term.
- Access remains US-centric and slow for many creators, with the host unable to get model access despite others reporting availability for two weeks.
- A 400k context window is noted as a backward step, likely a cost-saving measure rather than a performance boost.
- OpenAI’s messaging emphasizes safeguards, while the host critiques the hype around “open claw” positioning and the mythos around raw intelligence.
- The host plans to test GPT-5.5 on Harbor Build to evaluate practical improvements versus Opus 4.7, signaling a hands-on benchmark approach.
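The quoted per-million rates are easy to sanity-check. As a rough sketch (the model identifiers and token counts below are illustrative, not official API names), per-request cost under those rates works out like this:

```python
# Cost calculator based on the API pricing quoted in the video.
# Note: "gpt-5.5" / "gpt-5.5-pro" are placeholder names for illustration,
# not confirmed API model identifiers.
PRICING = {
    "gpt-5.5":     {"input": 5.00,  "output": 30.00},   # USD per 1M tokens
    "gpt-5.5-pro": {"input": 30.00, "output": 180.00},  # USD per 1M tokens
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request under the quoted per-million rates."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a coding-agent turn with 50k tokens in, 10k tokens out.
base = request_cost("gpt-5.5", 50_000, 10_000)      # 0.25 + 0.30 = $0.55
pro  = request_cost("gpt-5.5-pro", 50_000, 10_000)  # 1.50 + 1.80 = $3.30
print(f"gpt-5.5: ${base:.2f}, gpt-5.5-pro: ${pro:.2f}")
```

At these rates, the Pro tier is exactly 6x the base tier for any fixed token mix, which is why the host treats it as a separate cost class entirely.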
Who Is This For?
This video is crucial for developers, coders, and AI enthusiasts who want to understand the practical implications of GPT-5.5, including access hurdles, pricing, and real-world coding performance. It's especially valuable for those considering Codex and Harbor Build experiments.
Notable Quotes
"GPT 5.5 just dropped and people are saying that it's absolutely crazy."
—Opening hype around the release and community reactions.
"GPT is so it's going for the open claw angle already like everything else."
—Critique of how the launch messaging mirrors other AI marketing moves.
"400,000 context window is an interesting step backwards."
—Host questions the rationale for reducing the context window.
Questions This Video Answers
- How does GPT-5.5 compare to GPT-5.4 in real-world coding tasks?
- What are the OpenAI GPT-5.5 API pricing details and tiers?
- When will GPT-5.5 be available in Codex and for API access outside the US?
- What is Harbor Build and can GPT-5.5 improve its performance over Opus 4.7?
- Is GPT-5.5 worth upgrading for developers already using GPT-4 or Opus?
Full Transcript
Hey guys, what is going on? Welcome to this video. GPT 5.5 just dropped and people are saying that it's absolutely crazy. I was just watching this Matt Berman stream. Uh, shout out to Matt Berman. But basically, he's had access to this model for the last two weeks. It's an incredible model, but there's something different about this launch. Not going for raw intelligence. They've improved the personality of the model. Blah blah blah blah blah. So, let's just read the blog post. We're releasing GPT 5.5, our smartest and most intuitive-to-use model yet and the next step forward to uh a new way of getting work done on a computer.
GPT is so it's going for the open claw angle already, like everything else. GPT 5.5 understands what you're trying to do faster and can carry more of the work itself. It excels at writing. Okay, that's good for Harbor. I'll probably try this out in Harbor. I'm just going to skip some of this because it's all just uh open claw bait. We are releasing GPT 5.5 with our strongest set of safeguards to date. So this is their, this is our version of Mythos um wording. You can just see what they're going for every single time.
It's so interesting, honestly. So, the things that people care about, obviously: the benchmarks. That is pretty mad, honestly. That's pretty crazy. AC2.7% on Terminal Bench 2.0. Apparently Opus got 69. Uh, GPT got... uh, sorry, GPT 5.4 got 75%. So, I don't know what this is. This is some kind of like, cuz how is GPT 5.4 better than Opus in anything? It's just not. Um, GDPval, sure. I don't really care about benchmarks, to be honest with you. Let's just see if we've got it available in Codex just yet. Okay, so as usual, I don't have access to the model.
This is just classic. Like, people have had this for two weeks and I can't even get access to it. So yeah, guys, I don't have access to this model just yet. It will probably take a little bit of time, to be honest with you. Normally OpenRouter is pretty quick with it. So let's see. Still nothing from OpenRouter even, which is pretty, pretty mad, honestly. So let's just keep reading. Agentic coding: GPT 5.5 is our strongest agentic coding model to date. You better bloody hope so, honestly. Um, on expert SWE, our internal frontier eval of long-horizon tasks with a median estimated human completion time of 20 hours.
5.5 also outperforms 5.4. You wouldn't be releasing it if that wasn't the case. I'm not seeing anything like completely mad just yet. Um, we will kind of wait and see if anything crops up that's just crazy. Um, but yeah, nothing striking me as like absolutely mad as of now. We should get this update pretty soon, I would guess. Normally, it's fairly soon after the release of the model. Often Europe is way, way, way behind um the US, mainly because of, you know, privacy and laws and GDPR and that kind of stuff. Again, we're seeing improvements across the board from GPT 5.4.
Um, but I, you know, I need to be able to test the model for myself to actually see whether uh this is up to par or not. So let's see: in Codex, GPT 5.5 is available for Plus, Pro, Business, Enterprise, Edu, and Go plans with 400k. 5.5 is also available in fast mode. So it looks like I actually have to upgrade. So we'll try that now. Okay. So this should be a paid account, uh, if I'm not mistaken. We'll just check. By the way guys, huge shout out to Claude, for every single time there is an OpenAI drop.
They are there with a new drop as well. I don't even know what this is, but I just find it hilarious that every single time OpenAI drops anything, they uh they also drop something, which is hilarious. Okay, guys. So, despite trying for the last kind of 20 minutes, I can't get access to the thing. Obviously, some people have access. It's always in the US. I don't know why we have to get left behind, but it is what it is. So GPT is rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT and in Codex, and GPT 5.5 Pro is rolling out to Pro, Business... and last time they released the Pro model, it was unusable.
I expect this to be unusable as well, and the pricing was just absolutely absurd. But: API developers, GPT 5.5 will soon be available in the Responses and Chat Completions API at $5 per 1 million input tokens and $30 per 1 million output tokens. That's for GPT 5.5 with a 1M context window. This is more expensive than Opus 4.7. Very, very interesting from OpenAI. This is going to have to be an absolutely mind-blowingly good model for this to be worth it. Honestly. We will also release 5.5 Pro in the API for even higher accuracy, priced at $30 per 1 million input tokens and $180 per 1 million output tokens.
I don't know what the hell they are doing there. While GPT 5.5 is priced higher than 5.4, it is both more intelligent and more token-efficient. In Codex, we have carefully tuned the experience so GPT 5.5 delivers better results with fewer tokens than GPT 5.4 for most users. I don't know if I believe that. Before we continue with the video, guys, huge shout out from our sponsor: me. This is harbor seo.ai. This is a project that I've been working on for a while, and it gets really, really good results for people. Basically, 224 pages published, 70,000 impressions gained, 720 clicks.
These are real clicks to real businesses, and people do absolutely love this tool. Go and check out harbor seo.ai. It's quickly becoming one of the best-priced but also best-performing SEO tools, AEO tools, and GEO tools on the market currently. It does blog posts, it does topical authority, it spies on your competitors, and it shows you the actual results as well. Harbor SEO.AI, there's a link in the description and in the pinned comment. And one of the tests I'm going to do of GPT 5.5 is whether or not it can one-shot the implementation of Harbor Build, and also whether or not it can power Harbor Build better than Opus 4.7, because if it can, that will be something that would be very, very interesting to me.
Couple of things on this, guys. Codex for a long time has had better limits for 20 bucks a month than, you know, Claude has had for 100 bucks a month. So if this model is even anywhere close to Claude Code, then this will 100% be worth using, especially for the price. You know, you can get a pretty good amount of coding done for 20 bucks a month. I can't speak to GPT 5.5 on that, obviously, but just generally speaking, because OpenAI is very, very behind in the race, the AI race, they know that one of the only ways to get people to use them is by giving very, very generous limits.
So I expect that to continue with GPT 5.5 as well, especially in the first couple of weeks. The other thing is, while this model is new, go and build something. Interestingly, nothing on OpenRouter yet either. I guess that's because it's not out in the API just yet. I already checked inside my OpenAI playground; I do not have access to the model anywhere. I'll just be waiting, guys. But, you know, I'll make another video later today. That's just the way it is with these models. I just wanted to make this video to kind of go through the blog post and tell people my gut reaction.
400,000 context window is an interesting step backwards. I'm not sure why they do that. I guess just to save money. I guess the compute over there for GPT 5.5 was so high that they actually had to cut it down. Also, GPT 5.5 is also available in fast mode, which uh, it's in Codex, if you didn't know. You can basically set it to um fast or standard. So, I can put this on fast. Uh, with GPT 5.4 I've been doing pretty well in Codex. I haven't used it in a while, as you can see here, but when Opus 4.6 was playing up, I did sometimes jump onto Codex.
And I do like the Codex app as well, although I do think Claude Code's app is significantly better. For now, guys, I have to leave the video there because I just don't have access to the model, which is just so annoying. Um, I would love to get access to the model, but it seems like I'm always just waiting around, and I don't really want to wait around, and it's just frustrating. But that being said, thank you so much for watching the video. If you are watching this video, go and check out harbor seo.ai. And if you are watching all the way to the end of the video, you're an absolute legend.
And I will see you, probably in the next one, in a couple of hours when I finally get access to the model. Thank you so much for watching.