This was a huge mistake.

Asmongold TV | 00:28:43 | Mar 24, 2026
Chapters: 6
Introduces the reaction to Nvidia’s DLSS5 reveal, highlighting a chaotic mix of hype and backlash around its ability to dramatically alter visuals.

Asmongold argues Nvidia’s DLSS5 rollout is a misfire, overreaches artistically, and risks overruling developer vision for pixel-perfect realism.

Summary

Asmongold reviews Nvidia’s DLSS5 reveal at GTC, arguing that the tech promises cinematic fidelity but delivers questionable artistic outcomes. He points out the mismatch between DLSS5’s hyperreal ambitions and how real games are directed, lit, and styled by artists. He cites Digital Foundry’s coverage and notes the strained PR around the faces and characters shown in demos. The streamer explains how DLSS5 operates—upscaling, frame generation, and a post-processed look that can distort characters and lighting—especially in photorealistic games versus stylized titles. He uses examples like Hogwarts Legacy, Starfield, and Resident Evil Requiem to illustrate where the tech looks impressive and where it breaks the art direction. The video also questions the practicality of DLSS5 for mainstream players who prioritize gameplay over hyperreal visuals, and it critiques Nvidia’s two-card setup and the idea that this tech will be ready for single-card use by launch. Throughout, Asmongold emphasizes developer control and artistic intent as the decisive factors for whether DLSS5 will be a benefit or a burden to games. He finishes by suggesting that future game design will need to accommodate AI-assisted upscaling rather than letting it dictate visual direction.

Key Takeaways

  • DLSS5’s post-processing can dramatically alter lighting and facial features, making characters look contoured or overstyled compared with the game’s original art direction.
  • Digital Foundry notes that DLSS5 isn’t yet practical on a single card and initially runs on a separate RTX 5090, raising questions about real-world usability at launch.
  • Developers retain control over how DLSS5 is applied, but this control is often insufficient to prevent unwanted shifts in style and lighting in many games.
  • Hogwarts Legacy and other photorealistic titles show how DLSS5 can overcorrect textures and wrinkles, producing uncanny results in faces and fabric alike.
  • Fans and critics argue that the most successful games are those designed with the technology in mind, rather than retrofitting it after the fact.
  • Gamers who favor performance-first titles (Fortnite, Roblox, Minecraft) may not benefit from DLSS5’s cinematic ambitions, limiting its mainstream appeal.
  • Nvidia risked overselling DLSS5’s early capability at GTC; the consensus is that the tech is impressive but not yet ready for broad, artistically safe deployment.

Who Is This For?

Essential viewing for PC gamers, indie and AAA developers, and hardware enthusiasts who want to understand how AI-assisted upscaling affects art direction, lighting, and game design beyond raw frame rates.

Notable Quotes

"Total crash out. Massive, insane crash out. Once again, Nvidia has revealed the future of gaming and graphical fidelity. And once again, audiences are rejecting it."
Opens with the core criticism that Nvidia’s reveal failed to land with viewers.
"The faces and the character models look like [ __ ]. It’s rough because Capcom spent a lot of time making Leon and Grace look the way that they look."
Highlights perceived misalignment between DLSS5 outputs and character design.
"DLSS is basically changing up lighting, colors, tones... and those devs are in many cases singing the praises of the technology."
Notes how DLSS5 is framed as developer-friendly, yet practical results vary.
"If you have a veneer of hyper real over something that itself is not hyper real... it looks really weird."
Describes the uncanny effect when photoreal post-processing clashes with stylized visuals.
"Developer control is king and that's the problem with DLSS5; it does not let them do that quite clearly."
Emphasizes that post-release control over tech is insufficient for artistic intent.

Questions This Video Answers

  • Does DLSS5 actually improve gameplay performance, or does it primarily affect visuals in ways developers may not want?
  • How does DLSS5 impact artistic direction in games with strong styling like World of Warcraft or anime-inspired titles?
  • Why did Nvidia showcase DLSS5 at GTC with a two-card setup, and when will single-card support be available?
  • Can game developers design around DLSS5 to enhance rather than erode their art direction?
  • Which games currently demonstrate DLSS5's strongest or weakest artistic results, and what can we learn from them?
DLSS5, Nvidia DLSS, AI upscaling, Digital Foundry, Jensen Huang, RTX 5090, Hogwarts Legacy, Starfield, Resident Evil Requiem, Gaming graphics fidelity
Full Transcript
Total crash out. Massive, insane crash out. Once again, Nvidia has revealed the future of gaming and graphical fidelity. And once again, audiences are rejecting it. It's feeling well beyond fake frames. Now, what Nvidia wants their flagship top-of-the-line cards to be able to give you is fake characters, fake faces, and rendering that utterly destroys any and all artistic direction from gaming. The internet has immediately rejected it. There are endless memes, fiery criticism, and even development partners trying to downplay their usage. As far as feature rollouts and announcements go, this one has been a complete goddamn disaster. Now, of course, there is technical nuance in this story. We'll get to that. I actually think it's the worst. It is probably the worst feature rollout they've ever had. It's really bad. Well, let's talk about Nvidia's very bad week for PR. You know, Nvidia had actually been having a few real good weeks until now. You see, a recent report confirmed that last year they made up 94% of all shipped graphics cards, and their competitor AMD was down to only 5%. Got the 5% right here. Got [snorts] the 94% right here. Got both of them right next to me. What we see is that even as the overall GPU market is trending down, at least partially because of high prices, Nvidia is still set up well and is still dominating. And of course, that's before you think about all of their AI revenue, which is vast. Going into their annual GTC conference, their confident CEO Jensen Huang told the audience that they believed the revenue opportunity for custom-purpose AI chips could be at least $1 trillion in the next year. But there was something Nvidia didn't have at the conference: any type of new gaming GPU. Instead, they had a new version of DLSS. And DLSS has had brilliant use cases for performance. But now Nvidia has decided they want DLSS to make your games look better.
There's going to be a lot of deep problems here, but I think we need to talk first about how any of this [ __ ] even works. DLSS5 was revealed all at once via blog posts, comparison videos, and a showcase with beloved and trusted tech gurus Digital Foundry. Digital Foundry have certainly found themselves caught in the middle of this one. Now, here's how the thing works. Number one, it's really all about photorealism. You see, in the first incarnations, DLSS simply upscaled from a lower resolution. That essentially meant that you would get better performance. Yep. And then they added in frame generation, more frames, so that even if, say, your game engine is spitting out 60 frames a second, or maybe 30 frames a second, they can generate a bunch of frames between frames and get you a higher frame rate. I've always been a pretty big advocate; I think that frame gen and that kind of stuff is actually good for gamers. I know some people are like, "Oh, I hate this. I don't think it's a good thing. I don't ever use it." But I think that the average person does use frame gen and it improves their experience. For a game like, for example, Spider-Man 2, it's not a big deal, but you don't want to use this on Valorant or Counter-Strike, do you? I think that's clearly where the difference is. Now, their new model is going to, quote, "transform visual fidelity in games." All [music] in the service of, quote, "bridging the cinematic gap," because we all want our games to look like movies. Now, I say that because not all movies look the same. Movies have lighting artists. Movies have cinematographers. And a huge point I'd make is that generic photoreal would actually look terrible in film. That is not how films are actually shot. Lighting is faked and fudged all of the time, and it's done not to get around a problem, but actually with creative intent. But let's talk.
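The two-stage pipeline described above (upscaling for performance, then frame generation for smoothness) can be pictured with a toy sketch. This is purely illustrative: real DLSS frame generation uses motion vectors and a neural network, not naive per-pixel blending, and the function names here are invented for the example.

```python
# Toy illustration of frame generation: insert an interpolated frame
# between each pair of engine-rendered frames, so 30 engine FPS can be
# presented as roughly double the displayed frame rate.
# (Frames are modeled as flat lists of pixel intensities.)

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Blend two frames pixel-by-pixel; t=0.5 gives the midpoint frame."""
    return [a * (1 - t) + b * t for a, b in zip(frame_a, frame_b)]

def with_frame_gen(engine_frames):
    """Insert one generated frame between each pair of engine frames."""
    out = []
    for a, b in zip(engine_frames, engine_frames[1:]):
        out.append(a)                      # real engine frame
        out.append(interpolate_frame(a, b))  # generated in-between frame
    out.append(engine_frames[-1])
    return out

frames = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]  # 3 engine frames
print(len(with_frame_gen(frames)))  # 5 displayed frames
```

This also shows why frame generation suits slower-paced games better than Valorant or Counter-Strike: the generated frames are derived from already-rendered ones, so they add smoothness, not fresh input responsiveness.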
A great example of this that Bellular could use is if you look at a lot of old movies. Like, for example, a Charlton Heston movie from, like, the [ __ ] 60s or something like that: the night scenes are brightly illuminated. They're very brightly illuminated. Whereas now a lot of night scenes in movies are actually extremely dark. And the reason why they were illuminated is because there was a cinematic goal for that. I think the other really good example is this. Oops, not this one. There we go. This movie here, Excalibur, right? Obviously there's a certain amount of grandeur to it. Yes, it's real. It's real people doing this, but at the same time [snorts] there's a certain amount of fantastical nature as well about how this works. So if you make every frame super photorealistic, it would be weird, right? You know, shaders, right? So your textures sit there and your game engine spits out a frame. Then a real-time neural AI model looks at that frame. It analyzes the color and motion vectors, materials, lighting, etc. And once it understands everything in the frame, it then goes and does its thing. Right now, that basically means changing up lighting, colors, tones, all of that stuff. I don't want to say that this is like a direct filter. It is a tool that is directly interfacing with the model as presented by the devs. So you can have things like correct modeling on rain, right? When it hits skin, you can have light correctly fold around all the parts. I do think there's definitely a use case for the tool that they have, but right now it's just too strong. Basically, the opacity should go from like 90% to like 30%, and I think it would be a lot better. Maybe it's too aggressive. Yeah. All that kind of thing. And what it certainly does do is make things look way more detailed.
The materials do just look like they're extremely high-resolution photogrammetry, basically. And for the clearest showcase of this, just look at the Zorah tech demo. It shows it best, with an environment that is perfectly set up to showcase this technology's strengths. So when it's applied within that tech demo, where everything leans into the strengths of this, that's great. The art direction of that demo matches what the model can output. Now, where it gets a little bit more tricky is when we start applying it to games that were not made with this thing in mind. Exactly. That's where things get real rough. And this is another thing Nvidia should do: if they want this system to actually be good, you should have different types of functions from this system, similar to how you have different colorblind modes. Another parallel to this is Wuthering Waves. A lot of people play Wuthering Waves with a contrast filter on, because the game at a baseline is very bright, and if you turn the contrast up a bit and darken the screen a little, it looks a thousand [ __ ] times better. And I think it's a matter of being able to calibrate that in a way that makes it look good to you. Some of the more photorealistic games come from the likes of Capcom, Tencent, and more, and Nvidia has examples of applying DLSS5 to many of them. Optional? Yes, it is. Now, they say the devs have complete control over how and where it is applied. They can set the levels of intensity. They can make sure the color grading is maintained. And they can even mask off certain elements of the frame or the models so that the effect is not applied. And those devs are in many cases singing the praises of the technology.
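The developer controls described here, intensity levels plus masking off parts of the frame, and the earlier suggestion that the effect's "opacity" should drop from ~90% to ~30%, can both be pictured as a per-pixel weighted blend. This is a minimal, purely illustrative sketch: DLSS5's internals are not public, and the function and parameter names are invented for the example.

```python
# Toy illustration: blend an AI-restyled frame over the original, with a
# global intensity knob and an optional per-pixel mask so that regions
# (e.g. faces) can be exempted from the effect.
# (Frames are modeled as flat lists of pixel intensities.)

def apply_styled_blend(original, ai_output, intensity=0.3, mask=None):
    """intensity: 0 = untouched original, 1 = fully the AI output.
    mask: per-pixel weights; 0 protects a pixel from the effect."""
    if mask is None:
        mask = [1.0] * len(original)
    return [
        o * (1 - intensity * m) + a * intensity * m
        for o, a, m in zip(original, ai_output, mask)
    ]

orig = [0.2, 0.2, 0.2, 0.2]          # engine's artistically lit frame
ai = [1.0, 1.0, 1.0, 1.0]            # hyperreal AI re-lighting
face_mask = [1.0, 1.0, 0.0, 0.0]     # last two pixels masked off (a "face")
print(apply_styled_blend(orig, ai, intensity=0.5, mask=face_mask))
# masked pixels keep their original value; unmasked ones move toward the AI output
```

The streamer's complaint maps directly onto this picture: the demos behave as if `intensity` is pinned near 1.0 everywhere, overriding the engine's lighting instead of nudging it.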
Nvidia took comments from Todd Howard of Bethesda, Charlie Guillemot of Ubisoft, and Jun Takeuchi of Capcom on how this is about pushing cinematic, immersive experiences. Except that I also think a lot of people don't want things to be completely photorealistic. Look at how popular anime is now. Anime girls are not real. Unfortunately, they're not. I know. And the truth is that a game can have a certain type of style to it. Like, World of Warcraft has a style where some of the cinematics are kind of photorealistic, but the people in the cinematics are not really entirely human. And I think that Riot has the same approach as well. Realism is boring. Yeah. I think that whenever you make things one-to-one, as realistic as possible, in almost every circumstance it ends up being boring. After the reveal, those companies are now all finding themselves on the defensive, and that's because the faces and the character models look like [ __ ]. Okay, quick reality check on how YouTube production actually works. For every shot that you shoot in a project, there's tons of B-roll, transitions, and visual aids that you often can't or don't film yourself. And that is where today's sponsor, Storyblocks, fits in. It's one subscription that gives you unlimited downloads of stock video, music, sound effects, and, speaking of a company like this, I can't see how these people are going to be around in 5 years if people are able to AI generate this, right? And After Effects templates. I think those templates are a real thing for anyone who's trying to make their channel look and feel better. You'll get clean lower thirds, data visualizations, map animations, the kinds of things that take hours or days to build from scratch. You can just search for them, customize them, and drop them into your project. It's even a great way to level up your own skills because you can see how those templates are built.
We've used them before. I've kept my subscription for years. And it's just good to know that whenever a filming, sound, or editing problem comes up, you have a huge library to back you up and help you solve those problems. And everything is licensed. I wouldn't want to lick him. No copyright strikes, demonetization surprises, or hunting through free sites hoping to find something that is legit and won't throw up a copyright strike. So, if you're making content and you want to punch way above your weight on production, check them out: storyblocks.com/belly SVG. Try it. You'll see. I don't know. The internet's not being kind to the faces that Nvidia are showing off as a part of this reveal. Now, the very first person they show off is Grace Ashcroft from Resident Evil Requiem. Now, Requiem is a game that's already graphically very impressive. Look at some of the lower quality stuff; yeah, you can have some real borked-looking lighting. I'll say that I played this with the performance settings on a PS5 Pro. I found the game to be quite visually stunning, and one of the greatest things was actually the implementation of lighting. Just artistically absolutely on fire. Brilliant job. So, Requiem already looks fantastic, but then when you look at this comparison image, you see that it doesn't appear to just be lighting. Grace's cheekbones become more defined, her lips become fuller and get stronger lipstick applied, and her eyes become bigger. Hell, they even managed to make Leon Kennedy look bad. That is obviously a crime that we all know should be punished with immediate execution. And it's rough, because Capcom spent a lot of time making Leon and Grace look the way that they look. So, you could say that this is a disservice to them. When you look at it closely, it is undoubtedly way more detailed. And sometimes that detail does actually look really impressive.
And I'll say that in this specific case there's another big issue. A good example of this: if you guys ever played Classic WoW, did you ever notice that certain things from outside that time period, like the Burning Crusade deluxe edition mount, didn't really fit inside the Burning Crusade aesthetic? It was just so much higher resolution and so much higher polygon count that it kind of made everything else look bad. Or if you ever turned on the high quality water in Classic WoW and compared it to the low quality water, the high quality water basically makes the trees and the grass look worse because of the juxtaposition, right? Whenever you have 2026-level cinematic quality water right next to 2004-level triangles, it looks really bad. And whenever you have all the shitty water, though, it actually makes the entire experience look better as a whole. I think this technology is remarkably impressive. The problem is it's just not artistically applied. And that's fundamentally because the outputs of DLSS5 will be the outputs of DLSS5 as its model is trained. It is, from what we understand, trained off kind of super-high-detail photogrammetry stuff. So no wonder it looks amazing in the Zorah tech demo. But what you basically find is lighting that's trying to be as realistic as possible. Well, it's also making her hands... this is another issue, right? You're fundamentally changing it. You can see clearly, I guess actually maybe you guys can't from this, the knuckles and the hands. It's making her look different. I think that's the issue. Lighting can also imply things. For example, women contour their faces to make their faces look thinner.
And I think that effectively DLSS is contouring in a way that distorts the image of the characters. Plus, a lot you'll find in these models too, with Grace suddenly [music] having more. What you're basically finding is that the stylized lighting simulation, artistically crafted and applied in the game engine, is having this layer of "let's make it photoreal" applied after the fact, such that the characters are now lit in a slightly strange way that feels different to how they should feel. And that's because DLSS5 is not just applying some magic filter over the top of a frame. It will have been trained on many examples from other media, etc., etc. Now, it's not the case that the model says, "Oh, that's a woman, go into the Nvidia cloud and pull down a little PNG of some lips and put them on her." No, it doesn't work like that. All of this stuff is in the weights of the model. It's not going and retrieving these assets or anything. But I suppose you could say that the outputs of the model do approximate that, "approximate" being a very, very key word. I don't want to get the technical stuff completely backwards, which is quite easy to do because all this stuff is kind of technical. But to put a long story short, if you have a veneer of hyperreal over something that itself is not hyperreal, because it's a simulation, it looks really weird; things are going to feel a little bit off, a little bit wrong to the human eye. And that is... Yeah, because it's distorted. That's what's going on. That is why every scene that Nvidia have shown off with characters outside the Zorah tech demo basically looks weird. Specifically, the characters look like they pop straight out of the screen. But actually, the limits of the model's training go further than that. Hogwarts Legacy gives us an amazing example of how this tech isn't fit for purpose right now.
Because as soon as DLSS5 stops being applied to classically beautiful photorealistic models, such as, of course, myself and Leon Kennedy, which is presumably what the model was trained on, well, then it overcompensates like wild and it just has to make stuff up. And that's why you get things like the slightly cartoony style of Hogwarts Legacy leading to odd results. Look at this young man. His clothes gain texture and he gains about 20 years. All because the materials on his face have this hyper-exaggerated photoreal look. It's like what I was saying with the contouring. You can achieve this look through makeup, and basically what it does is apply that makeup look onto everything. And I think it creates a lot of very unintended consequences, especially with characters' faces. He looks like a 30-year-old cast in a teen drama, which is quite funny, but not what we're going for. Take a look at this older woman. In the original, the scene is lit by the wand from the right side, and her model matches the game's art style while clearly showing her age. But with DLSS5 on, her scarf and outfit achieve higher fidelity. They look real impressive. But her face is absolutely overridden by wrinkles. She looks like she's made of plastic. And the directional lighting is kind of gone. Her face now appears like it's lit from all sides. Even if it appears lit from all sides in a way that the model finds to be realistic, and perhaps that even is realistic, that may not be the artistic intent. There will be many times, say when making a film, where the natural light actually might be a bit of a problem, and you might want to put up, I don't know, something to block that light or diffuse it a little. The thing is, pure reality... Well, there's a lot of things, especially in shots where, let's say, a person is standing in front of the sun, or in front of a light source.
If you don't fix that light in post-production, or you add that light artificially, it will totally wash out the shot and make it completely unusable. So even though you might think, oh, this would look great, it definitely doesn't. Pure reality and a photoreal art style are actually different things. And what that means is that loads of existing games, maybe all existing games, will quite simply look like ass whenever something like this is used. At least for close-up things like characters, where we really do see the uncanny valley. It's a little bit different for big environment shots. Some of those, I will admit, do look quite impressive indeed. Now, the best of all, I think... he's mogged out... has got to be Starfield. Oh man, those faces have gained a lot of detail. All sense of directional lighting is gone. A man with a baseball cap seems to have a ring light pointed at him, because his cap no longer shades his eyes. Over in Oblivion Remastered, we see facial shading from sunlight above is kind of gone in some places. And in fact, it seems that this new tech kind of just deletes a lot of shadows from loads of the games it's used in. I think this is actually a really good point. I never even noticed this, that basically it just completely removes the lighting sources. And those shadows are a part of the artistic intent. So that's just gone, wiped away by an implementation of this technology. And even Digital Foundry's relatively optimistic coverage suggests people will have trouble with how this overrides artistic intent, and whether this tech has anything to offer games that have style rather than just pure fidelity. Because, of course, more detail is not necessarily better. Often in pieces of art, I mean literal art that someone might paint, or a frame in a film, you'll find less detail in places, not because they're lazy, but because they're trying to guide your eye. Because they're artists, and that's kind of their job.
And with that in mind, we really see how every one of these shots is just kind of proof that Nvidia's model doesn't understand those differences. But, you know, I think one of the biggest examples of this is Elden Ring. If you actually look at the pixels and the design of Elden Ring, if you zoom in on somebody like Godfrey, you zoom in on Godrick, you zoom in on, I don't know, the Fire Giant, the characters look like garbage. They look like absolute [ __ ]. But if you actually look at them in the lighting, inside the environment, with everything fit together, then it looks really good. Look, some audiences care about graphical fidelity, and pretending otherwise is silly. People love to see a new game with flashy graphics that they can believe in. It's how the biggest devs in the world sell their games. It's how Watch Dogs had all the Watch Dogs stuff happen. Crimson Desert's hype for its release this week is at least in part because it's a AAA open world game that looks goddamn good. But Nvidia are kind of telling on themselves with this technology. You see, it'll only be available to people with 5090s. And more importantly, as Digital Foundry explain, there's a slight issue: it doesn't actually work yet. You see, the whole DLSS5 process was running on a separate 5090 from the one running the game. There's two 5090s to run this. Yeah, I don't know about this one, guys. Now, if you've got a little bit of experience with, say, local AI inference, that will not be a surprise to you. Now, Nvidia says they'll get this working on a single card by the time it releases, but basically this tech was not ready to be shown off. It seems to have been rushed out to ensure there was something for Nvidia to show at GTC for gaming. Something to fire up audiences for what's next in graphical fidelity.
Yeah, it sure fired them up all right, but unfortunately for them, audience interests are actually changing. A recent news report shows that the next generation of gamers don't have any overlap with the kind of hyper-graphical games that Nvidia wants to sell cards with. They don't need... That's right. Yep. ...don't need to spend two grand on a graphics card and $70 on a game to get the experiences they want, because they're playing Fortnite, Roblox, and Minecraft. Because they're just stupid little kids. They're stupid little kids that want to play a game on a tablet. They don't need the game to have a million graphics. They're not even going to understand it. Yeah. I mean, I feel like that's how I was with games back in the day. I never got super involved with graphics. I didn't care about that. I just wanted to play the game. I think that's the way most people feel. I'll be right back. Give me a second. All right, I'm back. Ah, excuse me. Now we're back. We good. Sorry about that. Most current gamers are not playing those big hyperreal games. And it's not because of a lack of availability. Like always, most of this round of marketing won't really touch the games that you play in a meaningful fashion. All of this is Nvidia showing us the absolute cutting edge of what they think their tech can do. And you know what? I think a lot of people can appreciate the photorealism and the really good graphics of something like Expedition 33, or Red Dead Redemption 2 back in the day. Dude, the biggest one ever, in my opinion, was Final Fantasy 10. I remember when that got announced and I was just like, what the [ __ ]. That was a whole new level. You think Crysis? See, I didn't really get involved in that, but for me it was Final Fantasy 10. That was the big jump. Nvidia have done some really cool research here. They've made some really cool technology. It's just not really applicable to the artistic side of games.
And because of that, it feels massively oversold. And that means it all just feels untrustworthy. As a general rule, the first version of a technology tends to be a little bit of a disaster. I mean, cool, shiny, impressive, maybe even inspiring, but often quite evidently not quite there yet. We're seeing this happen here, and we've seen it before. It's the same thing with, say, ray tracing, back when Nvidia and EA collaborated to get it into Battlefield 5. The pitch back then was that yes, it may tank some of your performance, but man, it'll be worth it for the graphical effects. And some of the demos were pretty, some of the graphics were pretty, but back in 2018, the ray tracing was kind of rough. It was graphically expensive, and most people turned it off because they wanted to play their games, not purely look at them. But now things have changed. I mean, the thing is that whenever you're playing a competitive shooter, you don't want the lighting distracting you. You don't want that stuff happening. It's just the wrong genre. You should be putting this in games that are not reaction-based games. Developers figured out how to make these features actually serve their games rather than just slathering them across every surface. And you know what? Resident Evil Requiem is one of those games. And what I just said there is kind of the main point. In 2026, Requiem ships with ray tracing and path tracing settings, not because those things failed, but because developers figured out how to implement them on their own terms. So, dev implementation is king. And I think that's one thing a lot of people forget. Kingdom Come: Deliverance 2 is a great example of this. It's a game that is optimized incredibly well, and because of its great optimization, it performs in a way people wouldn't expect. On the opposite side, you have Monster Hunter Wilds. Holy [ __ ], what a [ __ ] that was.
And then you also had Crimson Desert. Crimson Desert has a lot of flaws, but performance and optimization seem to not really be one of them. Now, I think I've encountered a couple of slowdown areas. Actually, I think I've encountered one of them. But in a general sense... Dragon's Dogma 2 had the same slowdown issue. It did. And that's not Unreal. Yeah. Well, the problem is that people say, "Oh, Unreal is the problem." I don't agree with that assessment. Black Myth: Wukong was on Unreal, and I think that was fine. And so was Expedition 33, and that was fine too. I think it's a matter of it being optimized properly. And whenever you look at the way a developer optimizes a video game, you see the amount of tricks that they play on you, where, for example, everything that's not in your direct field of vision actually despawns. That's the way some games work. Developers having artistic control over the technology is king, and it always has been. And that's the problem with DLSS5: it does not let them do that, quite clearly. That is the case when you look at these demos. It takes the developer's finished frame and decides what it should look like. For some games that may look better, but in many cases it quite clearly does not. I think the other big problem is that it upscales faces at a much higher ratio than it upscales clothes and other parts of the environment. So it makes the faces look out of place, and it makes the rest of the game look bad, because of what I said before about the juxtaposition, right? One thing really stands out because it's way higher quality. And that's because the ML inference is not down to the dev team. It is down to Nvidia's model, and every piece of media fed into that, and how it's trained, how it's post-trained.
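The "everything outside your field of vision despawns" trick mentioned above is a loose description of visibility culling. Here's a toy 2D sketch of the idea, assuming a simple horizontal view cone; the names are invented for illustration, and real engines use full frustum and occlusion culling rather than a flat angle test.

```python
import math

# Toy visibility culling: keep only objects inside the camera's
# horizontal view cone, mimicking the "despawn what the player
# can't see" optimization trick.

def in_view_cone(cam_pos, cam_angle, fov_deg, obj_pos):
    """True if obj_pos falls within the camera's view cone.
    cam_angle is the facing direction in radians; fov_deg is the
    full horizontal field of view in degrees."""
    dx = obj_pos[0] - cam_pos[0]
    dy = obj_pos[1] - cam_pos[1]
    angle_to_obj = math.atan2(dy, dx)
    # Smallest absolute angular difference, wrapped to [-pi, pi]
    diff = abs((angle_to_obj - cam_angle + math.pi) % (2 * math.pi) - math.pi)
    return diff <= math.radians(fov_deg) / 2

objects = [(10, 0), (0, 10), (-10, 0)]   # one ahead, one beside, one behind
camera = (0, 0)
visible = [o for o in objects if in_view_cone(camera, 0.0, 90, o)]
print(visible)  # only the object in front of the camera survives
```

Everything filtered out here would simply not be simulated or rendered that frame, which is why such tricks buy so much performance without the player ever noticing.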
And the problem is that can override the vision of the people who actually made the game, and that can homogenize what the player sees. And that's why there's been such an immediate pushback against the tech. It feels offensive to artistic direction. It is. But hey, it's not just offensive. It's artistic erasure. You're erasing the artistic direction by replacing it with an algorithm. That's what you're doing. And I think that in older games, like very old games, whenever you upscale them, you can have interesting and funny interactions. Sometimes it looks cool whenever you upscale something that's really low quality. But in a general sense, I don't think it's really something you should do. And I think what's going to happen in the future is you are going to have games that understand the way the upscaling technology works, and then design in a way that allows the upscaling technology to further their artistic vision, rather than just not designing for it at all. I think this will just become another vector of design that they use to effectively future-proof their games. That's probably what's going to happen. Ray tracing took 8 years to become something devs could use. Well, maybe 8 years from now, Nvidia will have found another advance, and we won't need to think about whether it's a tool anybody wanted at all. Okay. Well, um, good luck with the stock price, Jensen. It's going well for you. Best of luck. Fantastic. We love it when the numbers go up. And you know what number could go up? The price you'll pay for a game on PlayStation when it's on sale, as compared to one of your friends. This isn't some niche experiment in some random part of the world. No, it's happening in America as well. You can learn all about it in this video next. "Gaming is a national security threat." Wait, what the [ __ ]? What the [ __ ]? No. No, it's not. What do you mean? Let me go back. I'll link y'all the video.
Obviously, I've watched Bellular's videos forever.
