I've Made a BIG Mistake

Austin Evans | 00:13:39 | Apr 30, 2026
Austin Evans weighs the appeal and downsides of smart Ray-Ban glasses, praising their usability and features while noting social friction and speaking candidly about past sponsorships, before reflecting on wearability and day-to-day practicality.

Austin Evans tests the Even Realities G2 smart glasses, weighs privacy risks against usefulness, and questions whether Meta’s privacy controls are enough.

Summary

Austin Evans dives into the world of smart glasses with the Even Realities G2s, clearly not sponsored but driven by genuine curiosity. He walks through the hardware—the heavy case, the dual displays positioned near the temples, and the prescription fit—before explaining why these glasses feel uncomfortable and underwhelming for everyday use. He compares them to Ray-Ban Meta glasses, noting that the Ray-Bans still deliver the strongest feature set for his needs. Evans also tackles privacy head-on, detailing Meta’s public assurances and the tricky reality of AI review processes, including a pivotal interview with Meta’s VP of AR, Alex Himel. He calls out the potential privacy pitfalls: an always-on camera, prompts sent to Meta AI in the cloud, and the ethical concerns raised by external contractors. Throughout, he pushes for real, user-facing privacy protections—an opt-out that goes beyond turning off wake words, plus a physical camera shutter or kill switch. The video culminates in a nuanced take: smart glasses are incredibly useful, but only with the highest privacy standards and trustworthy leadership from the companies building them. If you care about what wearable tech knows about your daily life, this episode is essential viewing. Evans also tees up other options like the HTC Vive Eagles and Samsung’s glasses with Google as future privacy-focused competition.

Key Takeaways

  • Ray-Ban Meta Glasses offer strong everyday usefulness, but privacy controls and data handling remain a core concern for users in real-world wear situations.
  • The Even Realities G2s deliver basic notification functionality but lack audio and complete feature integration, making them insufficient as daily drivers in Evans's experience.
  • Meta AI interactions can be reviewed by humans, raising privacy risks when a person could hear or read what you say to the AI, despite the device not transmitting photos/videos by default.
  • An LED indicator and explicit on-glass indicators help signal when recording, but the broader privacy risk persists when AI features are involved and could be cloud-based.
  • Alternatives like the HTC Vive Eagles promise a stronger privacy posture but are geographically limited, delaying a global rollout and a practical comparison for many buyers.
  • Price and design choices matter: Evans spent about $758 for the glasses with prescription lenses plus $100 for the sunglass attachment, highlighting total cost as a barrier.
  • Industry pressure and competition (Google, Samsung, HTC) are seen as necessary to drive Meta toward stronger privacy guarantees and user trust.

Who Is This For?

Tech enthusiasts weighing the real-world trade-offs of smart glasses, especially those who care deeply about privacy, data handling, and the ability to trust wearable tech in daily life.

Notable Quotes

"No, Meta does not have access to any photos or videos you take on the glasses under normal circumstances. Those stay on your glasses and phone until you decide to post them."
Meta's privacy claim is central to Evans's discussion of data handling.
"If people don't trust these glasses, they will be a failed product."
Alex Himel’s line underscores the trust-first requirement for success.
"The screen is kind of up above where your eye line is."
Illustrates the misalignment between display placement and natural gaze.
"The lack of camera is not something that really bothers me that much, but my big problem is a really simple one. I don't think these glasses do enough to justify their existence."
Evans’s core critique of the Even Realities G2s.
"There is a physical power switch on the glasses... you can just fully shut them off."
Noting a potential safety/privacy control feature.

Questions This Video Answers

  • How do smart glasses like Ray-Ban Meta Glasses handle privacy and data when AI features are active?
  • What are the practical privacy controls available on Meta’s smart glasses today?
  • Are there real alternatives to Meta’s glasses that prioritize privacy and local processing?
  • Why do smart glasses still struggle with comfort and all-day wearability for most users?
  • Will the HTC Vive Eagles or Samsung's Google-powered glasses become the privacy standard in 2026?
Tags: Smart glasses, Meta Ray-Ban glasses privacy, Even Realities G2, HTC Vive Eagles, Samsung glasses with Google, Meta AI privacy, LED recording indicators, wearables privacy policy
Full Transcript
Last year, I was in a meeting with a company who was showing me some things under NDA. And after the meeting, someone pulled me aside to say, "Hey, you're not recording with those glasses, right?" And look, I kind of get it. The whole idea is that these are supposed to look like normal glasses that just so happen to have a camera right here. As someone who wears glasses anyway, the pitch is so good. These have transition lenses that automatically tint when I step outside, which is killer. They also have legitimately really solid audio when I'm wearing them. They really just replace headphones for me pretty much everywhere except maybe a loud environment like a plane. And yes, they do have a camera. Now, I will say that getting POV photos and videos when I take the family to a theme park, or when I want to take a photo of the kids without pausing to break the immersion and pull a phone out... look, I understand that a lot of people think that smart glasses are a gimmick, but they are legitimately one of my favorite pieces of tech I use on a daily basis. But lately I actually haven't been wearing them all day. Sometimes I get to school pickup and I just switch to my regular glasses because I don't want to deal with getting weird looks or questions about my fancy AI glasses. Now, I should say this upfront: I have done sponsored work with Ray-Ban in the past about these glasses, but as I think it should be pretty clear by now, this video is the farthest thing from sponsored. I legitimately like this product, but lately I've had to second-guess that a bit. So, my first step is the obvious one. Surely there are other smart glasses out there, right? And you better believe there are. Not only are there no-name options like the ones we recently tried from Amazon, but other big companies have options, including these: the Even Realities G2s. Now, let me be very clear. You may have seen these before. However, this is not a sponsored video.
Now, to be fair, they do have my prescription in them, which slowed things down. If I was ordering it without a prescription, if I was one of those seeing guys, then I would not have had a problem. Everyone knows you're not seeing guys. Yeah, I'm not seeing guys. You're right. I'm... Goodbye, return policy. Oh, that was a nice little... You see how it just opened? Wait, I did it again. That was good. Ooh, that case is hefty. Wow. Ooh. Oh, that's a nice little mechanism. So, you can almost kind of see where the displays are. So, there's one right about here and one right about here. I will say those don't look like smart glasses, right? Do I look like a giant nerd? That was a joke. I was asking a question. Like, don't ask questions you don't want the answers to. So, I actually did pay the amount of money to get these in prescription. So, um, you keep saying that. How much have you paid? $758. $700. And then I spent another $100 on the sunglass attachment. So, as you can see, not a sponsored video. This is worse than a sponsored video cuz you spent that much on it. No, no, no. Because I bought them because I'm legitimately curious about the product. It was at this moment he knew... Okay, so I have been using the Even Realities G2 for about a week now, and I'm pretty much done with them. I find them to be very uncomfortable. First and foremost, these things, between the weight and the clamping force, are not comfortable. They're like little knots on the sort of the little top part of my ear. That aside, my big issue with them is actually a really simple one. They don't do anything. You get notifications on these glasses, right? That is the part that works well. "WE ARE BACK YET again to give Salvation Army another try." Why do I get notifications on my glasses? Jesus Christ. "Innovator dies." Wait, what? Huh? What? The screen is kind of up above where your sort of eye line is. Beyond notifications, I don't find a ton of use for these.
I think it says a lot that when you double tap the little button to turn on the screen, the first thing you see is news followed by stocks. Now, there are other features. So, there's a teleprompter mode. Oh, it's disconnected from my phone. Great. I can't even show you the prompter. The prompter doesn't work very well, long story short, because if I'm reading the screen, this is my eye line. And I'm pretty sure you can tell that I'm not looking at you right now. There is a to-do manager, which is one of the main features of the glasses, but it doesn't sync with Todoist, the app that I actually use. The thing is, though, I cannot believe these don't have any kind of audio. Like, I didn't realize the glasses were disconnecting from my phone. Yes, I could always put like a pair of earbuds in, but the idea of the glasses just sitting on your face is such a clutch feature. I mean, this couldn't be a speaker? The lack of camera is not something that really bothers me that much, but my big problem is a really simple one. I don't think these glasses do enough to justify their existence. Look, the Even Realities are fine for what they are, but I went looking for an alternative and just ended up being reminded of how good the Ray-Bans actually are for what I care about. Now, there are other options out there, such as the HTC Vive Eagles. On paper, these are exactly what I'm looking for: pretty much everything the Ray-Bans do, but with a bit more emphasis on privacy. But the catch is that these are only available in a few countries in Asia at the moment. Now, if I could buy them here without paying egregious eBay import pricing, I honestly would give them a shot. They're supposed to see a global release later this year, but until then, not really an option for me. So, to recap, I'm stuck with the product I love from the company I'm not sure I should really trust. Cool. That's... that's great.
So, if I can't leave, I figured it was worth it to at least take a closer look at what we've all agreed to when using these glasses. It's not exactly a huge surprise that there could be some privacy concerns. Now, I don't want to beat up on Meta specifically. Plenty of other companies have, shall we say, less than stellar track records when it comes to privacy, but with some of the issues in the past, I actually think that Meta does kind of need to be held to a higher standard, especially when we're talking about a pair of glasses that you're wearing on your face all day. Recently, a story surfaced that made me actually go and read the Facebook terms of service. It's a really fun bedtime read. I was specifically looking for the content around AI as it relates to the glasses. Let me actually read you a line directly from section 2C: "In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs. And this review may be automated or manual," parenthesis, human. Maybe I just don't want humans looking at the stuff I do while I'm wearing my glasses on my face all day long. So, I actually reached out to Meta to ask some questions about this. And to their credit, they actually did put their VP of augmented reality, Alex Himel, on the record for me. He was pretty straightforward. No, Meta does not have access to any photos or videos you take on the glasses under normal circumstances. Those stay on your glasses and phone until you decide to post them, the same way that Apple doesn't have access to your photos in the camera roll unless you upload them to like iCloud or something. But he also did confirm that yes, if you use Meta AI, then some of your interactions could potentially be reviewed by a person. But wait, what about all those links? Yeah, I want to be clear, because I think that this is very confusing at first glance, and more so because people don't necessarily know how these things work, right?
So, let me do a little example. I'm going to go ahead and start a video. So, I am now recording on the glasses. Now, as you should see here, there is a pulsing LED. Meta built this in to make sure that not only do other people around you know that you're recording, but there's also a little LED on the inside of the glasses where I can see it as well. This is pretty straightforward. This is the way that privacy should be done if I'm taking photos or videos: a big old LED letting everyone know. You should also let people know verbally, but the glasses are doing some work here. However, it's a little bit different if I want to use Meta AI. So, for example, if I say, "Please tell me all you know about Austin Evans the YouTuber." "Austin Evans is a tech YouTuber with over 5.6 million subscribers, and he's considered one of the top tech influencers today." So, what I just said there when I triggered it, that audio that was sent to Meta AI, that's the kind of stuff that is sent to the cloud and potentially could be reviewed by a person. When I asked about this, he compared it to the way that most AI services work, and he's actually not wrong. If you've ever used ChatGPT, Gemini, pretty much any AI tool, some small percentage of prompts are reviewed by humans to make sure that the AI is getting things right. But I think this is a little bit of a different thing. With ChatGPT, you're typing a prompt on purpose. I mean, I don't want some random person to read the very, very dumb questions I ask Gemini sometimes, but that's just text. I know what I'm sending. I can live with that. But with glasses on your face that have a camera and a microphone, the line between intentional and accidental gets a lot blurrier. I bring this up because earlier this year, a pair of Swedish newspapers published an investigation into Sama, a company that Meta contracted with in Nairobi, Kenya.
Workers there were tasked with reviewing footage captured by Ray-Ban Meta glasses as part of training Meta's AI, basically to ensure that the responses Meta AI gives are accurate and useful. But some of what they described seeing was, well, it was people's private lives. Moments that users almost certainly didn't realize were being sent anywhere, much less to a person on the other side of the world. So, this story is really messy. Supposedly, people's faces and identifying information were supposed to be blurred, but evidently that wasn't always the case. And Meta has actually stopped working with Sama for this kind of work, leading to over a thousand people losing their jobs. There's no way around this being a really, really unfortunate story. In the interest of being as fair as possible, maybe too fair, though, this isn't a uniquely Meta problem. Content moderation and AI training are brutal jobs, no matter who you're working for. And I think that the people doing that work deserve a lot more recognition and support than they get. Certainly not being unceremoniously laid off because they're no longer being worked with. But here's the thing. The practice of having humans review AI interactions is unfortunately still the industry standard, at least for now. AI agents reviewing other AI agents is rapidly becoming the norm, and after this disaster, I would definitely bet that Meta are fast-tracking getting humans out of the loop as quickly as they can. There is a solution here, though. Maybe just don't use Meta AI. None of this moderation or tagging applies if you aren't actively asking Meta AI a question, right? Well, technically, yes, but unfortunately, it's not quite that simple. I asked Himel directly about opting out of Meta AI entirely, and he pointed to the ability to turn off the "Hey Meta" wake word, which is true. Open up the app and you can do that. But that doesn't necessarily stop any accidental triggers. Say I'm sitting down and just doing this.
Suddenly, I have now triggered Meta AI and it is listening to what I'm saying right now. Even when you turn off the wake word, it's not that hard to accidentally trigger them anyway. What they really need is a proper AI kill switch like Firefox has added. Now, when I was asking about this, Himel did add that there is a physical power switch on the glasses. So, if you are genuinely in a situation where you're concerned about recording, you can just fully shut them off and of course still wear them as regular glasses. Fair enough. That's actually a switch I kind of forgot was there, but I would still very much prefer not only a Meta AI kill switch, but also I would love it if there was a physical camera shutter. No camera shutter... I keep forgetting which side it's on. I just don't think turning off your glasses solves the everyday reality of wearing these things for 12, 14 hours a day, right? Sure, there's certainly a portion of this which is just people not being used to having a camera on their face. I get that. I mean, that was a huge part of what killed Google Glass. But to this point, I've been looking at this whole story from my point of view. I know enough to be respectful when it comes to taking photos or videos on my glasses, the same way I would be if it were a phone or anything else. But with stories of people modding the LEDs or using these for nefarious purposes, can you blame other people for not trusting them? And while yes, I was able to speak to a VP at Meta and ask all of my questions about privacy, does the average person actually understand what's being captured? You know, while I was chatting with him, Alex Himel said something that did actually really stick with me. He said that if people don't trust these glasses, they will be a failed product. And I think he's absolutely right. That is kind of the whole point of these videos. I want to trust them, but are people really ready to trust Meta? Look, there are other options out there.
I tried the Even Realities. HTC is doing some interesting stuff with privacy, and Samsung does have glasses coming later this year with Google. And honestly, I hope they succeed, because competition and lawsuits are really the only thing that's going to push Meta here. What I do think is that the bar for privacy on something you wear on your face all day should be the highest in tech. Give me a real opt out, not "turn off the wake word": a legitimate kill switch for Meta AI when I don't want to use it, and a guarantee that no human is looking at what my glasses see when I do want to use it. You know, we made a video a little while back about Google and how much of our lives we've handed over in exchange for convenience. And I keep coming back to that, because this really feels like the same conversation with arguably higher stakes. Google knows your digital life: your searches, your emails, where you've been. And I think that smart glasses are the next step. I legitimately think that these are the most useful wearable form factor, even above a smartwatch. But now we're talking about a company that can see your physical life: what you're looking at, who you're with, what your home looks like. This is a case for the most restrictive privacy controls possible. And honestly, I think Meta knows this. If these glasses are going to be the future that they poured billions into building, earning trust isn't optional. It is the whole entire point. Now, if you want more from the privacy side of things, go check out our Google video. I think it really pairs nicely with this one. If you enjoyed, make sure to subscribe to the channel and ringling that dingling button. Excuse me, I'll see you in the next one, assuming our AI overlords allow me to come back again.
