A Disturbing Encounter
Khasif Hoda was waiting for a train in Cambridge, Massachusetts, when he had an encounter that he clearly remembers almost two years later. A young man with “weird-looking glasses” asked him for directions and walked away. A few minutes later, the man approached him again, addressed him by name, and asked about his work on minorities in India.
“There was something that was unsettling, if you will, about this whole interaction,” he said. Hoda soon found out that the moment was being recorded on those “weird” glasses. He was included in a video, viewed more than 1.2 million times on X, that demonstrated how easy it is to record and identify complete strangers in near-real time.
In that video, technology was used behind the scenes to identify Hoda – it wasn’t seamlessly incorporated into the glasses. But privacy groups fear it’s just a matter of time.
A Growing Concern
On April 13, more than 75 advocacy groups issued a letter saying the wearable tech represents a “dystopian privacy invasion” that is “a serious threat to privacy and civil liberties for every member of our society.” The letter was in response to reports that Meta plans to add real-time facial recognition to its smart glasses.
Objections to the tech include:
- Real-time facial recognition could be used by stalkers, scammers, and abusers to identify and track their victims.
- The glasses are already used to record people without their consent.
- The technology could be adopted by law enforcement to surveil immigrants, people of color, and nonviolent protesters.
Meta glasses don’t currently include tech that can identify people in real time, but they do allow for discreet recording, leading to a host of privacy and legal questions. Other companies also make wearable tech, but Meta’s high-profile partnership with sunglasses brands Oakley and Ray-Ban has attracted significant attention and scrutiny.
The company referred USA TODAY to previous statements saying it is still “thinking through options” regarding facial recognition, and that per its terms of service, “users are responsible for complying with all applicable laws and for using Ray-Ban Meta glasses in a safe, respectful manner.”
Legal and Ethical Dilemmas
The glasses – which allow users to send texts, make calls, listen to music, and even translate text – have an LED light that Meta says makes it clear the device is recording. Others say the light can easily be missed or disabled.
Is secret recording with Meta glasses allowed? Whether someone can record another person without their consent and post that video online depends heavily on where they are and the broader context of the situation, according to Woodrow Hartzog, a professor at Boston University School of Law.
“Even though people often say there’s no such thing as privacy in public, the truth is a lot more complicated,” said Hartzog, who works in privacy and technology law.
Some states require everyone involved in a conversation to give consent for it to be recorded, he said. And there are several ways someone can sue in civil court for an invasion of their privacy, even if the interaction happened in public. Someone could be held liable if they publicly disclose certain private facts, depict someone in a false light, misappropriate their likeness for a commercial purpose, or intrude upon their seclusion in a way that’s considered “highly offensive to a reasonable person,” Hartzog said.
Meta said in a statement that “as with any recording device, people shouldn’t use them for engaging in harmful activities like harassment, infringing on privacy rights, or capturing sensitive information.”
The Push for Regulation
As technology advances, the law is starting to catch up, albeit slowly, Hartzog said. In February, one California state senator introduced a bill that would specifically prohibit secret recordings with wearable devices like smart glasses in businesses and ensure the recording lights are always visible.
Smart glasses have also been banned in certain locations, including Philadelphia courtrooms and the public areas of some cruise ships. Airmen are not allowed to wear them while in uniform, according to the Air Force. And the University of San Francisco issued a public safety warning to students after a man wearing Ray-Ban Meta sunglasses was suspected of filming women with the intention of posting the videos online.
It’s not clear how often people actually get into trouble for using this technology. Julian Sarafian, an influencer and attorney for content creators, said he’s never dealt with a legal issue related to hidden camera videos, even though most creators don’t have people in their videos sign waivers granting them the right to use and monetize the footage the way media companies often do.
“There are some people out there who have tried to claim that their copyright is being violated and therefore they need to be compensated for it, but the success on that has been very minimal from what I’ve seen,” he said.
Identifying Smart Glasses
Smart glasses can be tricky to identify because there are many different styles and they look so much like regular glasses. Even if you do correctly spot a pair, it can be tough to tell if they are recording.
The glasses produced by Meta in collaboration with Oakley and Ray-Ban are among the most common, with more than 7 million pairs sold in 2025, according to eyewear giant EssilorLuxottica.
Google has announced it is launching its own pair of AI-powered glasses in collaboration with Samsung, Gentle Monster, and Warby Parker.
One tool that can help identify them is an app that detects nearby smart glasses using Bluetooth data. More than 100,000 people have downloaded Nearby Glasses, according to creator Yves Jeanrenaud.
Jeanrenaud stressed that he is not a professional developer, so the app isn’t perfect (false positives can occur when VR headsets produced by the same manufacturer are nearby, for example). And it doesn’t confirm someone is secretly recording: not all smart glasses have cameras and a wearer could be using them for something else.
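The article doesn’t describe how Nearby Glasses works internally, but the general approach it alludes to – matching Bluetooth advertisement data against a watchlist of known device makers – can be sketched in a few lines. The company IDs and vendor names below are placeholders, not the identifiers any real app or manufacturer uses, and a real implementation would need a BLE scanner library and radio hardware to supply the advertisement data.

```python
# Hypothetical sketch of matching Bluetooth Low Energy advertisements
# against a watchlist of smart-glasses makers. Company IDs below are
# made-up placeholders; real apps would use identifiers registered with
# the Bluetooth SIG and a live scanner to collect advertisements.

WATCHLIST = {
    0x01AB: "Example smart-glasses vendor",   # placeholder company ID
    0x02CD: "Example wearable-camera maker",  # placeholder company ID
}

def flag_nearby_devices(advertisements):
    """Return (address, vendor) pairs for advertisements whose
    manufacturer-specific company ID appears in the watchlist.

    `advertisements` is a list of dicts like
    {"address": "AA:BB:CC:DD:EE:FF", "company_id": 0x01AB}.

    Note the limits the article describes: a match can't confirm anyone
    is recording, and the same vendor ID may also belong to VR headsets
    or earbuds, producing false positives.
    """
    matches = []
    for adv in advertisements:
        vendor = WATCHLIST.get(adv.get("company_id"))
        if vendor is not None:
            matches.append((adv["address"], vendor))
    return matches
```

In practice the hard part is the scanning itself, not the matching: advertisement contents vary by device and firmware, which is one reason an app like this can only offer a heads-up rather than proof.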
Still, he hopes the app will give people some peace of mind.
“There are a lot of people that are affected by this kind of technology, and like I said as scholars have shown one time after another, women and various minorities, especially queer people, are a main target of such privacy-invading technology,” said Jeanrenaud, a professor at Osnabrück University of Applied Sciences whose research interests include gender and science, technology, engineering and math.
Privacy Concerns Intensify
Meta glasses have an LED light in the corner opposite the camera lens that should automatically turn on when the user is filming. But that light can be hard to spot, particularly for people who are blind or have low vision, and can potentially be covered or disabled with the help of tutorials available online.
Toluwa Omitowoju learned just how hard this technology can be to detect after a stranger secretly recorded her while she was waiting at the airport in Washington, DC, in late 2025. As an owner of Meta glasses herself, Omitowoju said she noticed the man was wearing smart glasses, but she doesn’t remember seeing a light indicating they were recording.
The video ended up on social media, prompting friends and even strangers to ask her about it.
“This guy has exposed me to where people feel like they can just come up to me and shove things in my face,” she said. “So, I felt very violated.”
Omitowoju said she still sees value in smart glasses, noting that they can help people who are visually impaired like her grandfather. But the incident underscored the potential dangers.
“There needs to be more thought put into how we can protect – especially women – but just protect people and protect our rights in terms of our image being used without our permission,” she said.
The Threat of Facial Recognition
Facial recognition technology amplifies those privacy fears. The New York Times reported in February that Meta is planning to integrate facial recognition technology into its smart glasses, drawing even more scrutiny from advocates and lawmakers.
U.S. Sens. Ron Wyden, Jeff Merkley, and Edward J. Markey warned the company’s sweeping access to personal information makes the technology “uniquely dangerous” and the deployment “would erode longstanding expectations of privacy in public spaces.”
“In the hands of a bad actor, this technology could be a remarkably powerful and dangerous tool,” the senators, all Democrats, wrote in a letter to Meta Chair and CEO Mark Zuckerberg.
Hartzog said lawmakers need to create more specific rules tailored to the unique threat posed by the combination of facial recognition and nearly undetectable surveillance technology before it becomes ubiquitous. Those laws need to target not only people using the technology, but also the companies behind it, he said.
“A lot of these conversations tend to focus on, ‘Is it OK to use these glasses or this tool to surveil other people?’” he said. “What often gets lost in this conversation is, ‘Is it OK for companies to design these really socially hostile tools in ways that will foreseeably lead to massive violations of privacy?’”