Get ready for your evil twins
Earlier this year, researchers from Lancaster University and UC Berkeley published a chilling academic study. Using an advanced form of AI known as a GAN (Generative Adversarial Network), they created artificial human faces (i.e. photorealistic fakes) and showed these fakes to hundreds of human subjects along with a mix of real faces. They found that this kind of AI technology has become so effective that we humans can no longer tell the difference between real people and virtual people (or ‘veeple’ as I call them).
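For readers curious about the mechanics, here is a minimal sketch of the adversarial training loop behind a GAN, written in PyTorch; the toy layer sizes and learning rates are my own assumptions and bear no resemblance to the scale of modern face generators such as StyleGAN:

```python
import torch
import torch.nn as nn

# Toy networks: the generator maps random noise to a flattened "image",
# while the discriminator scores whether an image looks real.
generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, 64)

    # 1) Train the discriminator to separate real faces from generated fakes.
    fakes = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fakes), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce fakes the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two networks improve in lockstep, and training succeeds precisely when the fakes are good enough to fool the discriminator, which the study suggests is now also the point at which they fool us.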
And that was not their most terrifying finding.
You see, they also asked their subjects to rate the “trustworthiness” of each face and found that people rated the AI-generated faces as significantly more trustworthy than the real ones. As I describe in a recent academic paper, this result makes it very likely that advertisers will widely use AI-generated humans in place of human actors and models. Working with virtual people will be cheaper and faster, and if they’re also perceived as more trustworthy, they’ll be more persuasive too.
This is a troubling direction for print and video advertising, but it’s downright terrifying when we look at the new forms of advertising the metaverse will soon unleash. As consumers spend more time in virtual and augmented worlds, digital advertising will transform from simple images and videos to AI-powered virtual people that engage us in promotional conversations.
Armed with a vast database of personal information about our behavior and interests, these “AI-powered conversational agents” will be highly effective advocates for whatever messages a third party pays them to deliver. And if this technology goes unregulated, these AI agents will even track our emotions in real time, monitoring our facial expressions and vocal inflections so they can adjust their conversational strategy (i.e., their sales pitch) to maximize persuasive impact.
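To make that feedback loop concrete, here is a deliberately simplified sketch of how such an agent might adapt its pitch; every name here (the strategies, the engagement score, the sensor inputs) is hypothetical, invented purely for illustration:

```python
import random

# Hypothetical persuasion strategies; a real agent would draw on far richer
# behavioral profiles and language models.
STRATEGIES = ["social_proof", "scarcity", "flattery", "rational_appeal"]

def estimate_engagement(face_frame, voice_clip):
    """Stand-in for a real-time affect model reading expression and inflection."""
    return random.random()  # placeholder score in [0, 1]

def run_pitch(get_sensor_frame, deliver_line, turns=10):
    # Running estimate of how well each strategy is landing on this listener.
    scores = {s: 0.5 for s in STRATEGIES}
    for _ in range(turns):
        strategy = max(scores, key=scores.get)  # lead with the best-performing pitch
        deliver_line(strategy)
        face_frame, voice_clip = get_sensor_frame()
        # Blend the listener's observed reaction into the estimate and adapt.
        reaction = estimate_engagement(face_frame, voice_clip)
        scores[strategy] = 0.8 * scores[strategy] + 0.2 * reaction
```

The point of the sketch is the loop itself: sense, score, adapt. Each conversational turn is tuned by the listener’s own involuntary reactions, which is exactly why this feedback channel matters so much.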
While this points to a somewhat dystopian metaverse, these AI-powered promotional avatars would be a legitimate use of virtual humans. But what about the fraudulent use?
This brings me to the subject of identity theft.
In a recent Microsoft blog post, Executive VP Charlie Bell writes that in the metaverse, fraud and phishing attacks “can come from a familiar face — literally — like an avatar posing as your coworker.” I totally agree. In fact, I worry that the ability to hijack or duplicate avatars could destabilize our sense of identity, leaving us constantly unsure whether the people we’re talking to are the individuals we know or fakes.
Accurately replicating a person’s appearance and voice in the metaverse is often referred to as creating a “digital twin.” Earlier this year, Jensen Huang, the CEO of NVIDIA, gave a keynote address using a cartoonish digital twin. He predicted that the fidelity of these twins will increase rapidly in the coming years, as will the ability of AI engines to autonomously control your avatar, allowing you to be in multiple places at once. Yes, digital twins are coming.
That’s why we need to prepare for what I call “evil twins”: accurate virtual replicas of the look, sound, and mannerisms of you (or people you know and trust) that are used against you for fraudulent purposes. This form of identity theft will happen in the metaverse because it requires only a straightforward combination of technologies that already exist for deepfakes, voice emulation, digital twinning, and AI-powered avatars.
And the scams can be quite elaborate. According to Bell, bad actors could lure you into a fake virtual bank, complete with a fraudulent teller who asks for your information. Or corporate-espionage fraudsters could invite you to a fake meeting in a room that looks just like the virtual conference room you always use. From there, you could unknowingly pass confidential information to unknown third parties.
Personally, I doubt scammers will even need to be this elaborate. Meeting a familiar face that looks, sounds, and acts like someone you know is a powerful tool of deception in its own right. This means metaverse platforms need equally powerful authentication technologies that validate whether we’re interacting with a real person (or their authorized twin) rather than an evil twin fraudulently deployed to trick us. If platforms don’t address this issue early, the metaverse could succumb to an avalanche of deception and identity theft.
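What might such authentication look like? As one illustrative building block (my own sketch, not any platform’s actual API), an identity service could bind each avatar to a cryptographic key pair at enrollment, so that even a pixel-perfect evil twin fails verification without the private key:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical registry mapping avatar IDs to enrolled public keys; in
# practice this would live with a trusted identity provider.
enrolled = {}

def enroll(avatar_id: str) -> Ed25519PrivateKey:
    private_key = Ed25519PrivateKey.generate()
    enrolled[avatar_id] = private_key.public_key()
    return private_key  # kept on the user's device or secure enclave

def sign_session(private_key: Ed25519PrivateKey, avatar_id: str, nonce: str) -> bytes:
    return private_key.sign(f"{avatar_id}:{nonce}".encode())

def verify_session(avatar_id: str, nonce: str, signature: bytes) -> bool:
    try:
        enrolled[avatar_id].verify(signature, f"{avatar_id}:{nonce}".encode())
        return True   # a real person, or their authorized twin
    except (KeyError, InvalidSignature):
        return False  # visually convincing, but cryptographically an evil twin
```

A fresh nonce per session keeps a captured signature from being replayed, and the platform could surface the verification result much the way browsers surface TLS certificates.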
Whether you’re looking forward to the metaverse or not, the big platforms are headed our way. And because virtual and augmented reality technologies are designed to fool the senses, these platforms will skillfully blur the lines between the real and the manufactured. In the hands of bad actors, such capabilities will quickly become dangerous. That’s why it’s in the interest of consumers and corporations alike to push for rigorous security. The alternative is a metaverse of rampant fraud, a fate from which it may never recover.
Louis Rosenberg, PhD is CEO of Unanimous AI and a pioneer in VR, AR and AI.
This post was originally published at https://venturebeat.com/2022/04/23/get-ready-for-your-evil-twin/