This story appears in the July issue of Utah Business.
“I think there’s a storm brewing on the horizon,” says Kirkland & Ellis attorney Devin Anderson, referring to the escalating threat posed by deepfake scams.
It’s a storm that could quickly become a flood of confusing and potentially harmful dupes invading nearly every industry. While deception is nothing new, the evolving and often shockingly realistic forgeries powered by generative artificial intelligence are giving cause for concern.
What is a deepfake?
Defined by Merriam-Webster as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said,” deepfake videos and audio messages are fast becoming a tool for those intent on causing harm. Increasingly, the technology is used to create damaging content, such as pornography meant to humiliate victims or fabricated audio of statements that were never actually made, and such stories are appearing in the news more often. Reports of fraud and financial harm are also on the rise as deepfake technology becomes more accessible and more convincing.
Don’t expect this trend to dissipate anytime soon, Anderson warns.
“Many people might be susceptible to being taken advantage of simply by not realizing how far the technology has advanced,” he says.
Anderson notes that Utah is one of the few states trying to get ahead of the problem. Following the 2024 legislative session, Gov. Spencer Cox signed SB 149, known as “the AI bill,” into law; among other provisions, it establishes the Office of Artificial Intelligence Policy and a regulatory AI analysis program. Even so, Anderson worries that the technology is advancing faster than legal protections can keep up.
Lehi, Utah-based company Attestiv is dedicated to giving people a fighting chance against deepfakes. The AI forensics venture, founded by experts in cloud computing, provides tools to help individuals and companies identify and defend against fake digital media. Its platform can analyze images, documents and video for signs of fraudulent origin, and the company’s larger goal is to eventually offer a free resource anyone can use.
“The worst thing is for people to feel helpless and not know what’s real,” says Esteban Hernandez, Attestiv’s director of product. “We’re really wanting to give people the tools to help mitigate risks and empower them to navigate the consequences of generative AI.”
How can you spot a deepfake?
For now, Hernandez recommends giving everything, especially images that appear shocking, a closer look.
“There are still AI models that, for example, might not be able to actually spell words,” Hernandez says. “Say you’re looking at a generated AI image of a coffee shop. If the signs look like gibberish, it’s most likely generated. Also, you can look at people’s faces and see if there’s any kind of oddness in the eyes. Or you can look at their hands — sometimes there’s a third hand coming out of nowhere.”
However, those telltale errors could eventually vanish. As Anderson forecasts, a storm of scamming may be brewing. Education and a healthy dose of doubt will be vital, especially as the technology — and the opportunity to create chaos — evolves.
“I think, especially as we get into a highly contested election, there’s maybe a degree of skepticism we all should be bringing as we evaluate things,” he says. “In an online world where everything has to be reacted to in a snap, maybe we just need a little bit more of a pause as we come across these things.”