The growing accessibility and power of deepfakes and generative AI are causing headaches for fraud prevention professionals and forensic investigators, and the problem appears to be getting worse. Tencent Cloud is offering Deepfakes-as-a-Service, charging $145 to generate digital copies of an individual based on three minutes of video and one hundred spoken sentences, The Register reports.
The interactive fakes take only 24 hours to produce, and use timbre customization technology to avoid the flat intonation that can sometimes alert viewers to the presence of a virtual human.
The Cyberspace Administration of China has put rules in place for generative AI that seem to require the products of this service to be clearly marked as such.
Criminals are already demonstrating the nefarious uses this kind of technology can be put to. Arizona outlet Arizona’s Family reports an incident in which scammers cloned a teenager’s voice to stage a fake kidnapping: the purported kidnappers phoned the teenager’s mother, demanded a million dollars in ransom, and threatened to harm the girl if her mother did not pay.
The teen’s mother quickly confirmed that her daughter was safe without paying, but AI experts are warning people to be alert to the possibility of similar fraud attacks.
Journalists, too, are finding a ready audience for their tales of AI trickery, with the latest example coming from a Wall Street Journal columnist who managed to trick her bank’s voice biometric system and family members, at least temporarily. Senior Tech Columnist Joanna Stern cloned herself with help from a professional generative AI service and an extra layer of voice technology.
Research from Regula indicates that roughly a third of businesses have already suffered a deepfake fraud attack.
Generative AI threatens digital forensics
Deepfakes were one of the four topics in focus at the recent EAB & CiTER Biometrics Workshop.
Anderson Rocha, professor and researcher at the State University of Campinas and visiting professor at the Idiap Research Institute, presented a keynote on “Deepfakes and Synthetic Realities: How to Fight Back?”
“Deepfakes are just the tip of the iceberg,” Rocha says. Generative AI is overturning longstanding assumptions in forensics.
With the ability to generate synthetic video, audio, text and other kinds of data, multiple complete-yet-fake narratives become possible.
“The singularity” is a long way off, Rocha argues, but as Arthur C. Clarke noted, “any sufficiently advanced technology is indistinguishable from magic.”
AI is used in digital forensics to help identify, analyze and interpret digital evidence, in part by searching for the artifacts that are, at least in theory, left behind by every change made to a piece of evidence.
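To make the artifact-hunting idea concrete, here is a minimal sketch of one classic technique from this family, error level analysis (ELA). It is an illustrative example, not a method attributed to Rocha's team: resaving a JPEG at a known quality and differencing it against the original can highlight regions whose compression history differs, a possible sign of editing.

```python
import io

from PIL import Image, ImageChops


def error_level_analysis(image, quality=90):
    """Return a per-pixel difference between `image` and a freshly
    recompressed copy of it (a simple ELA sketch)."""
    original = image.convert("RGB")
    # Recompress the image at a fixed, known JPEG quality.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    # Edited regions often recompress differently from untouched
    # ones, so they stand out in the difference image.
    return ImageChops.difference(original, resaved)
```

In practice an analyst would inspect the resulting difference image visually or threshold it; as the article notes, modern AI-driven edits are precisely the ones that tend not to leave such obvious traces.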
The problem of determining media provenance was raised to Rocha’s team in 2009, with a real-world investigation into the legitimacy of photos of Brazil’s then-President published in news media. Rocha described the techniques used at the time and their evolution to incorporate computer vision, up until around 2018, when the explosion of data and advances in neural networks changed the possibilities for manipulating photos and other evidence.
Now, combinations of detectors with machine learning are necessary to detect the more-subtle manipulations that have become possible with AI. The pace of AI advancement, however, poses a constant challenge to forensic investigators.
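One simple way such combinations can work, sketched below under the assumption that each detector emits a manipulation score in [0, 1], is score-level fusion: a weighted average of the individual detectors' outputs. The function name, scores and weights here are hypothetical; production systems would typically learn the fusion weights from labeled data.

```python
def fuse_detector_scores(scores, weights=None):
    """Combine per-detector manipulation scores (each in [0, 1])
    into a single weighted-average score."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)


# Hypothetical example: an ELA detector, a noise-pattern detector,
# and a CNN-based detector each score the same image.
verdict = fuse_detector_scores([0.82, 0.61, 0.94], weights=[1.0, 0.5, 2.0])
is_manipulated = verdict > 0.5
```

Fusing heterogeneous detectors hedges against any single detector's blind spots, which is why ensembles remain attractive even as individual detectors chase a moving target.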
The true threat of generative AI, in Rocha’s view, is therefore not so much deepfakes as manipulations that leave no detectable artifacts.