Opinion – The generative AI industry will be worth about A$22 trillion by 2030, according to the CSIRO, Australia's national science agency.
These systems – of which ChatGPT is currently the best known – can write essays and code, generate music and artwork, and have entire conversations. But what happens when they are turned to illegal uses?
Last week, the streaming community was rocked by a scandal linked to the misuse of generative AI. Popular Twitch streamer Atrioc issued a teary-eyed apology video after being caught viewing pornography with the superimposed faces of fellow women streamers.
The “deepfake” technology needed to superimpose a celebrity’s head on a porn actor’s body has been around for a while, but recent advances have made such fakes much harder to detect.
And that is the tip of the iceberg. In the wrong hands, generative AI could do untold damage. There is a lot we stand to lose, should laws and regulation fail to keep up.
From controversy to outright crime
Last month, generative AI app Lensa came under fire for allowing its system to create fully nude and hyper-sexualised images from users’ headshots. Controversially, it also whitened the skin of women of colour and made their features more European.
The backlash was swift. But what was relatively overlooked is the vast potential for artistic generative AI to be used in scams. At the far end of the spectrum, there are reports of these tools being able to fake fingerprints and facial scans (the methods most of us use to lock our phones).
Criminals are quickly finding new ways to use generative AI to improve the frauds they already perpetrate. The lure of generative AI in scams comes from its ability to find patterns in large amounts of data.
Cybersecurity has seen a rise in “bad bots”: malicious automated programs that mimic human behaviour to conduct crime. Generative AI will make these even more sophisticated and difficult to detect.