LOS ANGELES — According to a recent report, as with the internet itself, deepfakes were seemingly made for porn.
Cybersecurity company Deeptrace uses deep learning and computer vision to detect and monitor so-called “deepfakes” — synthetic videos created by artificial intelligence from existing media, such as a victim’s photo or video clip. The firm has revealed that nearly 15,000 deepfake videos are now online.
Deeptrace believes that deepfakes present unprecedented cyber and reputation risks to businesses and private individuals — from believable fake news capabilities to sophisticated fraud, identity theft and public shaming tools. And while the initial controversy over the 2017 debut of deepfakes pointed to their potential to poison elections, the new report finds that 96 percent of them are porn.
That still leaves some 4 percent for political swaying and other malicious uses.
As many readers know, in its most common incarnation, deepfake porn superimposes a person’s face onto a performer’s body to make it appear that the person was featured in the scene.
Several legislative initiatives are underway to curb deepfake porn. In California, for example, it is now against the law to produce or promote political deepfakes, and victims of nonconsensual deepfake porn may sue the producers of this material.
From the first time a monkey used a stick to draw a set of tits in the dirt, to today’s most sophisticated AI neural networks, there’s one thing that we can count on: given any new technology, our artistic expression will always use it for something sexual — and that’s a far healthier application than trying to sway the next election.