Playing Whack-a-Mole With Deepfakes
Tech advancements plus ill intent are creating a deepfake nightmare. Can the legal system keep up?
When I was 15, I spent an all-night house party holed up in my friend’s dad’s home office, drunkenly kissing a boy named Tom.
The next day, Tom sent an email to all of our friends: “I f***ed Emma while she was asleep,” he wrote. I remember my confusion. Was this a joke? If it was, would our friends realize that? And if it wasn’t, then what was it? A threat? A confession? An audacious brag?
In 2001, inflicting reputational damage was close to the worst thing a teenage boy could do online to a girl of the same age. But my eldest daughter, who turns 5 this year, will come of age in an online world in which the power dynamics are far more pernicious.
Already, “nudification” apps, which allow users to create deepfake pornography of unsuspecting victims, are invading schools. Governments are racing to introduce legislation to prevent their use without the subjects’ consent, but the availability and ubiquity of these apps mean that trying to crack down on the deepfakes they produce is quickly becoming an impossible game of whack-a-mole.
The use of nudification apps is growing. A 2019 study from the cybersecurity company Deeptrace found that 96 percent of online deepfake content was nonconsensual pornography, and research by the intelligence company Sensity found that in December 2020 there were 85,000 harmful deepfake videos online, a figure the company said had been doubling every six months over the preceding two years. In January, X (née Twitter) was forced to suspend searches for Taylor Swift after deepfake pornographic images of her went viral.
Schools are trying to figure out how to respond to the epidemic. In October last year, boys at a school in Westfield, N.J., were found to have been sharing images of their 10th-grade female classmates “with exposed A.I.-generated breasts and genitalia” in the lunchroom and on the school bus, the reporter Natasha Singer wrote in The New York Times. The school’s response, according to the mother of one 14-year-old victim, was to suspend the perpetrator for two days.
“It seems as though the Westfield High School administration and the district are engaging in a master class of making this incident vanish into thin air,” the mother said during a meeting, according to The Times.
With schools barely able to keep up, states in the U.S. are trying to use the law to combat what’s rapidly becoming a systemic threat. In Louisiana, the creation or distribution of deepfakes of minors is now punishable by a prison sentence of up to a decade. Meanwhile, the U.K. last week introduced new rules making the creation of sexually explicit deepfakes a criminal offense. But it might ultimately emerge that the combination of technology and ill intent is far more powerful than a legal system straining to keep up with this digital Wild West. Law enforcement officials have admitted that detecting deepfakes is proving, if not impossible, then certainly very tricky.
It is no coincidence that the victims tend to be specific women and girls: pop stars, activists, teenage boys’ classmates. These images are almost always created to exert power over women, the modern equivalent of “I f***ed Emma while she was asleep.”
Compounding the problem, most of the victims don’t have the means or the knowledge to seek help or to alert someone to what’s happened. Shame and embarrassment also play a role in keeping victims quiet.
The heart of this awful trend is not arousal; it’s power. The writer Laura Bates, herself the victim of a number of deepfake abuses, has told The Guardian it is “just the new way of controlling women. You take somebody like Swift, who is extraordinarily successful and powerful, and it’s a way of putting her back in her box. It’s a way of saying to any woman: It doesn’t matter who you are, how powerful you are—we can reduce you to a sex object and there’s nothing you can do about it.”