Privacy and digital rights advocates are raising alarms about a newly signed federal law targeting revenge porn and AI-generated deepfakes. The Take It Down Act makes it illegal to publish nonconsensual explicit images, whether real or AI-generated, and gives platforms just 48 hours to remove such images after a victim's request or face liability.
While many have applauded the law as a victory for victims, some experts warn that its vague language, low standards for verifying claims, and tight compliance window could lead to overreach, censorship of legitimate content, and even surveillance. India McKinney of the Electronic Frontier Foundation argues that content moderation at scale is deeply problematic and always ends up censoring important and necessary speech.
Platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). The law requires takedown requests to come from victims or their representatives, but it asks only for a signature, with no photo ID or other verification. That lowers the barrier for victims, but it could also open the door to abuse. McKinney predicts an increase in requests to remove images of queer and trans people in relationships, as well as consensual adult content.
Senator Marsha Blackburn, a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. The Heritage Foundation, the conservative think tank behind Project 2025, has likewise argued that keeping trans content away from children is a way of protecting them.
Under pressure to remove images within 48 hours, platforms are likely to simply take content down without investigating whether it actually qualifies as NCII. That is McKinney's worry, and she is not alone. Mastodon, a decentralized platform that hosts its own flagship server, has indicated it would lean toward removal if verifying the victim proves too difficult, and decentralized platforms like Mastodon and Bluesky could be in a particularly tough spot. The law allows the FTC to pursue any platform that fails to reasonably comply with takedown demands, even if it is not a large company.
McKinney also predicts that platforms will become more proactive about content moderation, scanning material before it is published to avoid problems later. AI is already being used to detect harmful content such as deepfakes and child sexual abuse material. Kevin Guo of Hive, an AI-powered content detection startup, supports the Take It Down Act and believes it will push platforms to step up and tackle these issues head-on.
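Proactive scanning of this kind often relies on perceptual hashing: new uploads are fingerprinted and compared against fingerprints of images that have already been reported. The sketch below is a minimal illustration of that idea in Python, assuming the Pillow and imagehash libraries; the hash store, threshold, and function names are hypothetical, and it does not describe how Hive or any particular platform actually works.

```python
# Illustrative sketch only: one way a platform *might* pre-screen uploads against
# perceptual hashes of previously reported nonconsensual imagery. The hash store,
# threshold, and function names are hypothetical.
from PIL import Image
import imagehash

# Hypothetical store of perceptual hashes from images already confirmed as NCII.
KNOWN_NCII_HASHES: set[imagehash.ImageHash] = set()

# Maximum Hamming distance at which two perceptual hashes count as a match.
MATCH_THRESHOLD = 5

def register_reported_image(path: str) -> None:
    """Add a confirmed image's perceptual hash to the blocklist."""
    KNOWN_NCII_HASHES.add(imagehash.phash(Image.open(path)))

def should_block_upload(path: str) -> bool:
    """Return True if an upload is visually close to any known reported image."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_NCII_HASHES)

if __name__ == "__main__":
    register_reported_image("reported.jpg")       # image removed after a takedown request
    print(should_block_upload("new_upload.jpg"))  # screened before publication
```

Real-world systems add human review, classifiers for previously unseen images, and secure hash sharing between platforms; the point here is only that pre-publication screening is technically straightforward, which is part of why critics expect platforms to adopt it broadly.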
Reddit says it uses its own tools and teams to address NCII and partners with the nonprofit SWGfL, but it remains unclear how platforms will verify that the person requesting a takedown is actually the victim. McKinney worries that this kind of monitoring could eventually extend into encrypted messages, since the law makes no exception for end-to-end encrypted services such as WhatsApp or Signal. Meta, Signal, and Apple have not shared their plans for handling takedown requests involving encrypted messaging.
President Trump, who signed the Take It Down Act, has hinted he might use it for his own benefit, and he has a record of cracking down on unfavorable speech. With school boards banning books and politicians seeking more say over what people see online, McKinney, who has watched years of content moderation fights, says she is troubled that both parties are pushing for greater control over online speech.
Whether the Take It Down Act proves a step forward for victims or a slide toward censorship and surveillance remains to be seen, but the debate is far from over.