The European Commission (EC) has recently proposed new regulations that would require chat apps like WhatsApp and Facebook Messenger to comb through flagged users’ private messages for child sexual abuse material (CSAM).  “This is an impressively bold and ambitious proposal to systemically prevent avoidable child abuse and grooming which is taking place at record levels,” Andy Burrows, Head of Child Safety Online at the National Society for the Prevention of Cruelty to Children (NSPCC), told Lifewire over email. “If approved it will place a clear requirement on platforms to combat abuse wherever it takes place, including in private messaging where children are at greatest risk.”

End of End-to-End?

The regulation seeks to establish new rules for online platforms, collectively referred to as online service providers, and covers a broad range of services, including app stores, web hosting companies, and any provider of an “interpersonal communications service.”

The one aspect of the proposal that has ruffled feathers among privacy groups is the set of obligations that would apply to messaging services like WhatsApp and Facebook Messenger. Under the proposal, if and when a messaging service receives a “detection order” from the EC, it would be required to scan the flagged users’ messages for evidence of CSAM and other abusive behavior involving children. Instead of employing humans for the task, the proposal calls for the use of machine learning (ML) and artificial intelligence (AI) tools to comb through the conversations.

Margaritis Schinas, Vice-President for Promoting our European Way of Life, pointed out that the proposal also calls for safeguards to prevent misuse. “We are only talking about a program scanning for markers of illegal content in the same way cybersecurity programs run constant checks for security breaches,” noted Schinas in the EC’s announcement.

Bodies working to safeguard children have come out in support of the proposal. “This groundbreaking proposal could set the standard for regulation that balances the fundamental rights of all internet users while prioritizing child protection,” asserted Burrows.

Pitchforks And Torches

However, privacy advocates argue that the proposal effectively discourages the use of end-to-end encryption. “By threatening companies with legal actions, the Commission is likely trying to wash their hands of the responsibility for dangerous and privacy-invasive measures, whilst de-facto incentivizing these measures with the law,” opined Ella Jakubowska, a policy advisor at the digital advocacy group European Digital Rights (EDRi), in a press release.

EDRi argues that measures in the proposal jeopardize the integrity of secure communications, going as far as to claim that the new rules would “force companies to turn our digital devices into potential pieces of spyware.” It also takes exception to the use of AI-based scanning tools, referring to them as “notoriously inaccurate.”

Dimitri Shelest, founder and CEO of OneRep, an online privacy company that helps people remove their sensitive information from the internet, firmly believes that no government or social media app should scan users’ private messages, even selectively. “By legitimizing this kind of surveillance, we open Pandora’s box and create multiple opportunities to misuse the information obtained as a result of such privacy intrusion,” Shelest told Lifewire over email.

Jakubowska agrees. In the press release, she asks: if companies are allowed to scan our private messages today, what’s stopping governments from forcing them to “scan for evidence of dissidence or political opposition tomorrow?”

However, it all might come to naught. Jesper Lund, chairman of IT-Pol Denmark, believes some aspects of the proposal might not be implementable in the first place. “The proposal includes a requirement for internet service providers to block access to specific pieces of content on websites under orders from national authorities,” explained Lund in EDRi’s press release. “However, this type of blocking will be technically impossible with HTTPS, which is now used on almost every website.”

When asked if privacy violation was the only way to safeguard children online, Shelest replied with an emphatic “No.” He believes a real workable solution combines parental involvement with support from technology that could help parents keep tabs on their children’s online activities. “A good start would be for tech giants such as Apple and Google to provide wider abilities on their platforms that support parents with more advanced automation,” suggested Shelest. “The key is supporting parents in supporting their children.”
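Lund’s HTTPS point can be illustrated with a brief sketch (the URL below is a hypothetical example, not one from the proposal). When a page is served over HTTPS, the only part of the URL a network intermediary such as an ISP can observe is the hostname, exposed through DNS lookups or the TLS Server Name Indication field; the path identifying the specific piece of content travels inside the encrypted connection. A blocking order naming one page therefore leaves the ISP able to act only on the whole domain:

```python
from urllib.parse import urlsplit

def isp_visible_part(url: str) -> str:
    """Return the only URL component a network intermediary can
    observe for an HTTPS request: the hostname (seen via DNS or the
    TLS SNI field). The path and query string are encrypted."""
    return urlsplit(url).hostname

# An order might target one specific page...
url = "https://example.com/some/specific/page"

# ...but on the wire the ISP sees only the domain, so the smallest
# unit it can block is the entire site.
print(isp_visible_part(url))  # -> example.com
```

This is why per-page blocking at the ISP level effectively becomes per-domain blocking once a site uses HTTPS, which is the over-blocking problem Lund alludes to.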