
Lies about Chatcontrol, Part 1

Introduction

Recently (2022-12-01), the European Commission published an article about their planned legislation (called chatcontrol by critics). While the Commission has cited wrong statistics in the past, the published article is so stunningly bad that almost every single sentence is strongly misleading, a blatant lie or flat-out wrong. As it's rather short, I'll cite the article in full and explain what's wrong with each part. I assume you have some familiarity with chatcontrol. If not, you may want to read about what it is before reading this post.

Analysis

The title

Let's start with the article's title:

Detection systems of online child sexual abuse are reliable and save lives

It's the title, so I don't expect a long-winded explanation of what "detection systems" and "reliable" mean. However, I would expect "detection systems" to be able to detect all categories listed in the proposal (known abuse material, unknown abuse material and grooming). If this is the intended meaning, it is wrong to claim the existing "detection systems" are reliable: to be reliable, they would need both a high true positive rate and a low false positive rate. Currently, and at best, this only applies to the detection of known material. Detecting the other categories is still far from "reliable".
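To make "reliable" concrete, here is a minimal sketch of the two rates involved - all numbers below are made up for illustration and are not the Commission's figures:

```python
# Minimal sketch: what "reliable" means in terms of detection rates.
# All numbers are made up for illustration.

def detection_rates(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """True positive rate and false positive rate from a confusion matrix."""
    tpr = tp / (tp + fn)  # share of actual abuse material that gets flagged
    fpr = fp / (fp + tn)  # share of innocent content that gets flagged
    return tpr, fpr

# A hypothetical detector that catches 90% of abuse material but also
# flags 1% of innocent content - a disaster at the scale of all EU
# communication, despite the high true positive rate.
tpr, fpr = detection_rates(tp=900, fn=100, fp=10_000, tn=990_000)
print(f"true positive rate:  {tpr:.1%}")   # 90.0%
print(f"false positive rate: {fpr:.1%}")   # 1.0%
```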

The title as stated is therefore either wrong (if "detection systems" is meant to cover everything in the proposal) or misleading (if it only refers to the detection of known material).

For the "save lives" part, we'll see about that later.

Introduction

In May 2022, the European Commission proposed a law that will make it mandatory for service providers to assess the risk of child sexual abuse on their platforms and, where necessary, to implement preventative measures. Where prevention cannot reduce a significant risk, providers could also be ordered to run targeted checks for child sexual abuse on their products and services and report cases to a new dedicated EU centre.

The most accurate part of the article: a misleading explanation of the proposal. Implementing "preventative measures" refers to adding age controls and/or scanning communication contents, introducing barriers to communication and/or severe privacy issues. Even if service providers take reasonable measures to prevent abuse of their services, they can still be forced (ordered by a court in any member state) to scan the communication of their users - de facto outlawing end-to-end encryption. To be completely clear: this creates an illegal mass surveillance system. To claim it's "targeted" is ludicrous.

The justification

In many situations, providers are the only ones that can detect the abuse, as the child is too young or too scared to report it and their parents or carers are either not aware or are the abusers themselves.

So, let me get this straight: the EU chooses to focus on children who are too young or too scared to report their abuse, and whose parents or carers are either unaware of it or are the abusers themselves - and the Commission's answer is to scan everyone's private communication.

When child sexual abuse is detected online, it helps the police to identify and rescue victims, ensure that images and videos of their abuse are removed quickly, and bring sexual predators to justice.

Well... no. Even working chatcontrol won't help, as detection can only happen after the abuse has already taken place (more on that below).

Also, given that the police have a history of failing to report CSAM to providers and of actively spreading CSAM themselves, the claim that "images and videos of [the victims'] abuse are removed quickly" is not exactly credible.

In short, this law will help protect children, support victims, and save lives.

It will not "protect children": Abuse material detection can logically only be done after the abuse has taken place - acting before the abuse occurs would only be possible with grooming detection. However, that technology is still an active area of research and not reliable at all, so it cannot be used to "protect children" either.

The named goals can be achieved without mandating a mass surveillance system, which will simply increase the huge backlog of material to analyze and therefore also won't help to "save lives".

Why this proposal?

The current laws are not enough

...but this doesn't mean that the proposal improves the situation.

This law is crucial in the prevention and fight against child sexual abuse because the current system of voluntary prevention, detection and reporting that some online service providers have implemented is not effective. Some companies take comprehensive action, while most take little or no action at all. These gaps in action mean that there are open spaces for abusers and risks for children, and abuse continues undetected. At the moment, companies are free to decide to change their policies at any time, which can have significant impact on children.

This is not the first time the Commission has claimed that companies are not voluntarily detecting enough. As I've written previously, there are various reasons why the proposal will not solve these issues.

Technical explanation

How detection of child sexual abuse online works

Or rather, a misleading explanation of how the most accurate detection system works - the one that can only detect known abuse material.

The technology that detects child sexual abuse online has been used all over the world for over 10 years.

There is no single "technology" that can detect child sexual abuse online. Technology being used worldwide is also the norm on the internet and not indicative of software quality.

It has proved to be successful, effective and accurate.

... for some unnamed definition of "successful, effective and accurate". Six videos account for half of the reports by Facebook to the NCMEC (Ctrl-F "six"). If people are able to upload the same abusive video millions of times to Facebook, that hardly indicates that blocking uploads of known material is effective at reducing its distribution.

Maybe the limited use of existing technology was "successful, effective and accurate" - I have no way to know, since the Commission didn't provide any evidence or sources for their claims. But what I can say with certainty is that while detection of known material is mostly accurate, the technology required by the proposal (detection of unknown abuse material and of grooming) is not accurate at all: given current false positive rates and the Commission's own numbers, there would be hundreds of thousands of false positives. This makes me strongly doubt that the technology will be "effective" or "successful".
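For a sense of scale, here is a back-of-the-envelope calculation. Both inputs are assumptions for illustration - the Commission has not published precise volumes or error rates:

```python
# Back-of-the-envelope: false positives at EU messaging scale.
# Both inputs below are assumed values, for illustration only.

messages_scanned_per_day = 1_000_000_000  # assumed EU-wide daily volume
false_positive_rate = 0.0005              # assumed 0.05% FPR for unknown-material/grooming detection

false_flags_per_day = messages_scanned_per_day * false_positive_rate
print(f"{false_flags_per_day:,.0f} falsely flagged messages per day")
# -> 500,000 per day, each one needing human review - growing the
#    existing backlog instead of shrinking it.
```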

The technology does not read messages and it does not monitor photos and videos.

Yes, it does. "The technology" needs to know the contents of the communication; otherwise, detection is currently impossible. (There is academic research on processing data in encrypted form (homomorphic encryption), but it is so far from practical use that even the European Commission did not seriously consider it, so the article is definitely not referring to that.)

Instead, it converts all content into code and compares it with code that belongs to previously reported child sexual abuse content and to other indicators to identify child sexual abuse.

This description only applies to detection of known material. Unknown material detection and grooming detection work differently and are less precise.

The detection process is intended to work on trusted devices, usually servers operated by the service providers. Trying to perform detection on hardware owned by users (so-called client-side scanning) leads to various issues.
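For known material, the "code" the article alludes to is a perceptual hash. Below is a minimal sketch of the general idea, using a simple average hash for illustration - production systems such as Microsoft's PhotoDNA use more robust, proprietary algorithms, so none of this is the actual deployed code:

```python
# Sketch of hash-based detection of KNOWN material: compute a
# perceptual hash and compare it against hashes of previously
# reported content. Average hash chosen for illustration only.

from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size grayscale image; each pixel becomes one
    bit: 1 if brighter than the mean, 0 otherwise."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A small distance tolerates re-encoding and minor edits - which is
# exactly why this works for known material yet says nothing about
# previously unseen material or grooming.
MATCH_THRESHOLD = 5

def is_known(path: str, known_hashes: list[int]) -> bool:
    h = average_hash(path)
    return any(hamming_distance(h, k) <= MATCH_THRESHOLD for k in known_hashes)
```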

If there is a match, the content gets blocked and reported.

A simplification, but a mostly accurate one. However, automated blocking and reporting is a terrible idea: this case demonstrates that a correct detection doesn't necessarily mean a crime was committed, so automated reporting and blocking is harmful even for "correct" detections. The issue gets worse when you consider that the required detection technologies are inaccurate, leading to false positives. Additionally, automated blocking and reporting could easily be abused by authorities simply by adding innocent images to the "code" list.
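Continuing the sketch from above (with hypothetical file names), the abuse vector is plain to see: the matching step cannot distinguish hashes of abuse material from hashes of anything else someone puts on the list:

```python
# Nothing technical prevents repurposing the list (hypothetical files):
known_hashes = [average_hash("reported_abuse_image.png")]
known_hashes.append(average_hash("political_meme.png"))  # silently added

print(is_known("political_meme.png", known_hashes))  # True -> blocked & reported
```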

Sniffer dogs

Just like a sniffer dog looking for drugs in a suitcase, it only sniffs out illegal substances, and it does not care about the rest of the suitcase’s contents nor about the identity of its owner.

Yes, as everyone knows, every store (service provider) is legally required to keep a dog that sniffs every customer for drugs, and if the dog detects something, the customer is instantly reported to the police!

Sounds weird, inaccurate and unrealistic? And yet, following their metaphor, that's exactly what chatcontrol would look like in the real world. Either this is deliberately misleading to emphasize how harmless chatcontrol would be, or they didn't really think through this metaphor. You decide.

Privacy-friendly mass surveillance

The new EU rules on combating child sexual abuse online will protect all users’ privacy and will help victims recover their privacy by stopping images and videos of their worst moments from circulating the internet.

Criminals are already sharing abuse material via their own illegal platforms, and often encrypt and store their material on legal third-party services. This proposal won't change that, and no amount of mass surveillance can. There is no realistic way of restoring the victims' privacy; the only thing that would stop images from circulating on the internet would be shutting it down.

Detection of child sexual abuse will adhere to data protection rules and rules on privacy of communication.

Having the nerve to claim it will "adhere to data protection rules and rules on privacy of communication"... I don't even know how to respond to something so obviously wrong.

In addition, the law will ensure that detection systems are used only to find and report online child sexual abuse.

...until the law is amended to allow scanning for more. This "detection system" can easily be adapted to any kind of images. Apparently there are already Members of Parliament advocating for using this system to stop the spread of terrorism-related images. Enforcing copyright would also be a great fit for this system. Even if the Commission honestly intends to use this system "only to find and report online child sexual abuse"... will their successors do the same?

Passing the law

Share the message

The EU’s goal is to make this law a reality by August 2024.

It's the European Commission's goal to make this law a reality, not the EU's. Members of Parliament have been rather critical of the proposal and Austria is currently publicly opposed to it. Internally, there have been many critical questions the Commission has been unable to answer.

If the law does not pass by then, the current EU regulation will expire, making most detection of child sexual abuse online impossible and therefore easier for predators to sexually abuse children without consequences.

The word "most" an exaggeration. Scanning private messages would be illegal, but this doesn't mean it'll become easier to abuse children. Given that grooming detection technology is still experimental, I would argue that at best the proposal would reduce the amount of circulating abuse images by a small amount, at least until abusers completely switch to running their own platforms or encrypt their data. There would not be a big difference compared to the regulation expiring.

If the Commission absolutely wants to protect children with technology, there are options.

There are also many alternatives that could help to prevent, detect and reduce abuse without relying on technological solutions at all!

The only thing that would really be lost in 2024 is the permission to spy on people's private messages. And to me, that very much seems like a good thing.

Final words

I'm pretty disappointed that the Commission shared an article containing so many misleading and at times flat-out wrong statements. It's not the first time the Commission has shared misleading information - and it's probably not the last.

I cannot prevent them from making false statements. But what I can do is make sure that everyone knows the European Commission is spreading false information. So that's what I will do, until this stupid proposal is off the table. Consider this part 1 of a new series.

Written on 2022-12-03
Tags: politics, chatcontrol, factcheck