
One of the biggest issues I have with legislation that flirts with breaking/banning E2EE is that the problem space it is attempting to deal with is often not fully set out. In particular, the big missing piece here I think is how E2EE environments are being used for illegal activity and therefore whether law enforcement having access to such environments would actually help with the detection and prevention of illegal activity. I have not come across particularly convincing evidence on the prevalence of illegal activity on E2EE platforms that could justify intervention by LEAs, especially given the severe privacy implications.

However, I appreciate that a definitive answer is difficult to reach, because in a true E2EE environment neither platforms nor LEAs have the visibility to make reliable estimates of prevalence. I know WhatsApp resorts to using "unencrypted data" and user reports, but I cannot imagine that this is sufficient for determining the true prevalence of illegal activity (or child abuse, to be more specific): https://faq.whatsapp.com/5704021823023684

I've written previously about the lack of evidence on this from the LEA side of the debate (https://www.thecybersolicitor.com/p/notes-on-e2ee-and-client-side-scanning).


That's because there isn't much in the way of evidence. It's all fear/emotion-based rhetoric, because the activities (CSAM) are appalling and we humans do a shit job of handling it. It is a hard problem, and various well-meaning but ill-informed people think 'technology' can solve it when it really can't.

Separately, there are a load of opportunists eager to make surveillance a thing. The digital Stasi exist -- they never really went away.

The bottom line is that breaking encryption does little to protect children, and much to erode freedom and privacy.

I'm glad you're also talking about this. We need all the sensible voices to be explaining why these proposals are so bad.
