I write about tech extensity, regulatory failure, and how power gets entrenched.
What is this newsletter about?
Hey there. My name is Carey Lening (aka Privacat), and I write about what happens when big tech companies, billionaires, and, importantly, the systems they create become too entangled to regulate or unwind.
I call this tech extensity.
Our world is increasingly shaped by these entangled systems, and by the small cluster of companies and individuals who build them. The most obvious examples are AI and social media, but behind the scenes, extensity also exists in the form of surveillance tools, satellite communications, and the algorithmic systems that judge who we are on a daily basis.
I’m worried about how this spread creates dependencies and erodes our rights and freedoms. I believe that left unchecked, this will lead to the deterioration of rule of law, democracy, and the gradual disempowerment of billions.
This isn’t a foregone conclusion—we don’t have to resign ourselves to this fate. It’s my hope that by putting a name and some shape around the problem, we can collectively arrive at a solution before it’s too late.
If you also think this is a problem, you should subscribe.
A little about me
My friend calls me an ‘optimistic contrarian’, which is to say, I think we’re in a very dark place right now and it’s getting worse, but we can make it better if we don’t let nihilism defeat us. I’m neither an AI doomer, nor a techno-optimist, and I will snarkily call out bullshit on both sides when I see it.
I’ve spent two decades advising on data protection, privacy and technology law, information security, and the ever-growing list of disciplines that sit in the middle. I’ve been a journalist and a researcher, a lawyer, and an intel analyst. I served time at Palantir (2012-2014) and Meta (2018-2019). After leaving Meta, I’ve spent the past seven years atoning for my sins as a privacy and strategic consultant for technology companies and the public sector in Europe. This stuff isn’t academic for me; I know what works (and what doesn’t).
Over the years, I’ve gotten really good at identifying where things break down: in organizations, in systems, and in the gap between what the law says, what companies promise, and what technology actually does. I explain complex things in plain language, and don’t sugar-coat the complexities.
Some people tell me I’m funny. I swear a bit. My husband thinks I have a mild cat addiction.
If you’re a regulator, politician, tech leader, think tank, or NGO, or just have a good solution, we should talk.
Where to start
If you’re new here, I’d recommend reading a few pieces first to get an idea about my thinking and writing style. I’d start with ‘How Big Tech Becomes Ungovernable’, and then move on to the ‘Ladder to Nowhere’ series, where I lay out how OpenAI is trying to build itself into the informational substrate:
I talk about privacy nihilism here, and why the law is partially to blame:
For some lighter fare, check out my Privacy Disasters series, where I break down specific technological failures, why they matter, and what the companies involved should have done instead: