Are We Already Living in an Optimal World?
Rethinking privacy & consent in the not-so-distant future.
I am currently sitting in London Heathrow waiting for an extremely delayed flight back to Dublin. I was here to present at the brilliant Gikii conference at UCL! It was a wonderful, intimate event, and I got to yammer at the audience on a subject that I’ve been meaning to write on for some time: namely, how the book Optimal by J.M. Berger left me deeply concerned about the premise of the notice & choice model of consent that so many of our data protection laws rely on.
Since the conference wasn’t recorded, I don’t have any video to share, so I’ll do the next best thing — recreate my presentation in blog form, and attach the humorous slides for your amusement. Just read it in my voice, or maybe I’ll get around to recording an audio of it. But not now, because now I’m sitting at Heathrow. Still.
I’ve made some minor edits and clarifications, but for the most part, this closely matches what I presented at the conference. I hope you like it.
BTW, if you haven’t yet, please consider listening to the new podcast Conor Hogan and I have been working on for the past month or so — Chance Conversations with Carey & Conor. Our first two episodes are up, and we have more coming. So far, a bunch of people think it’s neat, and the delightful guests we’ve had on (luminaries like Ralph O’Brien, Lisa Forte, Shoshana Rosenberg & Robert Bateman, for starters) have all really enjoyed it.
And if you like this, consider sharing this publication with your friends.
The Premise of an Optimal World
“What is this privacy of which you speak?” she said, amused. “Does privacy exist?” (Chapter 4, Optimal)
It's the near future. The Algorithm Wars have ended, and the world has been optimized … for the better. Thanks to an omnipresent, albeit not exactly all-knowing algorithmic network known as 'The System', everyone is connected to everyone else, through nearly invisible wearable devices that they use to interact with the world.
To paint the picture, imagine that Sam Altman got his wish and we were all wearing AGI-level ChatGPT earbuds with Scarlett Johansson’s voice telling us what to do, Her style.
In this world, liking and sharing is everything — from the special and extraordinary to the truly mundane. People argue over everything, from who manufactured the best watch, to the latest political scandal, all on a clearly Twitter-inspired social network.
The Optimal world is one where the always-watching system feeds its inhabitants perfectly curated recommendations, algorithmic dopamine hits.
But following prompts and respecting profiles generally led to good results, and ignoring them led to outcomes that were less than optimal—not failure or disaster, necessarily, but friction and frustration. Distraction, wasted time, unnecessary effort, mild anxiety, lack of forward momentum... (Chapter 1)
The System optimizes for everything — what people eat, where they go, what exercises they do, who they should screw. It's a true, always-on lifestream with extremely strong nudging behaviors. It uses this data to make highly accurate recommendations about what might be best for every human under its gaze. It's constantly trained on new data, and able to draw new insights.
And it turns out, people in this world really like the System because it massively decreases all those pesky choices we humans have to make.
Almost literally too long to read in a human lifetime, the ToS arrived in the form of a word cloud displaying the most important terms and phrases—privacy, security, identity, personalization, sharing, permissions, storage, hold harmless, indemnify, content, intellectual property, worldwide license, and network access. The full text was available on demand, but who could be bothered? (Chapter 6)
In the Optimal world, privacy and consent still technically exist. Users can, theoretically, make a choice by accepting or rejecting the ToS. There’s even a hint of transparency … if a giant word cloud of legalese counts as transparency.
And people can turn the recording aspects of the System off sometimes. Personal lifestreams can be filtered, although their separate workstreams are constantly monitored and recorded. Those details are the property of the employer, after all.
Consent, Kafka, and the Individual Control Model
I’m going to shift gears for a moment to discuss our non-Optimized world, and our current model of consent.
Most data protection & privacy legal frameworks rely on a ‘notice-and-choice’ approach, or what Professors Daniel Solove and Woodrow Hartzog refer to in their recent law review article, Kafka in the Age of AI, as the ‘Individual Control Model’. The Individual Control Model (ICM) is based on the idea that individuals, when provided with transparent information, are empowered to make reasonable choices about their personal data. Article 7 of the GDPR, for example, is based on the ICM. For consent to be meaningful, it must be:
Freely given
Specific
Informed
Unambiguous
The implicit assumption made by policymakers and pundits who advocate for notice-and-choice / the ICM is that individuals must be given sufficiently clear information about the specific uses of their personal data in order to make a real choice. Inappropriate pressure or influence that could affect the outcome of that choice renders any ‘consent’ invalid; similarly, controllers can’t bundle consent into something else (like a ToS), or pre-tick that ‘I consent’ box.
There’s No Notice & Choice in an Optimal World
Everything he said, everything he did, it all went into the System. (Chapter 8)
Now, obviously, in the Optimal world, even these basic notions of individual control don’t exist. People can’t give meaningful consent (or any consent, really) for specific processing activities, because the System uses personal data in a completely opaque way, usually without their knowledge. What the System does, how it works, and how it makes decisions are all left as unknowns.
And even if individuals, like the protagonist Jack, are given opportunities to review these ‘choices’ when the ToS change, there’s no easy way to opt out in practice. When Jack tries to fight the System by, say, choosing a different path to jog or a different food to eat, he finds that there’s friction.
Thus, there’s no meaningful choice in an Optimal world. Just a notional sense of power or choice.
And what had I given up, because I thought it knew me better than I knew myself? What price did I pay for that illusion of security? (Chapter 14)
Jack’s world is clearly meant to represent a dystopian vision of what could be, but it doesn't really feel all that different from what we have today. How many of us are influenced by the algorithmic choices made on our behalf about what to read, what to buy, where to holiday, or who to date? How many of us let the algorithmic chum-bucket of TikTok, or Twitter, or YouTube present us with helpful suggestions on what to focus our eyeballs on?
And with the growth of life-logging Spy-wareables like the Humane and Friend pins, personalized AI assistants, neural tech that reads our brainwaves (or worse), always-on IoT, and omnipresent facial recognition (heyyo London!), the Optimal world honestly feels like it’s not much removed from our current world.
Privacy Norms are Weak, But Opting Out is Hard
Privacy is a norm, and not a very strong one at that. (Chapter 4)
As Professors Solove and Hartzog observe, we humans are all too willing to submit to the warm comforts of convenience, to capitulate to authority, or to cave in to societal norms. Even if doing so might actually harm us.
Sure, some of us spend time trying to argue that we can 'own our data' or meaningfully control its use, but most of us still hit 'accept all' to cookies, rely on algorithmic recommendations, and own always-listening smartphones.
That’s because, in practice, fighting against coercive, deceptive practices and reading 30-page privacy statements is really hard, and EXTREMELY inconvenient. You could say it’s sub-optimal to go against all the technological progress we’ve made.
Even our best data protection pros have a hard time swimming upstream against these invasive, coercive practices.
Case in point: a recent discussion on LinkedIn among colleagues who will remain nameless, lamenting the fact that they were opted in to marketing by Ticketmaster when they signed up for the Oasis reunion tour. This wasn’t even for concert tickets, mind you — just the chance to bid on concert tickets later that week. Now, Ticketmaster is the literal worst, but they aren’t alone in this practice. Through regular erosion and willful disregard for the law, it has become normative. And that norm is, for most of us, stronger than the value we claim to place on data protection and privacy — at least when it comes to consent.
A final warning appeared. Should he opt out of receiving the System’s services, he would not be able to rejoin the network at a later date. If he did not accept the terms of service, he would be an exile for life. (Chapter 7)
In the novel, Jack begins to notice glitches in the matrix (particularly in a Wikipedia-like encyclopedia known as Knowledge), and eventually realizes that the System, and all the underlying systems and institutions that make up his world, are lies. His quest then shifts to finding a way to permanently opt out.
But in the book, as in life, opting out has its own costs for Jack — physical, psychological & social. And opting back in is nearly impossible.
In so many ways, our notice & choice model is as fictive as the choice architecture in Optimal.
That’s because the ICM doesn’t work, and the “control” we think we have is mostly a happy lie we tell ourselves to feel empowered against what is fundamentally a collective action problem we can’t individually fix.
My goal here isn't to shame people for operating within what feels increasingly like our own version of the System. I don't want to point fingers at my Oasis-loving colleagues or tell everyone to drop social media.
For me, the takeaway after reading Optimal was the recognition that we need to start planning for an eventuality where we admit that the individual control model cannot adapt to our immersed lives.
First, humans were inherently chaotic and could not be relied upon to act in their own rational self-interest. Any solution had to solve for unpredictability. Second, that unpredictability became exponentially more unmanageable when deployed at a scale of billions of people. So, any solution had to solve for scale. (Chapter 20)
In Kafka in the Age of AI, Professors Solove and Hartzog observe that people aren’t passive victims but ‘willingly participate in their peril’ (p. 1032). Surveillance isn’t always foisted on us against our will — most of us eagerly sign up for it in one form or another. They argue that the replacement for the ineffective ICM is a move towards a Societal Structure Model (p. 1026), where governments set limits on organizational power by stopping the collectors and users of data from coercing, manipulating & exploiting individuals.
They call out laws like the EU Digital Services Act, and perhaps the AI Act, as examples headed in this direction.
In truth, I’m not entirely sure that the Societal Structure Model would be any more successful than the ICM. For example, it relies heavily on enforcement and consistent application, and regulators can’t manage that in the current regime. I’m not convinced that would suddenly change given the scale of the problem.
However, I think it’s imperative that we begin moving away from the notice & choice framework we rely on today and start thinking about how we’re going to maintain privacy and rights to our personal data as we continue to slide towards an ever-more Optimal world.
For paying subscribers, I’ve included a link to the PowerPoint slides, plus a very adorable kitten. If you want access, consider upgrading your subscription and supporting my work.
As always, leave a comment if you liked this, share it with your friends, or consider subscribing to Chance Conversations and Conor Hogan’s Substack if you haven’t already.