Or what might be better titled: Privacy Summer Camp
I just returned from a rather last-minute conference trip – the Privacy Symposium in Venice. In a fit of YOLO, I decided the day before that I should go. It was the right call.
Not that Venice was a hard sell. I mean, it's Venice. It's gorgeous. There's wine. And pasta. And canals. And amazingly stunning architecture, and beautiful Italian clothes. Also, I happened to know that some absolutely brilliant folks were going to be in attendance, and therefore we would be able to drink wine together! Plus, a whole load of the talks actually looked interesting AND substantive.
General Thoughts
I don't attend many conferences, primarily because I don't find networking very enjoyable most of the time, so it was a big deal that I attended both the IAPP AI Governance conference in Brussels (two weeks ago) and the Symposium. It also made for a good point of comparison. Both of these conferences, despite touching on similar themes, were worlds apart. The IAPP event was light on substance and high on networking, observation, and what I like to call 'hallway con' -- sharing gossip and learning through back channels, which is still valuable.
By contrast, the Privacy Symposium was all about feeling the pulse of where we are as an industry, and how privacy, data protection, new technologies, and yes, AI, fit into the legal and regulatory climate. There was a whole lot I actually learned. Some of this was clearly also shared at CPDP (which I missed this year), so in a way, it was like a do-over for missing out in May.
Unsurprisingly, there were many regulators, and just buckets of lawyers in attendance. So many lawyers. So many data protection authorities.
The way I assess a conference's value (outside of y'know, picking up new clients or being paid to speak, obvs) is by the number of pages I fill in a notebook. If I can get two or three pages of new-to-me information, that's an ok conference. Five to ten? That's a pretty good conference.
If I get close to 50 pages, that's a damn good conference. At the Privacy Symposium, I felt like I couldn't write things down fast enough – either at individual sessions, or during small group and networking discussions. Every day was, quite literally, a school day. And I loved it.
With that out of the way, I thought I'd share a few of my takeaways -- what I learned, observed, and identified as major themes in the 20+ sessions I attended over the 5-day event.
Major Themes
Across sessions, if there was one repeated theme I heard, it was regulators, practitioners, and academics all struggling with and trying to make sense of all these new freaking laws, both within Europe and around the world.
There was much talk of ‘collaboration’, ‘convergence’, ‘task forces’, ‘interoperability’, and ‘guidance’, but honestly, it was clear that almost everyone is flying by the seat of their pants right now. I hope that all of this talk will lead to concrete action, and I look forward to reading upcoming guidance that has been promised by various regulatory bodies.
If anyone tells you that they have the solution to implementing the AI Act in practice for perfect compliance, or how to reconcile NIS-2 and the Cyber Resilience Act with the GDPR and the AI Act (while complying with all the other laws around the world), they are absolutely full of it, and probably trying to sell you something.
NIS-2 is likely to hit organizations like a bag of bricks and that should freak more folks out than it does. It certainly applies to more industries than most realize. Acts like the Digital Operational Resilience Act (DORA), by contrast, are far less likely to be impactful at scale, given that DORA only applies to the already heavily-regulated financial services and insurance sectors. Prayer was suggested as a preventative.
Repeatedly, and across multiple sessions, panelists lamented that the GDPR is not fit for purpose in the age of new tech. I was not surprised to hear this (after all, I wrote about the problem recently), but it was sobering to hear it repeated over, and over, and over again.
Consent (explicit or otherwise) as a lawful basis will only become more challenging as we're confronted with new technologies. After all, how do you get meaningful, informed (much less explicit) consent when it comes to always-on IoT, AI training data, inferred data, and wearables or other portable neural technologies (discussed below)?
And where consent is wildly inappropriate, how do you apply a lawful basis like legitimate interests to personal data that includes inferred or even inadvertently-collected/processed special categories data?
Professor Theodore Christakis, who chaired the panel on Innovation Under Regulation, repeatedly asked panel participants whether large language models and other forms of AI could meaningfully meet the accuracy principle of Article 5 GDPR and rectification under Article 16, and briefly touched on the deletion question that's been racking my brain. A few options were suggested, including:
- expansion of the risk-based approach to AI training & processing;
- a lighter-touch, “use-based” approach to accuracy and data subject rights;
- the recognition of ‘incidental collection’ or processing;
- wider application of legitimate interests.
However, nobody touched on what I think is a critical question: whether concepts like accuracy, data minimisation, and even application of data subject rights need a revisit as we try to apply deterministic law to nondeterministic systems (yes, this is a second shameless plug to the same article). I’m pretty sure I’ll write a blog post on this going into more depth.
There were consistent rumblings from regulators around the world that consensus themes would likely emerge -- focus areas, as it were, that regulators will be more likely to prioritize over everything else. I think even they feel overwhelmed trying to understand and contextualize all the laws, with each country's disparate requirements, terminology, and complexity. But you know, I have some thoughts about that.
We All Need to Be More Tech Literate
Based on the conference sessions, questions asked, and chatter at the various lunches and networking events, I was left with the distinct impression that many folks were unaware of how brittle and gap-filled all of our systems are, technically speaking. Remember gang, you can write whatever you want in the privacy statement, ToS, or data processing agreement, but that doesn't stop a hacker in Russia from exfiltrating your data, or an unscrupulous AI company from scraping your content online. We all need to dig deeper and not just assume that encryption and access controls are sufficient. And as far as some regulators are concerned, we also need to be doing that for our sub-processors' sub-processors' sub-processors...
Despite being exceptionally competent and thoughtful when it comes to law, so many people in this space (DPOs, CPOs, regulators, etc.) lack even a rudimentary understanding of what various privacy-enhancing and privacy-protecting technologies (PETs) actually do. Yet many speakers in this camp confidently asserted that we could fix loads of problems in this space by using the blockchain, or anonymisation techniques on LLMs, or by applying synthetic data to situations where it's wildly inappropriate or impractical to do so.
The same is also true with regard to LLMs and AI generally. Most are, as I have explained elsewhere, trying to fit deterministic law into non-deterministic systems, and that is just not going to work. I also repeatedly heard people conflate LLMs with search engines or databases. These are very different things.
Many of the technical talks were under-attended by lawyers, and that made me sad. For example, the Google-run talk on Trusted Execution Environments (TEEs) had like six people in it, and one of those folks was me, which was a real shame! The talk was fantastic and thoughtful, and the panel was very eager to engage and get into the weeds of how TEEs work. I genuinely learned a ton. I can see a number of immediate use-cases for TEEs, including certain biometric applications, processing of special categories data where sharing full data would be fraught, age verification and identity proofs, and others. I will probably be digging into TEEs in more depth in a later post. I really want to see more sessions like this in the future.
Brendan Rowan of BluSpecs shared some fascinating insights on the ‘industrial metaverse’ – human-computer interfaces used to perform tasks like remote repairs, robotic surgery & fixing planes mid-flight (!) using “digital twins”. Unlike the Zuckerbergian Metaverse, there are actually concrete use cases and benefits to humanity. I would have loved to get into the technical weeds on this.
But there are also unique data protection challenges – for example, who is the ‘controller’ of an open/decentralized virtual or augmented metaverse? How do data subject rights get addressed within a decentralized system? What about transparency? Children? Jurisdiction?
We need more regulators, politicians, DPOs, CPOs, and lawyers to get on board and really start to think about, ask questions about, and understand the technologies we're interfacing with daily. Talk with engineers! They're fun! There are so many opportunities here and I worry that not enough of us are keeping up and grokking the fractal complexity we're up against. Yes, regulations and the law are important, but they should be grounded in reality, not wishcasting and unverified assumptions. We should put more investment into training and upskilling folks in the compliance/regulatory space on what technical measures, privacy-enhancing technologies, AI, LLMs and the like can and cannot do. A shameless plug: I'm happy to help.
Take, for example, the EU and US' seemingly endless campaign to implement client-side scanning on devices and destroy end-to-end encryption to protect children. Regulators and politicians legitimately believe you can do this in a meaningful way that doesn't fundamentally destroy encryption for all. Robin Wilton of the Internet Society reminded us that forcing intermediary providers (Signal, WhatsApp, Threema) to break encryption for everyone in order to stop child predators is like suing auto manufacturers for individual bad driving.
Despite my griping, I did meet MANY attorneys, DPOs, and privacy counsels who blew me away with their insight and cross-functional awareness. These are learnable skills, people.
Neural Rights & Expanding Our Thinking on Privacy
Oh my god, there are so many interesting areas I haven’t considered when it comes to ‘stuff in need of regulation/understanding’ – take, for example, space privacy (which just sounds rad), brain and neural data use-cases, ‘wellness tech’, and the aforementioned metaverse.
For example, Stephen Damianos and his team at The Neurorights Foundation tracked 30 different consumer-facing neurotechnology companies, which market tech to individuals for a variety of purposes, from wellness to fun. His team dug into company privacy statements, marketing claims, and other written materials, and discovered that user rights were either non-existent or inconsistently applied, that most companies failed to provide sufficient transparency, and that “consent” was often obtained in a way that would make regulators cry.
I have many thoughts on this subject, and I will probably dig in and write a whole blog post when I get around to it. That said, I was sad that the team did not look at any of the technical aspects of these devices – tracking what was logged/recorded by devices, and importantly, what was sent back and to where. Perhaps a future study is in order.
Professor DeBrae Kennedy-Mayo of Georgia Tech scared half the room when she suggested that various questionable actors would likely be doing the equivalent of functional MRI scanning outside of hospital settings, with handheld or portable scanning devices, without notifying those being scanned. Her suggestion? Body barriers – essentially, we wear functional Faraday bags to block out the bad actors.
Separately, we might need additional rights and legal guardrails for mental privacy, or a recognition of ‘binomial privacy’ concepts. This was a specific topic advanced by Carmen Muñoz del Arco, representing the Spanish DPA and the Neuro Data Protection Network.
What’s Going on In the US?
The good ol' USA was represented well at the Privacy Symposium, and most speakers made a valiant effort trying to hand-wave the US’s sectoral privacy approach (with big, gaping loopholes around who counts as a person and ‘national security’) as being equivalent to the fundamental-rights approach within the EU. That said, I don’t usually ask trollish questions, but I couldn’t always help myself. To their credit though, I was pleased to learn at the US DPF Regulation deep-dive session that:
- Privacy Act case decisions are actually public! (I still need to find the link to this — so if someone has it, LMK.)
- There are far more sectoral federal laws than I realized (you can take a peek here: fpc.gov).
- The DHS Privacy Office can issue subpoenas?!
- The US receives six times more disclosure requests for personal data than it makes.
- The “covered countries” list under the Data Protection and Privacy Agreement with the EU (which grants non-US citizens in certain eligible countries similar-ish privacy protections to US citizens concerning designated federal agencies) includes most of the EU.
- E-Government privacy impact assessments are actually published for many participating federal agencies, though finding them can be a chore.
Mason Clutter of DHS noted that the department is investigating the use of PETs like zero-knowledge proofs (ZKPs) for certain high-sensitivity processing activities. That is honestly inspiring.
Did you know that the EU-US Data Privacy Framework has been live for almost a year now? It's up for review in July. Also, more than one regulator/agency head was hopeful that the DPF would survive, given the substantial changes made to oversight and redress. The initial challenge by French parliamentarian Philippe Latombe seems to have fallen away, but that doesn't stop some from questioning whether the DPF will be sufficient.
Other Bits
Convention 108+ was discussed at quite a few sessions. I don’t know if I heard anything new, other than people were thinking about it more.
Did you know that mass-scale data scraping of websites might be a reportable data breach?! Thanks to the panel on data scraping, welcome to yet another fun scenario you probably didn’t think about. Oh, and if you think robots.txt will protect you… have I got some thoughts.
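(For the non-techies wondering why robots.txt won't save you: it's just a text file on your site politely asking crawlers not to fetch certain paths, and there is no technical enforcement whatsoever. Here's a minimal sketch, using hypothetical URLs, of the difference between a bot that asks and one that simply doesn't bother.)

```python
# Minimal sketch (hypothetical URLs): robots.txt is honoured only by bots that choose to check it.
from urllib import robotparser, request

# A well-behaved crawler reads robots.txt and asks permission before fetching.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
print(rp.can_fetch("PoliteBot", "https://example.com/private/"))  # may well be False

# A scraper that doesn't care simply skips the check and fetches anyway --
# on the wire, nothing distinguishes it from a "permitted" request.
html = request.urlopen("https://example.com/private/").read()
```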
OpenAI still calls itself a ‘research organization’ and says it out loud with a straight face.
Next year, I am going to put together a "buzzword bingo" card and play along. Possibly with wine.
Finally, and most importantly, I met so many inspiring, brilliant people and broadened my LinkedIn network considerably. The networking sessions were genuinely enjoyable, and I want to give a shout-out, in no particular order, to the folks at SPIRIT LEGAL, including Tea Mustać, Tilman Herbrich, and Christian Däuble; the always well-dressed Leila R. Golchehreh and Emerald De Leeuw-Goggin; Commissioners Dale Sunderland, Des Hogan, Nicholas Espinoza, Alex Novikau, Ann-Charlotte Nygard, Ann Staggs, Simon Hania, and so many others, including folks I mentioned above. I'm richer for having met each of you.