Arrivederci, Venice, Hola Barcelona!
A few thoughts on this year's Venice Privacy Symposium, and an upcoming speaking gig in Barcelona!
This is a quick one, but first and most importantly: I will be in Barcelona starting tomorrow and leaving on the 26th. I also have the great privilege of speaking at a joint ESADE/IAPP event in Barcelona on May 21!
The full details are here, but I’ll list the important bits, just in case you don’t feel like going to LinkedIn.
Date: Wednesday, May 21, 2025, from 7:30 p.m. to 9:00 p.m.
Place: Esade Barcelona (Av. Pedralbes 60-62)
Language: Spanish and English
Ticket for non-ESADE members: 40 € (but there will be cava!)
Three lovely members of ESADE will kick things off and introduce us: Josuan Eguiluz Castañeira (MUA ‘22 / Master in New Technologies and Intellectual Property ‘22); Ramon Baradat Marí, lawyer specializing in data protection and new technologies at Cuatrecasas and IAPP Barcelona Chapter Chair; and Rodrigo Quintas Ferrín (DIN ‘08), member of the Esade Alumni Club board.
Topics:
Manel Santilari, lawyer at Clifford Chance and professor in Esade’s Master in Access to the Legal Profession and Master of Specialization in ICT Law, Social Networks and Intellectual Property. He’ll be speaking on artificial intelligence and data protection: a practical approach.
Me (I’ll be speaking on machine unlearning and the challenges of complying with obligations under the GDPR)
Francisco Pérez Bes, of the Presidency of the Spanish Data Protection Agency (AEPD), who will be speaking about the AEPD’s strategy on the tensions between the GDPR and the AI Act.
If you can’t attend the event, but still want to hang out, ping me ([email protected])
Some Quick Thoughts on the Venice Privacy Symposium
Some readers may recall that I wrote about last year’s Venice Privacy Symposium:
In the previous post, I had lots of thoughts, mostly about how I think the regulatory and legal worldviews on the subject of AI and complexity were, shall we say, naïve. At this year’s symposium, I observed that the level of understanding seems to have improved, or at least, no one made me want to stab them due to their lack of clue. Progress!
There were definitely some highlight sessions. For example:
I really found the talk on Privacy in Space to be informative — there are a lot of unique jurisdictional and data-sharing challenges when it comes to all that data floating around in low Earth orbit. I expect that this topic will only heat up as satellite technology improves and more tech oligarchs and nation states put even more competing satellites into orbit. I have some forecasting thoughts on that, which I’ll probably share in another post.
The discussion on Connected Vehicles was interesting in that it touched on how consent requirements, and the ePrivacy Directive in particular, create challenges for automakers looking to design ever-more-invasive vehicles.
I was … rather unsympathetic to most of the representatives speaking on behalf of the automakers here, partly because 90% of what they were complaining about could be rounded down to ‘we want data, don’t want to really tell you why, and we’re annoyed that laws make it hard.’ Still, one use case reminded me that there are some benefits to tracking technologies: theft detection & deterrence. Unfortunately, the ePrivacy Directive in particular fails to account for those uses.

The panel on Privacy in Time and Space, which sadly did not include time travel or mentions of the TARDIS, offered an interesting window into how privacy has been conceptualized historically and culturally. Did you know, for example, that the concept of privacy rights goes back at least to the Romans? Well, for landowning men, anyway. Or that indigenous tribes in America have their own tribal legal systems, including laws governing privacy?
The discussion on how Privacy Saves Lives was powerful — the panelists crystallized exactly how the loss and abuse of data and physical privacy can damage individuals’ dignity and autonomy, and what that means when you’re trying to save lives on the ground. In 2014, former head of the NSA Gen. Michael Hayden famously shocked everyone when he said ‘we kill people based on metadata,’ but that isn’t the only way data leaks, data breaches, and bad privacy compliance can impact people.
For example, one of the speakers mentioned how during humanitarian crises dozens of random NGOs will come in, survey an affected community, demand loads of personal data and then just … leave. While I’m sure many of these NGOs use this data to do good work for the people they’re trying to help, the lack of transparency and accountability, coupled with repeated demands for data during times of extreme vulnerability & crisis, can take a toll on people and lead to a loss of trust, and feelings of disempowerment.
My bigger takeaway was just how lopsided our focus really is as an industry. We’re arguing over cookies, privacy notices, and whether legitimate interests assessments are completed, while refugees have virtually no physical privacy or bodily autonomy, and are worried that their data might end up in the wrong hands and be used to target and abuse them.

The roundtable on bias and fairness was packed with a stellar list of kickass women: Emerald de Leeuw, Shoshana Rosenberg, Chantelle Brandt Larsen, and Leila Golchehreh, and they all let their brilliance shine on stage. They didn’t stick to the usual terrain of announcing that bias is bad and we need fairness in AI (I mean, duh), but instead dug into some of the nuance, which was a refreshing change.
Finally, I enjoyed the deep-dive session on Privacy Preserving Technologies and AI, even though it ran a bit long. The first half was so informative at a technical level, and I’m sad that the Privacy Symposium folks stuck it on the very last day at 9am, and that only a handful of fellow nerds attended. The panel dug into the weeds not just about how AI/ML models work and learn, but also about privacy risks and vulnerabilities, including model inference and data leakage attacks.
While much of this wasn’t new to me, I think it would have been extremely informative for practitioners and data protection authorities. Instead, we had 10 panels navel-gazing about transfer impact assessments, five panels on conformity assessments, at least three fluffy panels discussing synthetic data badly, and an untold number of panels on DPA consensus building and working together.
My one criticism of the Privacy Symposium is that this year, substance-wise, it felt more like an IAPP event: high on names and titles, low on substance and useful insights, long on repetitive themes, and, worse still, heavy on product promotion. I won’t name names here, but one session was basically a paid ad in session form. There were way too many hour-long panels with seven speakers, and too few sessions digging into the weeds. I’m sorry, guys, but nothing substantive is coming out of a seven-speaker panel, no matter who you invite or how pretty the venue is.
Still, the Privacy Symposium definitely out-ranks any IAPP event because, I mean, just look at these pictures. The venues are quite literally breathtaking.



