Meta's Wakeup Call, and Big Tech's New Reality?
Unpacking the Far-Reaching Impact of CJEU's Landmark Ruling in Meta Platforms v Bundeskartellamt
Yesterday, Meta woke up to a very unpleasant Independence Day gift — another punishing decision from the EU — this time from the Court of Justice (CJEU). The case, Meta Platforms Inc., et al., v. Bundeskartellamt, C-252/21, much like the Irish Data Protection Commission (DPC) decision released in May, addresses some of Meta’s core privacy and data protection failures, but with a new twist: this case was brought by the German Competition Authority, not the DPC. Oh, and it’s also not appealable.
It’s important to note that the Court of Justice’s decision doesn’t just ruin Meta’s holiday. For companies like Meta that build their profits off of our data, this decision will force a reckoning: the days of relying on a suite of legal justifications like performance of a contract or legitimate business interests to process personal data may soon be over. All hail consent as the lawful basis (!?!)1
The CJEU’s decision also broadens the scope of regulatory action (especially, but not exclusively, against US Big Tech) by granting European competition authorities the power to investigate data protection violations that orbit competition, market manipulation, and consumer protection matters. Sorry guys, Ireland can’t save you now.
Finally, the fallout from this decision means that we all need to reassess the processing of combined and inferred data (i.e., data not provided directly by the individual or a third party), especially when inferences touch on sensitive details about a person, like their race, religion, health, or sexual orientation. I believe that this case has far-reaching implications, not just for Meta, but also for the entire tech industry, potentially heralding a radical shift in data collection and processing practices, access to services, concepts around consent, and more.
Caveat & Shameless plug: I hate reading CJEU cases. They are dry, overly verbose, and in this case, not in English. The decision is in French, I do not speak French, and I am relying on automatic translation from Google Translate, so it’s possible that I might get some bit of language or nuance wrong. If I am, I’ll update my conclusions accordingly.
Also, I wrote quite a bit on the May DPC decision.
Part Une: A Brief Summary of How We Got Here
In 2016, the German Federal Cartel Office (Bundeskartellamt), Germany’s competition authority, initiated proceedings against Meta Platforms, Meta Platforms Ireland and Facebook Deutschland, alleging, amongst other things, that Meta abused its dominant market position in online social networking by excessively processing German user data. In particular, the Bundeskartellamt alleged that Meta relied on an invalid lawful basis (‘contract’) for processing that data on the Facebook platform, and on an invalid conclusion that data subjects had ‘manifestly made public’ sensitive or special categories data2 obtained outside of the platform via websites that include social sharing or other Facebook integrations (so-called Off-Facebook data). By linking that information back to a person’s Facebook account, the Bundeskartellamt argued, Meta was processing special categories data.
In February 2019, the Bundeskartellamt determined that Meta had violated German law on these grounds and issued an order prohibiting the social network from using data collected about German users, especially Off-Facebook data, and requiring the company to change its lawful basis for processing data from contract to consent. Meta appealed the decision to a German court, which stayed the order and referred the case to the Court of Justice. In its referral, the court asked a few procedural questions about the competence of a competition authority to make determinations on data protection issues, and three questions I’ll dive into:
Does a user ‘manifestly make public’ special category data collected by Meta merely by visiting a website or application where sensitive data is likely to be collected directly or inferred, e.g., by liking a page relating to a racial or ethnic group, sharing a link on a gay dating site or political campaign, or buying a product or service?
Can Meta rely on the lawful basis of performance of a contract for processing personal data to target ads — i.e., is the ‘personalization of content and advertising’, analytics, research, and the like, essential for Meta to fulfill its contract of service with its users?
If not, can Meta rely on another lawful basis for processing, for example legitimate interests?
On the first question, the Court of Justice noted that Meta’s ability to link special categories data (based on Off-Facebook activity) to a user’s Facebook account “may, in certain cases, reveal such information” about a user, without the user directly providing it to Meta/Facebook (Para. 72). Therefore, this must be considered processing of special category data by Meta. Given that, there was no way to conclude that actions the user took outside of Facebook (e.g., purchasing a product, or reading, liking, or sharing a page) meant that they intended, ‘explicitly and by a clear positive act’, to make that data public. Nor are they explicitly consenting to share that information with Meta (Paras. 77-79).
On the second and third questions, boy did this get messy. First, the Court determined that Meta’s collection of data to target ads, perform analytics, assess user engagement and the like is not essential to the performance of a contract with its users. And since it’s not essential (Meta can still provide a service to the user without creepy tracking), it’s not part of the contract. Second, Meta’s “dominant position” in the social media market is “likely to create a manifest imbalance … between the data subject and the controller … favoring in particular the imposition of conditions [in the contract] which [are not] strictly necessary” to provide services (Para. 149).
That makes sense: Personalization of content isn’t necessary, and less privacy-invasive alternatives (e.g., subscriptions or contextual ads) could be offered instead. The same holds for any Off-Facebook information provided to Meta by third-party sites. Nor is there likely any other basis that Meta can rely on, bar meaningful, demonstrable consent. It probably can’t rely on legitimate interests for most of its processing activities either, largely for the same reasons identified above.3 On this, the Court noted: “[I]t must be considered that the interests and fundamental rights of such a user prevail over the interest of [Meta] in such personalization” (Para. 117). It also means that targeted advertising is very likely to move to a consent-based model, which means potentially more user control, with some pretty substantial tradeoffs.
Still, this makes sense: Meta can offer its services without processing our personal data (e.g., by providing context ads on search, or a subscription). It can also obtain consent when it comes to processing Off-Facebook data. That it doesn’t want to do this (because most users will opt out, and Meta won’t make gobs of money) is not an excuse.
Part Deux: Meta Is Just the Beginning
As I mentioned in the Splinternet article, Meta is awful, and because they (and many Big Tech firms) spent a decade fucking around with privacy, we are now, finally, at the “finding out” stage. Still, this decision won’t end with Meta, or even Big Tech. Fundamentally, the entire edifice of targeted ads, data mining, market strategies, and even AI and machine learning models will need serious reconsideration. And it doesn’t stop with data collected directly from users — it also extends to data that can be inferred about them. Inference adds more complexity to the mix, because it’s really easy to infer characteristics about people. Sensitive, deeply personal, economically and legally consequential things. Intentionally or not.
More importantly, the Court’s decision in this case gives competition authorities in Europe broader powers to pursue data protection violations. I predict a huge uptick in backdoor data protection cases brought against (primarily US) tech companies, laser-focused on data-extractive practices that are also ‘not essential’ to the performance of a contract with a user, or that wouldn’t meet the standard for legitimate interests.
Now, this might force positive change from Big Tech. After all, nothing forces a company to change quite as fast as being told they can’t keep operating as a business anymore, or that they’ll be repeatedly fined by nearly 30 different regulators. It might also lead to more development and adoption of privacy-enhancing technologies and changes to existing business models that don’t involve siphoning data. But it might also lead to a Big “Techxodus” out of Europe or even more privacy theatre.
Another interesting wrinkle in the decision concerns the intermingling of special categories and other types of personal data, but before I dig into that, we need to discuss the concept of ‘explicit consent’.
In general, the GDPR prohibits the processing of special categories and criminal data unless there’s an exception (Articles 9 & 10). Those exceptions are fairly narrow, and most are inapplicable to the types of businesses that engage in surveillance capitalism (or honestly, most capitalism). There’s no provision for contract or legitimate interests, for example. For data extractive companies, especially after this decision, there’s really only one option: explicit consent of the data subject. Explicit consent requires even more work than regular consent, notably an “express statement of consent”, and companies are loath to rely on it because a person can revoke their consent at any time, for almost any reason (with a few country-specific exemptions).
Today’s decision adds even more complexity because now controllers and processors probably need to assume that data they collect, process, share and the like might also include special categories data:
However, it should be specified that, in the event that a set of data comprising both sensitive and non-sensitive data is the subject of such operations and is in particular collected in bulk without the data being able to be dissociated from each other at the time of this collection, the processing of this set of data must be considered as being prohibited, within the meaning of Article 9, paragraph 1, of the GDPR since it includes at least one sensitive data … (Para. 89, emphasis added)
Remember what I mentioned about inferred data and how easy it is to produce? Well, this decision, if read in full, means controllers and processors of data need to take a very hard look at the data they keep, and importantly, the data they infer about users, even unintentionally.
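To make that concrete, here’s a minimal sketch (in Python, with entirely hypothetical field names, nothing drawn from the decision itself) of the kind of audit check the Court’s bundling logic seems to demand: if even one field in a record set can reveal a special category, and the set can’t be dissociated at collection time, the whole set has to be treated as prohibited absent an Article 9 exception like explicit consent.

```python
# Hypothetical audit check inspired by Para. 89: if a bundled record
# set contains even one special-category field that cannot be
# dissociated at collection time, treat the entire set as prohibited
# absent an Article 9 exception (e.g., explicit consent).

# Illustrative, non-exhaustive subset of GDPR Article 9 categories.
SPECIAL_CATEGORY_FIELDS = {
    "racial_or_ethnic_origin",
    "political_opinions",
    "religious_beliefs",
    "health_data",
    "sexual_orientation",
}

def requires_article_9_basis(record: dict, dissociable: bool) -> bool:
    """True if this bundle must be treated as special-category data
    under the Court's reading of Article 9(1)."""
    has_sensitive = any(field in SPECIAL_CATEGORY_FIELDS for field in record)
    # If sensitive fields can be cleanly separated out at collection
    # time, only those fields need an Article 9 basis; if not, the
    # whole bundle is tainted.
    return has_sensitive and not dissociable

# Example: an ad-profile row mixing mundane and sensitive attributes.
profile = {
    "user_id": "u-123",
    "pages_liked": ["pizza", "hiking"],
    "sexual_orientation": "inferred",  # inferred, never user-provided
}
print(requires_article_9_basis(profile, dissociable=False))  # True
```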
Part Trois: Example Time!
There’s a lot to unpack here, and Husbot always reminds me that sometimes this is better done with examples, so here we go.
Sexual Orientation: Say I’m Sundar Pichai, CEO of Alphabet. I decide tomorrow that Google will go back to the olden days of context ads everywhere. No more cookies, no more trackers. I institute context ads on all products, stop collecting most information from users, and revert to the pre-DoubleClick days of yore.
You’re a happy user of Google Maps. Google Maps still has context-based ads, but it has disabled location history and many of the personalization features — so if you do a search for pizza places in your area, you might see Pizza Hut prominently displayed, but you won’t get a match rating anymore. But let’s say you’re single and gay, and want to find a good dating spot. You type ‘Gay bar’ into Google Maps, and “Daddy’s Leather Bar” is highlighted. If you’re logged in, Google is directly or indirectly making an inference about you in order to show you that ad. Somewhere, in a column in some database, Google might even be recording the fact that you searched for gay bars. Someone looking at that database could infer that you’re gay.
Now I (as Sundar) am in a world of pain, because I’ll probably need to obtain your explicit consent for this processing, even if I have no plans to process this information for other purposes. The fact that it’s collected is enough. Or alternatively, I can’t process it at all. That might involve some sort of complicated engineering under the hood to mask or not store ‘sensitive’ searches (e.g., gay bars, abortion clinics, hospitals, therapists, churches, etc.). It might mean providing less relevant information to you. Or maybe some new privacy-enhancing tech is developed. I don’t know.
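For illustration only, here’s one crude way that under-the-hood masking might look. This is a toy Python sketch with made-up keyword lists and function names, not anything Google actually does; a real system would have to handle languages, synonyms, and typos far more robustly.

```python
# Toy sketch of masking 'sensitive' search queries before they are
# logged, so the stored history can't reveal special-category data.
# The keyword-to-category map is illustrative and wildly incomplete.

SENSITIVE_TERMS = {
    "gay bar": "venue",
    "abortion clinic": "health_facility",
    "therapist": "health_facility",
    "church": "place_of_worship",
}

def log_safe_query(raw_query: str) -> str:
    """Replace a sensitive query with a generic category label before
    it is written to any log or analytics table."""
    normalized = raw_query.lower().strip()
    for term, category in SENSITIVE_TERMS.items():
        if term in normalized:
            return f"<masked:{category}>"
    return normalized

print(log_safe_query("Gay bar near me"))  # <masked:venue>
print(log_safe_query("pizza places"))     # pizza places
```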
Religion: Or let’s take a different example: I run a Christian newsletter on Substack. I don’t intentionally collect information about my users, but Substack does provide me with indicators when someone views, likes, or shares my articles. You like and share a number of my articles, which discuss core themes of Christian faith. I now have two pieces of information about you: your email (or account name, if you have a Substack account) and an inference that you might be a Christian (or at least religiously-minded or curious). I now have a dilemma — do I need to obtain your explicit consent every time you read an article? Should I leave Substack and roll my own newsletter merely to avoid having to obtain explicit consent for data I don’t want in the first place? If all I care about is understanding which articles my readers like, does my intent or purpose matter?
Racial or Ethnic Origin: A final one, before I give up and resort to drinking: I’m the head of product responsible for Netflix’s recommendation engine. My team has just released a great new AI model that, for once, will actually recommend good shows that Netflix actually has. You are a binge watcher of K-Dramas. While I’ve instructed the team to focus only on good movie choices, and not to infer other details about you, the ML model still infers. Somewhere, a row with your details includes the fact that you like K-Dramas and might be Korean. Or maybe an engineer could query the model to find out what types of users like K-Dramas and observe that you’re Korean.
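To see how easily this leaks, consider a toy sketch (entirely synthetic data, hypothetical field names): no one ever stores an ‘ethnicity’ column, yet a one-line frequency query over viewing behavior reconstructs a fairly confident guess at one.

```python
# Toy demonstration (synthetic data) of a 'harmless' preference
# feature acting as a proxy for a protected trait.
from collections import Counter

# Synthetic rows: (likes_kdramas, region). In a real system the region
# column might not exist explicitly, but correlated signals (language
# settings, catalog, viewing times) usually do.
rows = [
    (True, "KR"), (True, "KR"), (True, "KR"), (True, "US"),
    (False, "US"), (False, "US"), (False, "KR"), (False, "US"),
]

fans = [region for likes, region in rows if likes]
share_kr = Counter(fans)["KR"] / len(fans)
print(f"P(region=KR | likes K-dramas) = {share_kr:.0%}")  # 75%
# One binary feature yields a 75% posterior: the preference column is
# already an inference about ethnic origin, whether anyone meant it or not.
```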
These are just a few examples. I could, given a bit more time, booze and thought, come up with probably a dozen more cases of benign, arguably beneficial uses that may require us all to radically shift our expectations, both as processors, and consumers of data. Fractal complexity is real, and the Court’s decision just exposes yet another dimension to consider.
I’m curious to hear your thoughts and observations. Am I being a Cassandra here? Am I overly pessimistic? What implications am I missing? I would love to hear from you.
1. ChatGPT reminded me that ‘performance of a contract’ and ‘legitimate business interests’ may not be obvious to everyone. Essentially, the General Data Protection Regulation spells out a handful of ‘lawful bases’ — a glorified term for ‘having a legitimate reason to process data’ — that a controller (someone who does something with your data) can rely on. Article 6 lists six: consent of the data subject/individual; processing that’s necessary to carry out a contract where the data subject is a party (e.g., providing a good or service to users); processing to comply with legal obligations of the controller; processing to protect the vital interests of a person or another; processing for tasks carried out in the public interest; and processing necessary for a legitimate interest of the controller or a third party like a processor. The last category involves balancing the rights of the controller or processor of information against those of the individual whose data is being processed.
2. Article 9(1) of the GDPR spells out ‘special category data’ as information including a person’s racial or ethnic origin, political opinions, and religious beliefs, as well as data concerning their health, sex life or sexual orientation. These categories of information tend to carry a higher risk of impact to a person’s ‘fundamental rights and freedoms’ should the data be abused, leaked, or otherwise used in a way that doesn’t ensure adequate protections, and therefore a heightened standard is applied to processing this type of data. Article 10 does the same thing with regard to criminal records. In general, the exceptions for processing this data are even narrower than for regular ‘personal data’ and tend to be very targeted to specific processing activities, most of which aren’t broadly applicable to commercial entities. The one exception outside of express consent was if the information was ‘manifestly made public’ by the data subject — which is to say, realistically accessible to members of the general public. Publication alone isn’t enough.
Here’s a simple example: If I am a member of the Pirate Party, and I advertise my political affiliation on my profile on Facebook, which I only share with friends, I haven’t ‘manifestly made public’ my affiliation. However, if I am the leader of the Pirate Party, and run for a seat in the European Parliament, that fact would be manifestly made public.
3. I say ‘probably’ here because the CJEU has seemingly punted a final determination on legitimate interests and other lawful bases to the referring court.