3 Comments
Feb 18 · Liked by Carey Lening

Yes, I think we broadly agree. From the recommendations perspective, however, I probably view content and people recommendations as different from ad targeting, though both use similar tech below the surface. The problem with the ad-targeting version is that the advertiser gets a say in the matter. With content or user recommendations, there is a net benefit to all parties without needing input from the content owner or the users. I guess, IMO, the system gets corrupted when an economic actor can buy their way into having their interests met at the expense of the users' interest in seeing the ads, or in having the inferred information made available to others.

The “Do not sell my personal information” opt-out being made available to users is a good start, but it shouldn’t be opt-out to begin with; it should be opt-in. I know, I know, a girl can dream, can’t she? 🤣

All this to say that I believe we’re in broad agreement and it was nice to read the finer details and legal references you included in this piece.

Feb 18 · Liked by Carey Lening

Thanks for this analysis. It really helps tease out various issues. As someone who has been on the Internet since the dawning of its commercialization, it has been interesting to watch it evolve.

The framing you appear to bring to the piece starts from the premise that targeted advertising is a legitimate business model and that we just need to find practical ways of limiting what can be targeted. That will only get harder as new technologies evolve. I view this a bit differently. Advertising on the Web started out untargeted. While some objected to it on the grounds that it polluted pages, it was understood, given that the same model existed in newspapers, radio, and TV. Before long, targeting on the basis of content became the norm, a result of the fairly static nature of the early web. This was generally the form targeting took on content sites, the assumption being that a user interested in a piece of content might appreciate related ads.

Behavioral, collaborative-filtering, or user-centric ad targeting is basically what we live with today, but it’s not what people signed up for when joining online communities. If, before they signed up, there had been a big banner that said, “By signing up, you allow us to target ads and sell data about you on the basis of inferences we make about you, your social conversations, or anything you share. You are giving us permission to share this information with advertisers, data brokers, law enforcement and other government institutions, or anyone else we deem appropriately interested in it,” I suspect it would have forced people to think about it more. The notion that once users are inside these closed communities, targeting changes are made and additional rights are extracted from them as the tech evolves is somewhat despicable. It’s hard for most people to parse the occasional “Read the changes to the Privacy Policy” notice that sites post, given the thick legalese. Some sites do make an effort to write a more accessible version, but in the context of use, it’s too hard for the user to appreciate the impact of what they are implicitly agreeing to.

I think Schrems has called out the status quo. He went where no one else dared: challenging regulators, legislators, and the largest tech companies on behalf of the average person. It’s hard and dirty work, but as the inconsistencies between what users expect and what companies are doing become more pronounced, he seems to be the only true watchdog, since regulatory bodies had mostly dropped the ball until he arrived on the scene.

Being online should not mean abdicating our right to privacy, and it didn’t used to. While we can appreciate that tech’s evolution has enabled targeting and data-use capabilities no one was prepared for, on the other side of that, regulators are doing everything possible to attack the tools users employ to protect themselves (i.e., encrypted communications, VPNs, anonymous handles, etc.). So what are we to make of all this? Should we sit back and let these companies, with legislators in their pockets or under lazy regulators, continue to enjoy the privileges that come from the arbitrage between the value of untargeted ads and inference-laden targeted ads, as well as secondary and tertiary data uses, at our expense? Ultimately, there’s no good reason ad targeting exists other than to increase the rents websites can charge advertisers. There’s no user value, as much as the sites try to claim otherwise.

As I view Substack’s model, I see one where the quality of participants willing to pay for content matters far more than a high quantity of users who don’t really appreciate it. Where a content provider wants to offer a free-access option, that too is great; they do so with purpose, to build a reputation and a base that may turn into paid subscribers at some point. None of this requires taking advantage of readers by creepily surveilling them and selling information about their comments or the series of articles they have read. Or being forced to write misleading headlines.

OK, enough soapbox talk. Your piece stirred a lot of thoughts for me, and I wanted to share some additional perspectives. Thanks again.

author

I'm glad it stirred thoughts, but I do want to make one rejoinder:

I don't actually think targeted ads are legitimate. Far from it -- I've argued in previous posts that contextual ads are a net better option. We shouldn't have to settle for what's being done just because it's happening.

If the decisions in Schrems only affected Facebook and targeted ads, I'd be cheering from the stands like everybody else. But a whole lot of the internet is also built on inference, and this reality will only compound with the growth of AI. There's utility, for example, in recommendation and suggestion services like what Substack offers for newsletters, or what LinkedIn/Twitter/Bsky offer for finding people to follow -- and yet, those are built on inference. I have not told Bluesky explicitly that I am interested in data protection, but it will recommend people who write on those issues because I discuss those concepts in posts. There's also benefit in AI auto-suggestion and assistance tools, and yet, as I pointed out, gen AI is an inference engine. I worry that in the rush to fix the abusive stuff, we might swallow up some of the useful things as well.

As I said above, I think Max is actually right on this issue. But there will be interesting consequences if he's right and the court takes a broad approach.
