3 Comments
Feb 18 · Liked by Carey Lening

Yes, I think we can agree broadly. From the recommendations perspective, however, I view content and people recommendations as different from ad targeting, though both use similar tech below the surface. The problem with the ad-targeting version is that the advertiser gets a say in the matter. With content or user recommendations, there is a net benefit to all parties without needing the content owner's or the users' input. I guess IMO the system gets corrupted when an economic actor can buy their way to getting their interests met at the expense of the users' interest in seeing the ads or in having the inferred information made available to others.

Making the “Do not sell my personal information” opt-out available to users is a good start, but it shouldn’t be opt-out to begin with; it should be opt-in. I know, I know, a girl can dream, can’t she? 🤣

All this to say that I believe we’re in broad agreement and it was nice to read the finer details and legal references you included in this piece.

Feb 18 · Liked by Carey Lening

Thanks for this analysis. It really helps tease out the various issues. As someone who has been on the Internet since the dawn of its commercialization, I have found it interesting to watch it evolve.

The framing you appear to bring to the piece starts from the premise that targeted advertising is a legitimate business model and that we just need to work out whether there are practical ways of limiting what can be targeted. That will only get harder over time as new technologies evolve. I view this a bit differently. Advertising on the Web started out untargeted. While some had a problem with it on the grounds that it polluted pages, it was understood, since that model already existed in newspapers, radio and TV. Before long, targeting on the basis of content became the norm, helped by the fairly static nature of the web at the time. This was generally the form targeting took on content sites, the assumption being that a user interested in a piece of content might appreciate related ads.

Behavioral, collaborative-filtering, or user-centric targeted ads are basically what we live with today, but they’re not what people signed up for when joining online communities. If, before they signed up, there was a big banner that said, “By signing up you allow us to target ads and sell data about you on the basis of inferences we make about you, your social conversations, or anything you share. You are giving us permission to share this information with advertisers, data brokers, law enforcement and other government institutions, or anyone else we deem appropriately interested in this information,” I suspect it would force people to think about it more. The notion that once users are inside these closed communities, targeting changes are made and additional rights are extracted from them as the tech evolves, is somewhat despicable. It’s hard for most people to parse the occasional “Read the changes to the Privacy Policy” notice that sites make available, given the thick legalese. Some sites do make an effort to write a more accessible version, but in the context of use, it’s too hard for the user to appreciate the impact of what they are implicitly agreeing to.

I think Schrems has called out the status quo. He has gone where no one else dared: challenging regulators, legislators, and the largest tech companies on behalf of the average person. It’s hard and dirty work, but as the inconsistencies between what users expect and what companies are doing become more pronounced, he seems to be the only true watchdog, since regulatory bodies had mostly dropped the ball (until he arrived on the scene).

Being online should not result in the abdication of our rights to privacy, and it didn’t use to. While we can appreciate that tech’s evolution has enabled capabilities no one was prepared for in terms of targeting users or using data about them, on the other side of that, regulators are doing everything possible to attack the tools users have for protecting themselves (i.e., encrypted communications, VPNs, anonymous handles, etc.). So what are we to make of all this? Should we sit back and let these companies, with legislators in their pockets or under lazy regulators, continue to enjoy the privileges that come from the arbitrage between the value of untargeted ads and inference-laden targeted ads, as well as secondary and tertiary data uses, at our expense? Ultimately, there’s no good reason for ad targeting to exist other than to increase the rents websites can charge advertisers. There’s no user value (as much as the sites try to claim otherwise).

As I view Substack’s model, I see one where the quality of participants willing to pay for content matters far more than amassing a high quantity of users who don’t really appreciate it. Where a content provider wants to offer a free-access option, that too is great, as they do so with purpose: to build a reputation and a base who may turn into paid subscribers at some point. None of this requires taking advantage of readers by creepily surveilling them and selling info on their comments or on the series of articles they have read, nor does it require being forced to write misleading headlines.

OK, enough soapbox talk. Your piece just stirred a lot of thoughts for me, and I wanted to share some additional perspectives. Thanks again.
