ICYMI — Data Portability, Inference & Bottle Babies
A slightly revised version of my data portability article was featured on Techdirt, which is a huge honor. Mike Masnick is a mensch, and even gave me some good food for thought on how to further flesh out my arguments. The Techdirt piece is a much, much better version and you should read it.
More recently, I wrote a piece on what happens when everything about us becomes inferable, and how changes to, and shifting interpretations of, the law + advances in AI, data analytics, and other tech may lead to a deeply uncomfortable future. I sincerely worry that as legislatures, courts, and regulators take an increasingly absolutist view on these matters, the natural outcome won't be less data collection -- just more user fatigue. This preprint study by Nair et al. is also worth a read. And here’s my post, if you missed it.
I wrote more from the heart in my piece on fostering kittens and the particular struggles and stresses that come from managing bottle babies. This article was probably more a message for future-me than any of you lot, but it does feature many adorable kittens.
PS: We still have two of the five kittens in our care, and they remain impossibly adorable and adoptable! Ping me if you're interested.
Deep Dives — Intriguing Insights and Provocative Perspectives (> 20 minutes)
How to build a better search engine than Google - The Verge - David Pierce: Not exactly AI-related, but Pierce's piece exploring a recent attempt to unseat Google's search dominance is worth a read. Despite having built an arguably better search engine, the company (made up of ex-Googlers!) folded after only four years. So what happened? Well, Pierce and the development team behind the search engine (Neeva) suggest that Google's marketplace dominance + its strategic distribution and integration decisions + its intentional friction-in-design made it impossible for an upstart to succeed.
Google’s real advantage is its other products. Android is the most popular mobile operating system on Earth, commanding about 78 percent market share. Chrome is the most popular browser, at about 62 percent. Google is the near-impenetrable default search engine on both platforms.
“People forget that Google’s success was not a result of only having a better product. There were an incredible number of shrewd distribution decisions made to make that happen.”
The Rot Economy - Ed Zitron's Where's Your Ed At: I shamefully appear to have missed this article from February 2023, but I'm glad Ryan Broderick linked to it, because it explores a complementary and related theme to the problem of AI-driven rot -- growth-driven rot.
At the center of everything I’ve written for the last few months (if not the last few years), sits a cancerous problem with the fabric of how capital is deployed in modern business. Public and private investors, along with the markets themselves, have become entirely decoupled from the concept of what “good” business truly is, focusing on one metric — one truly noxious metric — over all else: growth.
“Growth” in this case is not necessarily about being “bigger” or “better,” it is simply “more.” It means that the company is generating more revenue, higher valuations, gaining more market share, and then finding more ways to generate these things. ...
Success in the marketplace isn't driven by quality, or good ideas, or sustainable growth, or even happy customers. In our current reality — this rot economy — it's all about "shareholder value" and growth for growth's sake.
This is why things like web3, "crypto", and the Metaverse persist, and why the term "zombie companies" exists as a thing at all. And to tie this back to AI, the rot economy is also why companies will continue to replace people with machines, regardless of how bad their outputs are or how negative the press coverage is. Ed makes an astute observation that the failure to punish crappy, growth-at-all-costs business models means there are few mechanisms left to weed out the genuinely bad shit.
Incidentally, I suspect the rot economy is why we are also currently in the regulation-for-regulation's-sake mindset. We're all drowning in the consequences and desperate to create balance in the universe. Since the invisible hand of the market isn't doing its job, we've collectively moved on to hoping that regulators might.
Quick Reads — Snapshots of Intrigues and Curiosities (5-20 minutes)
When human knowledge becomes feedstock - by Luiza Jarovsky: Luiza, who writes a fair bit on privacy and its intersections with user experience (UX), shared the new-to-me concept of "Friction-in-Design". She pivoted off of a recent article published in the Yale Law Journal by Profs. Brett Frischmann & Susan Benesch, “Friction-in-Design Regulation as 21st Century Time, Place and Manner Restriction,” which explores why adding user friction to systems encourages us all to be a bit more aware and conscious of exactly what we (and technology) do online. For example:
Friction in the digital networked environment can come in many forms. It can be as simple as a time delay prior to publishing a social media post, a notice that provides salient information coupled with a nudge toward actual deliberation, or a query that tests comprehension about important consequences that flow from an action — for example, when clicking a virtual button manifests consent to share information with strangers.
This might sound a little ... nanny-stateish at first, but it makes sense if you analogize it to common friction-based systems we already interface with — e.g., confirmation dialogs before you delete a file or an account, the challenging yet thrilling act of building IKEA furniture, the difficulty of changing your default search engine, or passing a driving test. Luiza also has a few thoughts on how the sum of human knowledge is becoming little more than feedstock for AI systems.
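Since the mechanisms quoted above are things you could actually build, here's a minimal sketch of what friction-in-design might look like in code. To be clear, this is my own illustration, not anything from the Frischmann & Benesch paper; the function names, delay length, and wording are all made up.

```python
import time

# Toy friction-in-design sketch (my own illustration, not from the paper):
# two of the mechanisms described above -- a time delay before publishing,
# and a comprehension check about the consequences of the action.

DELAY_SECONDS = 10  # hypothetical cooling-off period; the paper doesn't prescribe one


def comprehension_check() -> bool:
    """Query that tests whether the user understands what publishing does."""
    answer = input("This post will be visible to strangers. Type 'public' to confirm: ")
    return answer.strip().lower() == "public"


def publish_with_friction(post: str) -> None:
    print(f"Draft: {post!r}")
    print(f"Pausing {DELAY_SECONDS} seconds before publishing -- re-read your post.")
    time.sleep(DELAY_SECONDS)   # friction #1: time delay
    if comprehension_check():   # friction #2: comprehension query
        print("Published.")
    else:
        print("Not published.")


if __name__ == "__main__":
    publish_with_friction("my hot take about search engines")
```

The point isn't this specific dialog; it's that a few seconds of deliberate, well-placed annoyance shapes behavior the same way a confirmation prompt does before you delete a file.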
An endless world of boring robo art - by Ryan Broderick: We are now firmly in the "Trough of Disillusionment" stage of the AI Hype Cycle, and there has been a flurry of posts, including this one from Daragh, discussing how LLMs will, ouroboros-like, begin to eat themselves. This isn't surprising, and there's a ton of comical examples demonstrating why many of these articles are on the nose. Still, I liked Ryan's take on this subject, in part because he mused that the end result might be an internet of "automations communicating with other automations."
We might already be there — after all, you can get ChatGPT to write out a lengthy email and then the recipient can have ChatGPT summarize that same email into a few bullet points. But maybe this isn't a bad thing if it encourages us all to communicate more concisely?
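To make the ouroboros point a bit more concrete, here's a toy simulation. It's my own simplification, not anything from Ryan's piece or a real training pipeline: assume each model generation slightly over-produces its most typical outputs and under-samples its tails, then gets refit on its own output. The diversity of what it can say collapses, generation after generation.

```python
import random
import statistics

# Toy "model eats its own output" loop (my own illustration): fit a
# Gaussian to data, sample a new dataset from it, keep only the central
# 90% of samples (a crude stand-in for a model over-producing its mode),
# refit, and repeat. Watch the diversity (stdev) collapse.

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # "human-made" data

for generation in range(8):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    print(f"gen {generation}: stdev = {sigma:.3f}")
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    data = samples[50:-50]  # next generation trains only on these outputs
```

Run it and the standard deviation shrinks every cycle; by gen 7 the "model" can barely say anything it hasn't already said. That's the endless world of boring robo art in a dozen lines.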
Not AI, but definitely hitting on themes I've mentioned on here repeatedly — including weak v. strong link problems, fractal complexity, and short-sighted regulatory objectives — is a piece from Casey Newton: How the Kids Online Safety Act puts us all at risk. Both KOSA in the US and the Online Safety Bill in the UK are regulatory acts that not only fail to solve the problem, but will actively cause harm to the world.
Light-Hearted Escapades (<5 minutes)
Does AI Just Suck? - Freddie deBoer: deBoer reminds us that AI systems like DALL-E, MidJourney, and similar are basically little more than stochastic parrots — they generate convincing language (or images), but fail to understand what it is they're actually being asked to do. The machines are not smart, in other words. As deBoer observes:
This is my experience again and again with these image generators: it’s not just that they’re typically artistically uninspired, failing to meet my subjective aesthetic standards. It’s that they so often fail to understand the most basic concepts of their prompts, demonstrating that there is no coherent internal logic to how they parse those prompts but instead an uninspiring network of proximal relations. ...
This gets back to the point that people often reject angrily, that the human brain is in fact not an association network generator like these systems but rather has certain rule-bound processes genetically encoded into it, most prominently the language instinct.
This issue is compounded by the fact that most people don't understand (or periodically forget) this, and rely on ChatGPT and its ilk for a variety of very important tasks that actually demand rules-bound reasoning, not just association. Until we collectively figure this out, we’ll keep seeing articles recommending that visitors check out the Ottawa Food Bank on their vacation.
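If you want to feel deBoer's "network of proximal relations" point in your bones, here's a deliberately tiny stochastic parrot of my own, vastly simpler than any real LLM: a bigram chain that learns only which word tends to follow which, then generates text from those associations alone.

```python
import random
from collections import defaultdict

# A miniature stochastic parrot (my own toy, nothing like a real LLM):
# learn word -> next-word associations from a tiny corpus, then generate
# by following those proximal relations. Fluent-ish, understands nothing.

corpus = (
    "the machines are not smart the machines generate convincing language "
    "the language sounds smart but the machines do not understand the language"
).split()

# learn which words can follow each word
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(7)
word = "the"
output = [word]
for _ in range(15):
    nexts = follows[word]
    word = random.choice(nexts) if nexts else random.choice(corpus)
    output.append(word)
print(" ".join(output))
```

Every word it emits is locally plausible and globally meaningless, which is roughly the failure mode deBoer is describing, scaled down a few trillion parameters.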
Legal Subreddit Bans All Ex-Twitter Links Due To Safety Risk | Techdirt: I try not to post about the moral decay of Twitter/Xhitter, but I do so enjoy fuck-around-and-find-out examples in the wild. Here is such an example:
Elon Musk has decided to reenable accounts suspended for posting CSAM while at the same time allowing the most basic of CSAM scanning systems to break. And, that’s not even looking at how most of the team who was in charge of fighting CSAM on the site were either laid off or left.
And, that’s made Ex-Twitter a much riskier site in lots of ways, including for advertisers who have bailed. But also for anyone linking to the site. r/law, a popular subreddit about the law, announced last week that it was completely banning links to Twitter for this reason.
Since Mike's post, advertisers have also continued to flee in droves (it turns out, nobody wants their ad for toilet paper or burgers or whatever associated with the CSAM-infested Nazi Bar). Who'da thunk?
PS: I have five (5) Bluesky invites — anyone who wants one, please leave a comment!