Originally, I had a series called ‘Bits & Bobs’ which got very low readership and was kind of a chore to maintain, so I dropped it. But this week, I’ve been reading a lot of insightful posts from others, so I thought I’d put some of my thoughts here. My blog, my rules, baybee.
First up, on LinkedIn, Robert Bateman, who also writes the estimable Privacy Corner, posted some thoughts on the recent Scarlett Johansson / OpenAI controversy that I hadn’t considered.
Robert reminded me that we’ve seen this all before, nearly 40 years ago. No, not with AI, but with automobiles. Here’s his quick summary:
As you'll know, Johansson is aggrieved that the "Sky" character in GPT-4o's text-to-speech function sounds so "eerily similar" to her, even after she declined an offer from Sam Altman to voice it.
In the late 1980s, Ford had asked [Bette] Midler to sing her 1972 number Do You Want to Dance for an ad. She said no, so Ford hired a Bette Midler impersonator to do it instead.
Robert dug a bit deeper into the case, but in short: Bette Midler, an actress, comedian, and singer with a unique, distinctive voice, sued Ford for appropriating her ‘voice’ by using an impersonator. Midler argued that Ford’s use violated her right of publicity and her personality rights in her voice, which is a tort in certain US states (like California). This tort is loosely grounded in a mish-mosh of property & privacy rights.
While Midler lost at the trial court level, the 9th Circuit reversed, observing that Ford had indeed appropriated (nay, ‘pirated’) Midler’s distinctive voice and identity, and thus infringed her personality rights. In short, Ford had extracted commercial value from Midler by effectively stealing her voice:
Robert adds (bolding is mine):
"What they sought was an attribute of Midler's identity," the court said. "Its value was what the market would have paid for Midler to have sung the commercial in person."
"Why did the defendants ask Midler to sing if her voice was not of value to them?" asked the judge.
Why did Altman ask Johansson to speak if her voice was not of value to him?
Here’s my takeaway: I never realized, 20 years ago when I first read the Midler decision in my Intro to IP Law class, that it would be a portent of things to come. Or that those things would manifest before us, not as advertisements for cars, but as the (attempted) vocal embodiment of what I see as major technological change.
No, I'm not implying that ChatGPT specifically represents that major technological change, because I'm not a silly person. But I do think that our ability to apply machine learning to ever-more-complex tasks & problems, coupled with the popularity and accessibility of Gen AI, does represent a major shift in how we humans exist and interact in the world. Kinda like how electric power and indoor lighting changed reality for so many. And as these things do, most people are likely, at least in the foreseeable future, to associate that change with first movers — like OpenAI and yes, ChatGPT. Just as they associated ‘electricity’ with the lightbulb and Thomas Edison.
For a brief minute, Sam Altman, with all of his hubris, hoped that people might associate this major technological change with ScarJo's stolen voice and identity. Her voice was of value to him because he wanted ScarJo’s voice, the voice of the AI in Her, to be the voice of ChatGPT and, he probably hoped, the voice of artificial general intelligence generally, should that ever come to pass.
And like so many techbros before him, he refused to take no for an answer, and instead sought to invade Johansson’s rights, her autonomy, and her voice. I suspect we’ll be seeing much more of this — but perhaps the Midler decision can be used as a tool to limit at least some of the exploitation.
AI, The Idiot Ant Queen and Actual Threats
And further on the subject of AI…
Today, while trying to cut down my extreme browser tab debt, I came across an interesting post from Collin on AI risk.
He first discusses aspects of AI risk, AI safety & AI ethics, how they diverge, and his particular opinions on each. He also discusses David Chapman’s Better Without AI ebook. All of this is fine and interesting, but it isn’t what I wanted to talk about here.
The real fun is when Collin gets to leaf-cutter ants, their quest to survive, and their symbiotic relationship with a particular type of fungus, which the colony cultivates on leaves gathered by individual ants. It’s a very long explanation, and I encourage you to read it.
Throughout his long discussion of leaf-cutter ants, Collin weaves in some ideas and concepts to chew over — notably the concept of an “organismal individual” (i.e., the individual ant) versus the “colonial individual” (the ant colony as a whole). Leaf-cutter ants can’t really survive as “organismal individuals” — they don’t individually know about the fungus, where to get it, or why it’s important. That’s knowledge of the colonial individual — the colony itself. But no one ant (not even the queen!) imparts this information to the individual ants. They just seem to know (evolution, pheromones, and lots of other sciencey reasons explain this).
Or rather, the colonial individual knows.
And yet, the colonial individual can’t survive without each organismal individual passing along the same pattern, generation after generation, or at least doing the same set of things to find the fungus, bring it the leaves, etc. The colonial individual needs the individual ants to keep surviving.
In many respects, he argues, that’s kind of where we are with AI. The AI safety folks (and, IMHO, the AI accelerationist types like Altman) either fear or hunger for AI self-awareness, aka AGI. This fear is misplaced, he argues:
But imagine having the same fear about ant colonies. What if the ant colony becomes self-aware and starts influencing the world directly? What do you even mean? An ant colony is a strategy to propagate information about an environment forward in time. Influencing the world through the medium of the workers is how it survives. What would it mean to say that it “wakes up”? That it consciously experiences the simultaneous sensory outputs of millions of workers and controls them explicitly? … But we don’t think of social insects as striving desperately to form a single conscious mega-ant. …
The thing to fear is the hijacking of our colonial individuals. These are the things that have grown so quickly in a small handful of centuries, the things that have made us much more powerful than chimpanzees. The analogy with ants is a lot less comforting on this one.
Things like law. And culture. And the sum total of all knowledge that we humans have amassed over centuries, which folks like Altman and OpenAI are exploiting to feed their massive algorithms. So far, the results are nominally useful, or at least novel. But what happens if what we get back from the algorithm increasingly becomes nothing but an “endlessly remixed slurry of stuff we liked before” and we forget how to discover novelties as organismal individuals? What happens if our laws and policies stop being generated by us?1 What happens if we drift closer to the individual leaf-cutter ant — unable to really function individually?
We might survive — after all, the ants took that bet and they’ve been successful for millions of years. But do we humans want to go in that direction?
When I originally started reading this, I thought he was going to go in a very different direction and confirm a long-held prior of mine (about Hanlon’s Razor and institutional motivations). He momentarily did, but I’m glad I read through to the end. It makes for a much more interesting and thought-provoking piece.
I think the takeaway I want to leave everyone with is that we all (myself included) need to be more mindful about what’s valuable — what matters to each of us as individuals, small groups, institutions, and society at large. But we should also be mindful of the long-term effects of this rapid change to the colony, and in particular, how we individuals influence and help evolve that colony with our own individual discoveries, curiosity, and endless drive for novelty.
AI seems to be moving extremely fast, and in our rush to protect ourselves and rein things in, I wonder if we are actually prioritizing the right things, or looking at the right risks. I worry that we might end up with systems and institutions that regurgitate “endlessly remixed slurries” of stuff we liked in the past, and that we will forget what useful things we’ve learned as individuals, how to learn new things, and the value of adaptation and evolution.
Finally — here’s a picture of our kitty Maximilian Flooferton III, wearing a bowtie:
1. It’s already happening: Arizona state lawmaker used ChatGPT to write part of law on deepfakes (The Guardian)