How Big Tech Becomes Ungovernable
The Problem With Tech Extensity
In my Ladder to Nowhere series I shared my concerns (and even a plausible scenario) around what a world might look like if a single company like OpenAI managed to integrate its entire AI ecosystem into our lives. It starts slowly of course, through convenience and utility, but ends up as a tool of surveillance, rights erosion, and oppression. As I said:
We’ve built an economy where comprehensive surveillance is not only necessary, but socially accepted. Our data and attention are how the internet survives. We’re also living in a time where Bay Area oligopolies and pay-to-play corruption are what’ve come to define America, and fear of pissing off the deranged toddler in the White House is enough to paralyze sovereign nations.
I can’t stop thinking about this problem. About how we’re all sleepwalking our way up invisible ladders where we cede more of our data, our tools, our choices, and our lives to a handful of powerful, extremely well-connected entities. Not intentionally, or all at once, but gradually, then suddenly, as Hemingway wrote.
AI is part of this, of course. But it isn’t just AI. It’s about tools and infrastructure, and importantly, technological extensity.
Intensity vs. Extensity
First, we need to start with some definitions, which are crucial to getting to the heart of my thesis, namely Intensity vs. Extensity.1
Intensity occurs when a company or product becomes indispensable or necessary based on its quality or uniqueness, or, in the case of a person, their deep mastery or skill in a subject or field. Michelangelo was considered intensive—his mastery across different artistic domains made him a sought-after artisan during the Renaissance.
Famous three-Michelin-star restaurants, like The French Laundry in the US or Noma in Denmark, also have intensity. Their uniqueness explains why it’s nearly impossible to get a reservation unless you plan months in advance. Or take Shohei Ohtani, who has the rare quality of being both a phenomenal pitcher and batter. That gives Ohtani a ton of leverage within the realm of baseball, just as Michelangelo had in the world of art.
The thing with intensive systems is that usually they’re impermanent. Athletes retire. Chefs hang up their whites. Technology improves, and of course, products and services enshittify. So, while intensity may allow a company or person to become temporarily dominant and powerful within a market, that power is often short-lived. By contrast, I argue that extensity is where the real power is at.2
Extensity describes something broad in size or scope, that becomes deeply entrenched in a system. Unlike intensity, extensity is about spread, not mastery. You become extensive not necessarily by being the best, but by spreading out and becoming indispensable to the system itself. In 48 Laws, Henry Kissinger was cited as being an extensive force in geopolitics, diplomacy, and international relations. He was a fixture across administrations, and remained a power broker long after he left politics. Here’s Greene’s take in Law 11:
Henry Kissinger managed to survive the many bloodlettings that went on in the Nixon White House not because he was the best diplomat Nixon could find—there were other fine negotiators, and not because the two men got along so well: They did not. Nor did they share their beliefs and politics. Kissinger survived because he entrenched himself in so many areas of the political structure that to do away with him would lead to chaos.
Some of you might be thinking to yourself: ‘Hey idiot, none of this is new, we’ve already got a term for this when it comes to businesses: monopoly.’ After all, a monopoly represents complete control or dominance within a market. Horizontal monopolies give organizations the power to set the price of goods or services, dictate what is made available to customers, and create barriers to entry for potential competitors.
But horizontal monopolies, like extensive humans, aren’t necessarily guaranteed. Case in point: throughout most of the 20th Century, AT&T held a near-monopoly in telecommunications, cable television, and related professional services before its court-ordered breakup took effect in 1984. Microsoft was extensive in the browser, office productivity, and operating system markets (they still are, to a lesser degree, with Windows and Office 365), so much so that the US government attempted (and failed) to pull an AT&T Part 2 in the 90s.
Still, when I think about technological extensity, it feels bigger than even a traditional monopoly. For one, I don’t think it necessarily requires that a company reach technical “monopoly” status at all. All that extensity needs is deeply rooted integration within the system in such a way that removal becomes effectively impossible without leaving major gaps behind. When I say “the system” I’m referring not just to software, networks, and infrastructure, or financial institutions and governments, but everything we come to depend on that helps keep society functioning.
This idea first materialized in the financial sector with the bailouts during the 2008 financial crisis. If a bank is “too big to fail,” that’s just a catchier way of saying that bank has become entrenched in the financial system.
We humans rarely learn from our mistakes, and so, we’re starting to see this more and more with Big Tech. Take Google, for example: Google commands 82% of the market in search, 66% of the market in web browsers, and 45% of the market in email, despite loads of competition in each product area.3 They have successfully dodged the monopoly moniker because legitimate competitors do exist.
And yet, people have been lamenting the continual decline of Google Search for years, and regularly complain that Chrome is a bloated, ad-laden, data vampire.4 Most everyone I know has a Gmail account, even if they loudly proclaim that they hate Google. To me, this indicates that we’ve come to rely on these products through a combination of network effects, habituation, and inertia, to the point that they’re part of the internet itself.
I’m also noticing this trend starting to develop at a literal planetary scale when it comes to SpaceX’s reach. SpaceX’s evolution from a cool space company to potential “everything company” for Elon Musk should freak people out way more than it does, and yet, it doesn’t. Guys: SpaceX was responsible for 85% of all space launches in the United States. This one company launched almost twice as many orbital missions as China did in 2025. Starlink (which is part of SpaceX) alone made up 123 of SpaceX’s 165 launches in 2025, and lofted more than 3,000 Starlink satellites into orbit as part of the company’s massive 11,000-satellite mega-constellation. That’s 11,000 satellites out of a total of 15,644 man-made objects in space right now.5
Meanwhile, over the span of what seemed like a long weekend, Musk managed to merge SpaceX with his AI firm xAI with nary a raised eyebrow by regulators. Musk’s other company, Tesla, invested $2bn in xAI in January. This is all part of his larger efforts to put data centers in space and colonies on Mars and to usher in an era of “amazing abundance”.
Now, I can’t predict whether Musk will ultimately be successful, but what his X-empire (xAI, SpaceX, Tesla) may very well succeed at is finding newer, bigger, and bolder ways to make Musk and his companies vital and necessary parts of everything.
This means that one company, nay, one man, with an estimated net worth somewhere in the neighborhood of $690-852bn, has amassed, and continues to amass, enough power, connections, resources, and wealth that he can not only ignore consequences, regulatory or otherwise, but also affect geopolitical outcomes by taking his toys away, or by cajoling governments to cut off funds to programs he doesn’t like or find value in. Don’t take my word for it—ask the Ukrainians whose Starlink access Musk has repeatedly restricted during the war, or the 550,000 children Musk and DOGE may have indirectly killed by defunding USAID.
Too Big to Fail?
Here’s a question: What happens when extensive tools or companies fail? What happens to society if we lose access to Gmail, or Starlink, if AWS or Azure die, or if the AI bubble bursts abruptly? How easy will it be for us to collectively recover now? What if we keep building these tools into more of our lives?
To answer this question, we need to talk about lock-ins.
And no, I’m not talking about the fun kind at pubs in Dublin. I’m talking about vendor & collective lock-ins.
Vendor lock-in is easy to see: So much of our lives (including the entire foundation of this post) are built around using technical tools supplied by a handful of companies to communicate. For many reasons (familiarity, habit, self-interest, marital harmony) I’m primarily a Google user—I use an Android phone, Gmail, Google Calendar, and Google Drive. Many of my clients use Google Workspace. I even use Gemini and Notebook LM (though not exclusively). These tools have crept into my life and I’ve grown incredibly reliant upon them all working together. I’m reliant not because there aren’t options, but because the very act of switching creates friction and, like a diet, is extremely hard to maintain over time.
Last year for example, I tried moving all of my documents over to Proton Drive, because Google Drive isn’t end-to-end encrypted. Plus, I wanted to see if I could. The migration was painful and incomplete. Many files were only accessible in Google. I also had to give up after a few months because I was limited in what I could do in Proton Drive. Want to access a document shared on Drive by someone? Good luck with that—you’ll need a Google account. Trying to save that document on Proton? Fat chance—Proton can’t read (or even store!) .gdoc files. And you can forget about cross-platform collaboration. Some of this was due to Proton Drive being painful to use, but most of it was due to the fact that everybody else uses Google.
And that leads to the second type of lock-in: collective, or identity lock-in. The cost of leaving Google (or Apple, or Meta, etc.) isn’t just inconvenience, it’s also about shattering the identity, friendships, and connections that have evolved around ‘being online’. This is most often cited in relation to social media, but it’s starting to creep up in terms of AI. Resistance is increasingly becoming, to quote the Borg, futile.
And there are social costs. For example, during the pandemic I tried to actively stop using WhatsApp, but found it was essentially impossible in Ireland, because WhatsApp and Facebook had at some point become the de-facto messaging platforms and communications channels in the whole of the country. Partly this is because the state of SMS and MMS sucks, but the root cause is irrelevant. It’s hard to fight Big Tech when you’re isolated in your house during the pandemic and can’t talk to most of your friends.
Our tech tools, and the algorithms that drive them, have helped to define who we are. Platform-mediated reality is creating incompatible epistemic communities and belief systems, which is to say, people are increasingly likely to interpret the same event wildly differently based on where they interact online. We all know that what we read and who we follow is increasingly being decided for us by recommendation engines and opaque algorithms.
But it’s not just that: research reveals striking differences in opinion about major news events based on a user’s platform-of-choice (X, cable TV, Facebook, podcasts, etc.), while charitable giving studies show how fundamentally different priorities across political ideologies have intensified. Americans in particular, increasingly inhabit entirely different informational spheres, which in turn, shape individual identities.
AI, of course, isn’t helping any of this. For example, a recent Syracuse University study found that 27% of users formed deep emotional bonds with OpenAI’s GPT-4o, with some people literally in mourning when OpenAI retired the chatbot last week. This kind of psychological entrenchment leads me to worry that the biggest companies are not only too big to fail, but also that they’re increasingly becoming too big to govern.
Too Big to Govern?
We’ve already seen a hint of this when it came to the TikTok ownership drama. First there was the 14-hour ban in January 2025, which led to such a backlash by users (and politicians who use TikTok) that the Trump administration hit the pause button on a policy choice Trump himself had championed in his first term. And while it’s true that OG TikTok is now effectively dead, users can’t seem to quit the reanimated, right-wing-controlled zombie that replaced it. Here’s CNBC’s take:
Survey data from market intelligence firm Sensor Tower show that, despite a surge in deletions following the announcement of TikTok’s U.S. joint venture on Jan. 23, the average number of TikTok’s daily active users in the U.S. remains around 95% of its usership compared to the week of Jan. 19-25.
SimilarWeb data indicates even fewer defections. According to their January 2026 data, TikTok shed only 0.76% of its US user-base between November 2025 and the end of January 2026.
Now, I’ll concede that losing anywhere between 1 and 5% of active users is still losing, but it’s also indicative of a larger trend: most people are happy to stick around no matter who’s calling the shots. They’ve built at least some part of their identity and habits around TikTok, no matter which shadowy set of billionaires actually runs the show. So, the government might be able to change who “owns” TikTok (though ByteDance still maintains a 20% stake), but they can’t change what TikTok is or break its hold on users. That’s the difference between regulating a monopoly and trying to govern an extensive system.
Oh, and apropos of nothing in particular: https://www.tiktok.com/@realdonaldtrump
To me, this is extensity in action.
Or consider the recent Grok sexual deepfakes saga. As I wrote in January:
Over the last week or so, people discovered (again) that X, and in particular, X’s chatbot Grok, has become a veritable garbage-making factory as it spews out hundreds, if not thousands of images of actresses, regular users, and even children as young as two, in sexualized poses, varying states of undress, and being assaulted or abused. And of course, because it’s X, none of this is consensual, and most of it targets women and girls. This has all been going on since around December 28, according to press reports.
Child sexual abuse material and other non-consensual imagery was freely peddled on the everything platform (which as I noted, at least for CSAM, is illegal nearly everywhere), and almost nothing happened. Sure, there was a lot of press and outrage, and a few brave governments amended their laws or actually banned Grok (but not X). But most western governments mugged for the cameras and otherwise sat on their hands. Some governments over here in Europe pretended to be outraged. For example, France made headlines (overshadowed by the SpaceX/xAI merger, naturally) after authorities raided X’s Paris office in response to X’s delayed response to the Grok-pocrisy. And yet, here’s President Emmanuel Macron posting on X today:
Likewise, here’s the EU Commission President, Ursula von der Leyen, who was “appalled“ by Grok in January, posting on X on February 14:
Maybe von der Leyen and Macron and all the other countless politicians who keep posting on X really do want to regulate in their hearts, but they can’t. X has become so extensive that despite openly peddling CSAM and Nazi apologia, it continues to be the beating heart for political outreach around the world.
Moloch, Agency, and the Race to the Bottom
I recently read a very old, but very good piece on Slate Star Codex (now Astral Codex Ten) by Scott Alexander, which I’d stupidly managed to somehow miss for over a decade. In Meditations on Moloch, Alexander attributes our broken, deeply dysfunctional system to Moloch—the Carthaginian demon god who doubles as the personification of industrialization in Allen Ginsberg’s famous work Howl and Other Poems. Why is the system shit, Scott and Ginsberg both ask? Moloch!
The implicit question is – if everyone hates the current system, who perpetuates it? And Ginsberg answers: “Moloch”. It’s powerful not because it’s correct – nobody literally thinks an ancient Carthaginian demon causes everything – but because thinking of the system as an agent throws into relief the degree to which the system isn’t an agent.
Scott later reminds us that Moloch is essentially us. The agency isn’t in the system itself; it’s built into the systems we create. And even though Scott wrote this in 2014, it’s arguably also the literal AI agents that are increasingly running more of the show.
But this agency, and the modern-day Moloch we’re up against is also embodied in the Big Tech race-to-the-bottom mentality, the willingness to sacrifice values, morals, and accountability, like the Punics sacrificed so many children. It’s in the mindset of taking any risk just to be first, to hell with the consequences, and the willingness of governments, regulators, and people with power to sit by and just let it happen.
Once one agent learns how to become more competitive by sacrificing a common value, all its competitors must also sacrifice that value or be outcompeted and replaced by the less scrupulous.
Now, Scott was referring to agents in the classical sense here: entities or individuals who act, exert power, or produce independent effects, usually (but not exclusively) on behalf of another.
But there’s nothing that restricts this to human or even corporate agents. To me, it seems entirely plausible that some of the technical systems we are developing are themselves becoming agentic, by producing effects and exerting some degree of power over us on behalf of someone else. I’m not quite at the level of asserting (as my learned friend Mahdi Assan has) that “algorithms” generally have this property, but I don’t think he’s wrong if one considers “algorithms” collectively, i.e., as part of a larger system or set of systems and tools working to accomplish goals on behalf of their creators.6
In a normal, healthy capitalist system, customers, shareholders, and regulators decide with their wallets and their rules who lives and who dies. Fit, beneficial, lawful, and productive companies survive, unfit, unlawful, or unproductive companies go bankrupt or otherwise cease to operate.7 And historically, this has mostly been true. Millions of bad companies have gone bust. A smaller number of firms were broken up, forced to restructure, or otherwise regulated into changing their behavior.
But we’ve never faced capitalism in a world where a handful of companies have managed to amass the level of power and wealth that exists today, with the ability to engineer systems so intertwined and spread across so much of our lives. The technology on the market today is becoming too big to control.
Right now, there are no real barriers—no meaningful bulwarks or disincentives to stop what appear to be a handful of men from essentially owning all of us. Musk’s dream of “amazing abundance” fails to answer an important question: amazing abundance for whom?
There’s no accountability either, because everyone with the power to actually do something is too busy using the tools they’ve sworn they’ll regulate. Yes, we’ll get a few token fines or threatened actions here and there, but that’s part of the theatre. Yes, the companies might pretend to be chastened for a time, but that will only teach them to be less obvious about their intentions.
There will always be talk about content moderation or banning Facebook, or X, or TikTok, or regulating Google, Apple, Amazon, or maybe even SpaceX, but nothing meaningful is likely to come of it, because why would it? How could it? In truth, regulatory responses seem to fall into four camps:
YOLO, let the planet burn (the US)
pearl-clutching and regulating by press release through a handful of token fines that sound impressive but aren’t, because the regulators fear the consequences (the EU, Brazil)
developing government-run corporate counterparts (China), or
quietly ignoring the problem and hoping the deranged toddler-in-chief and his corporate overlords will focus on the bigger fish and not tariff them out of existence (most of the rest of the world).
Fun Fact: Ireland has levied over €4.04 billion in fines against Big Tech companies over the last six years, primarily against Meta. Of that total, just €20 million has been collected according to a January 2026 FOI disclosure filed by Ken Foxe. Most of that holdup related to a court case brought by Meta and its subsidiary, WhatsApp, who sought to annul the fines.
Fun Fact #2: The EU Court of Justice sided with Meta.8 If I hear one more person scream ‘but fines!!!!’ I’m going to lose my damn mind. Fines only work if they’re enforced, but if the companies being fined have captured the enforcement mechanisms (or can tie things up in court for years), they’re little more than theatre and bluster.
Now ask yourself, what will this situation look like if someone like Musk or Bezos actually succeeds and takes this whole affair interplanetary?
We’re already seeing how Big Tech influences governments and shapes narratives. But just imagine this in five or ten years. Imagine a multi-trillion-dollar SpaceX, Google, Amazon, Meta, Oracle, or Microsoft (or a consortium of them), bolstered by super-intelligent AI systems, doing basically whatever they want. It’s all well and good to have laws, but if a handful of corporations become effective states unto themselves—suppliers of the information, infrastructure, energy, technology, supply chains, and even the money—what even are laws at that point?
And while the US is a lost cause (and will continue to be so for a decade), over here in the EU, regulators are still framing things in the context of classical monopolies and anti-competitive behavior. We’re still trying to impose old rules on entities that are increasingly becoming so integrated into the system that they are effectively ungovernable. We’re all still using Microsoft, Google, Apple, Facebook, Instagram, X, and OpenAI because Europe doesn’t have anything even remotely comparable to replace them.
See, unlike the AT&Ts and Standard Oils of the past, the handful of companies really running the show today control the informational substrate—the algorithms and engines that shape what we see, who we talk to, how we understand reality. SpaceX, Amazon, Microsoft, Nvidia, Oracle, and Google control the infrastructure that props up the internet. OpenAI, Anthropic, Google, and Meta control the AI. Most of these companies + Oracle/TikTok control the media. Together, they’re integrated into our identities in ways that make them fundamentally harder to disentangle from. It’s like that Giger painting at the top—a pile of interconnections that can’t be easily severed.
We’re all worried about some super-sentient AI coming around the corner and putting us out of work, and that’s probably a valid concern. Meanwhile, we’re (un)happily trusting a handful of companies with everything and giving them lots of opportunity to create further extensive reach. The US, and to a large extent Big Tech, is leading a race to the bottom, and the leaders of the world are basically shrugging and going along with it.
Right now, we still have a choice. But 10 years from now? I’m not so sure.
Fun story. My husband thought I made up extensity. I swear, I’m not that clever. I originally came across these concepts when reading Robert Greene’s 48 Laws of Power, specifically Law 11 (Learn to Keep People Dependent on You) and Law 23 (Concentrate Your Forces). Consider this my analysis of those laws.
Interestingly, Robert Greene argues the opposite in Law 23: “You gain more by finding a rich mine and mining it deeper, than by flitting from one shallow mine to another. Intensity defeats extensity every time.”
Full disclosure: My husband works for Google. I have very mixed and complicated feelings about the search quality & other frequently legitimate complaints made about Google, which is why I usually avoid including them in stuff I write. I also consult for a rival search and browser company. My point isn’t to get into the merits of Google per se, so much as to point out what I see as a larger trend across Google and similar firms.
Stats: orbit.ing-now.com. Of the 11,000 Starlink satellites, around 1,100 are in re-entry, orbital decay, or are otherwise inactive.
To put a finer point on this: It’s the distinction between the ‘show us the algorithm’ concept that a lot of lawyers/policymakers have, versus asking questions about systems, networks, and how the individual pieces of the puzzle work together. In short, there is no singular algorithm that makes up Google, or Meta, or TikTok: It’s a complicated web of algorithms, learning models, databases, individual functions, and systems. This is why engineers tend to roll their eyes when politicians continue to ask for ‘the algorithm’ during the various showboat hearings.
I avoided including ‘harmful’ in that list, because well, harm never has been much of a moderating force against capitalism. cf: smoking, guns, alcohol, gambling, prediction markets, crypto…
Needless to say, the next time someone says ‘BUT FINES’ to me, I’m going to just send this link without commentary.