Critical Thinking in the Conspiracy Age: The Power and Neglect of Philosophical Razors
On Philosophical Razors, Big Tech, Policy and the Consequences of System 1 Thinking
I’ve been thinking a lot about philosophical razors — rules of thumb that allow people to eliminate unlikely explanations or avoid unnecessary actions (as Wikipedia loosely defines them). Razors are extremely useful devices when applied — they allow us to cut through hype and conspiratorial thinking, and to better understand the likely decision-making processes that go into systems.
You’re probably familiar with a few, even if you’re not familiar with their names:
Hanlon’s Razor: Never attribute to malice that which can be adequately explained by ignorance or stupidity.
Alder’s Razor: If something cannot be settled by experiment or observation, then it is not worthy of debate.
Einstein’s Razor: Make things as simple as possible, but no simpler.
Hitchens’s Razor: That which can be asserted without evidence can be dismissed without evidence.
Occam’s Razor: As between two ideas that explain a phenomenon, the simpler answer is probably the true one.
Sagan’s Standard: Extraordinary claims require extraordinary evidence.
Hume’s Guillotine: What ought to be cannot be deduced from what is.
I think that these razors aren’t applied enough in the world, particularly when people try to explain the choices organizations and institutions (especially tech companies) make when designing complex systems. Instead, we react with what Daniel Kahneman called “System 1” thinking: fast, instinctive, emotional and often negative. We assume, almost instantaneously upon discovery, that every new vector of exploitation, data grab, or failure in a system is de facto malicious, intentional, and calculated, often with limited or no evidence. We reason that a complicated, nefarious conspiracy is the most likely explanation for a bad thing, rather than dumb human error or oversight.
Given that backdrop, it was unsurprising when, in response to everybody noticing that Google search has gotten pretty bad at… well, searching, an opinion piece popped up online with an unbelievable claim:
The piece was penned by Megan Gray, founder of GrayMatters, a law and policy shop based in Washington, DC. Her argument, of course, was that this was part of some insidious plan by Google to influence our behavior, designed to “harm everyone except Google” and, presumably, its advertising partners who are also in on the game. She cautioned us to remember that if you use Google, “you’re getting search results that have been skewed—not to help you find what you’re looking for, but to boost the company’s profits.” Her evidence? A single slide, presented quickly during an employee’s testimony at the US antitrust case brought against the company.
The piece spread like wildfire. Journalists and the “very online” amongst us re-shared Gray’s story, uncritically, across social media sites like X, Bluesky and LinkedIn. Few people mentioned that the author, until September 2021, had been the General Counsel & Vice-President, Public Policy1 for rival search firm DuckDuckGo. And importantly, nobody even tested the claim, which would have been pretty easy to do! Do a search for “children’s clothing” and see if a bunch of results for NIKOLAI-branded kidswear appears. I was annoyed by how quickly everyone just accepted that this was a thing.
Three days later, Wired pulled the piece, issued an editor’s note, and erased Gray’s content entirely. “After careful review,” the editors explained, “WIRED editorial leadership has determined that the story does not meet our editorial standards. It has been removed.”
Stupid, Not Malice, Got Us Here
While we humans have always enjoyed a little conspiratorial thinking, I do think it’s gotten worse over the years. I’m not entirely sure that Gray’s piece, for example, would have carried as much weight in, say, 2013. I think the reason it’s gotten worse is a combination of the following factors:
1. We don’t know what we don’t know. And even if we know that we don’t know something, we forget that fact when we hear alarming stuff, even in the absence of evidence or proof.
2. Our lives are increasingly impacted by technology and critical systems, often in ways we don’t understand or realize.
3. Most of these systems and technologies are complicated and fractally complex, and there isn’t enough time in the day for any of us to begin to understand them, or how they affect us.
4. These technologies and systems are usually owned by organizations that are opaque, or at least aren’t as transparent as they should be. They usually aren’t good at Einstein’s Razor. This is particularly true with large organizations, where information silos are more likely.2
5. System 2-type thinking (slow, deliberative, and more logical) is hard. It’s much easier to be outraged, especially when that outrage supports our priors (e.g., “Big Tech Bad” — exactly the reflex Hanlon’s Razor warns against).
6. There are bad people in the world. They do bad stuff, and we’re evolutionarily primed to remember bad stuff more than good stuff, and to associate stupid mistakes with malice, so our priors about people, organizations, and systems skew negative.
7. Humans do lots of stupid things. Way more stupid things than intentionally bad or malicious things, but that distinction tends to fade into the background. See points 5-6.
8. Groups of humans, especially those in large systems, do even more stupid things, at scale, because of points 1-5.
Tech companies (and governments, and financial institutions, and, and, and) suffer the fate of large systems: they tend to be staffed by well-meaning folks who don’t have a full picture of the environments they operate in. People and teams make choices in silos, aren’t always transparent or clear about those choices, and sometimes those choices are ill-informed or just plain dumb. Oftentimes, decisions are made without being fully thought out or tested. To be fair, in most cases, nobody has a full picture of the environment, the time to write documentation explaining everything, or the ability or resources to identify the risks and test the outcomes of a given decision. But when a decision ends up with a negative outcome, we assume malice most of the time.
Sometimes, though, organizations and people also do intentionally malicious, unethical, or downright illegal things. That’s why Purdue Pharma reached an $8.3 billion settlement with the government, why Bernie Madoff sat in a jail cell until his death in 2021, and why Sam Bankman-Fried is on trial. And various tech companies (including Google, Facebook, and TikTok) have also done some malicious things. This leads us to have negative priors about those organizations.
But the vast majority of the stuff that makes headlines and informs our cynical views about the world and these organizations isn’t malicious. It’s just stupid, and we aren’t applying philosophical razors to shave down the extraordinary claims and get to the truth.
Stupid Assumptions Breed Stupid Outcomes
What’s frustrating to me is that Points 6 and 7 drive not just reactions, but also outcomes, and in particular, legislative decisions and regulations. And those outcomes have effects not just on the (perceived) bad guys, but on everyone.
For every Snowden revelation that drives oversight and attempts to curb genuine national security abuses, we’ve got a law like Canada’s C-18 (which forces social media companies to pay publishers if Canadian news articles are shared). For every revenge porn bill in the US, we’ve got FOSTA and SESTA and Texas’ censorship bill SB-5. These laws don’t really curb bad acts, but they do kill free speech, stifle the exchange of ideas, and will make us all dumber in the long run.
In Europe, points 6-7 have led to an increase in laws and proposed measures — including the Digital Services Act, Digital Markets Act, and the proposed AI Act and Chat Control bills. While the laws (mostly) mean well, they reveal a fundamental lack of clue among legislators about how technology and complex systems work. Critically, these laws, and the outrage that propels them forward, will end up causing more harm than they will ever prevent. Requirements that aren’t grounded in an actual understanding of the complexity of the things being legislated mean that simplistic solutions and assumptions get written into law, even when compliance is impossible. Failure to apply Hume’s Guillotine means that what ought to be dominates and subsumes what’s actually possible and actually good for society at large.
I wish I had suggestions on how to overcome the System 1 thinking driving so many of the overarching decisions from governments. Part of why I started writing the Chronicles was to highlight these problems and come up with solutions. But these problems, like the systems that define our lives, are complicated, complex, many-faceted, and require us all to take a pause and recognize that we probably don’t know what we don’t know.
One suggestion I have for my readers is to maybe have a little humility and grace when it comes to organizational fuck-ups, and to remember the philosophical razors when we read alarming headlines. I don’t assume, for example, nefarious intent by (most) legislatures3 or even most tech companies. Most are, at the operational and even strategic levels, run by well-meaning people who simply lack an appreciation of the fractal complexity of the systems they create or run. Nine times out of ten, that big scandal tends to be an “oops, we didn’t realize that this test field existed, or that flag was turned on,” or “oh, we overlooked the consequences here.” Most of the time, it really is just a failure to think about how a choice might impact stakeholders whose reality isn’t reflected by the people making the decisions in the first place.
That’s not intentional. Our goal, as experts, advocates, advisors, and informed citizens, should be to bring awareness to decision makers early and often. That’s one of the biggest reasons I mostly gave up on being a compliance monkey and have begun really digging into systems to help drive positive change. It’s also why I write here.
Author’s note: I realize I’ve been a little slow in terms of output over the last month or so. Much of that is because I was in the Netherlands for the month of September (it was great!), and I have been deep in the weeds of a very complicated, thorny, and exigent client matter. It’s my hope that the client takes some of my insights to heart, evolves their product to address risks I’ve surfaced, and is transparent about that process with its stakeholders.
I may write a bit more about this work, pending client approval, because it’s been a real brain-breaking thing, and one of the most energizing projects I’ve been part of in a long time. If you’re working on a project that you think might break my brain, hit me up at [email protected].
Separately, I’m on Bluesky (privacat.bsky.social) and really enjoy it. If you would like an invite code, leave a comment and tell me what you might want to see more of/what you liked/what you hated/your favorite hobby or whatever. Just something so I can tell you read the damn post. I have five codes, and I’ll leave this bit up until they’re gone.
Charlie Warzel, over at The Atlantic, was one of the few voices of reason here, and started to dig in and make sense of what was an extraordinary claim. His piece is worth a read.
This lack of transparency itself often has legitimate grounds (legal limitations on disclosure, fear of lawsuits/IP theft, national security considerations, the reality that documentation is hard). That said, I’m not naive — sometimes it’s just because organizations have made an intentional choice not to be transparent.
Granted, the nihilistic and generally destructive policies coming out of the Republican Party, especially in Texas and Florida, are an exception. There’s a lot of just purely naked hate and malice coming out of these circles.