ChatGPT Health is a small part of a much larger plan to learn everything about you. In this post, I talk about what's driving OpenAI and how they might get there.
Putting all of this together and managing not to sound like a conspiracist! Love this digging and the ladder metaphor, thank you
Thank you for reading, and at least confirming I don't sound like a total nutjob. This feels like something out of a dystopian sci-fi world, which is usually so far over the top that it's easy to dismiss. I'm glad I was able to communicate it in a way that was convincing.
This is an incredible deep dive into OpenAI's plans for the future! I knew that ChatGPT alone could never generate enough revenue to support the company's current operating costs and future ambitions (even with enterprise revenue), but I didn't realize how expansive their efforts already are.
While I'm sure they can extract an immense amount of value from our data, I suspect the time horizon for that will be quite long. I'm not sure how long their investors can keep footing the bill, because even reaching break-even seems like a long road, and profitability seems further still.
Can't wait to read the rest!
The time horizon is a legitimate counter to this, particularly for the BCI products. As I emphasized, any number of things could tank this vision, or at least seriously narrow Altman's stated ambitions. I tried to frame this as much as a hypothesis of where things might go in ideal conditions, and right now, conditions remain (mostly) favorable for OpenAI.
Maybe the way he gins up the money is by laying out what the end goal is, and the billionaires and VCs and partnerships agree that the calculus is worth the gamble.
Also, I have no trouble buying that another company (Google, Anthropic, Meta, xAI, or one of the Chinese competitors like Alibaba or Tencent) could replicate this model.
Yeah, a big ??? and maybe the one thing that will abort everything the fastest, is if Sam Altman can't convince VCs to keep the money flowing.
You haven't just described a ladder. You’ve mapped the Architectural Runway for a hostile takeover of human agency.
These aren't isolated features. They are Enablers in a program increment designed to dissolve the "silos" of our privacy. This is iterative delivery weaponized as a containment strategy.
Solid forensic work in the ‘Dr Chat, GP’ section. That is the critical path risk.
Thank you! Dr. Chat, GP is what tipped me off, but as I kept digging, I realized all of this needed to be presented in order to see the gravity of the situation.
I like that phrasing -- 'a hostile takeover of human agency.' I think if things continue apace, this is the world we're coming to. It's hard to detect future risk in the best of times, and most of us are bad at either anticipating long-term consequences, or moving out of our comfort zones, until we're forced to by immediate threats.
I think it's going to be harder still when it comes to identifying how high we've gone up the AI utility ladder because there's so much money, so much hidden power, and newer, better ways to manipulate us at scale.
“OpenAI is hemorrhaging money at an unsustainable pace, it spends 3x what it earns, and only 5% of its 900M users pay.”
that’s a pretty tough position to be in: 95% of your customers are costing you money… ouch!
It really is! And ads will never be enough to capture those losses.
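To make those quoted figures concrete, here's some back-of-the-envelope arithmetic. Only the 900M users, the 5% paying share, and the 3x spend-to-revenue ratio come from the quote above; the $20/month subscription price is my own stand-in assumption for illustration:

```python
# Illustrative arithmetic only. Figures from the quote: 900M users,
# 5% paying, spend = 3x revenue. The $20/mo price is an assumption.
users = 900_000_000
paying = int(users * 0.05)           # 45,000,000 paying subscribers
price_per_month = 20                 # assumed Plus-style tier, $/mo
revenue = paying * price_per_month * 12   # $10.8B/yr subscription revenue
spend = revenue * 3                  # "spends 3x what it earns"
shortfall = spend - revenue          # $21.6B/yr gap, i.e. 2x revenue

print(f"paying users: {paying:,}")
print(f"annual shortfall: ${shortfall:,}")
```

Even under these generous assumptions, the annual gap is twice the revenue, which is why ads or data monetization become the obvious next rung.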
Thorough research!! I appreciate you mapping out the infrastructure and the intentional product sequencing. A few builder-brain thoughts: The ladder framing is powerful, but I'd want to see more on the actual friction points: How do these integrations actually work together given data residency rules, privacy regulations, and the complexity of merging disparate data types? And what does the revenue math look like in practice, does the data moat actually close their profitability gap?
Will be curious to see how the next parts address the practical implications!
Hey Jenny, thanks for the questions. Since I am admittedly speculating here, albeit with some good evidence, I lay out a scenario in Part 2 (https://insights.priva.cat/p/the-ladder-to-nowhere-part-2-openais) which touches on many of your questions (the integrations, privacy regulations, and revenue model).
Admittedly, I didn't dig into the different data types question (and that is a reasonable point), but I also don't think it's inconceivable in an age where tools like Palantir and ever-smarter LLMs with longer context windows and memory exist. As for things like data residency and privacy regulation, I touch on that briefly. In the US, this really isn't a concern for American companies (especially given Trump's December 11 AI moratorium EO). States don't want their funding cut, California is likely to be a weak regulator, the courts move at a glacial pace, and the Supreme Court and Congress are largely water-carriers for the administration.
Even the EU, with its strong and robust privacy laws, seems to be no match for billionaires and Trumpian threats. Most politicians and government bureaucrats still use X, OpenAI, Google, and Microsoft (https://insights.priva.cat/p/political-grok-pocrisy). Data residency/sovereignty is much talked about, but rarely meaningfully acted on. Vendor lock-in, lobbying, and inertia are powerful.
But even if the EU develops a backbone and bans OpenAI, that's 450M people. Maybe China also limits OpenAI's reach (but allows Tencent or Alibaba to replicate the same model). The assumption that data residency rules will hold and regulators will do their jobs is, IMHO, mostly a polite fiction.
This has been a brilliant read so far, looking forward to reading the other parts.
Part 2 is out now. I would love to hear your thoughts https://insights.priva.cat/p/the-ladder-to-nowhere-part-2-openais
This certainly looks interesting. It is going to be a massive trade-off between convenience and privacy! Thanks for sharing!
Privacy guy here; I am in violent agreement, Privacat. As many financial analysts much smarter than me have pointed out, there could not possibly be an ROI for AI companies based on real-world business solutions (the Bloomberg diagram was the one that identified this as a Ponzi scheme for me). The real ROI comes from the commandeering of all data. Not just personal data, but every kind of data. OpenAI, for instance, wants contractors to provide examples of their work and upload them into ChatGPT - to ‘evaluate’ their work (https://www.wired.com/story/openai-contractor-upload-real-work-documents-ai-agents/). Right.
This is just continuing the unapologetic, untrammelled scraping of every bit of data from every individual, company, and nation that they can get - for control. The scraping of the internet and the use of AI solutions to extract both personal and business information will go down as the largest theft in the history of the world (assuming we can read histories that are not written by ChatGPT).
Thanks for the comment -- one thing I think makes this potentially much more concerning is that I worry it won't just stop with data. It's going to go directly after our autonomy, our right to choose and make decisions, and our ability to exist in the world without being perpetually manipulated.
This goes much, much farther than data. We leak bits of ourselves all the time -- the very act of existing leaks information that can be used. But if my hunch is correct (and I'll stress, this is just a well-educated hunch), the key to making it more than just data will be centrality, and OpenAI's ability to put the whole picture together and do some deeply, unqualifiedly bad things with that picture. I allude to it in the beginning, but part 2 will go into exactly the vision I'm thinking of.
Thank you for sharing this outside your pay wall. You did not disappoint!
Also, you're not screaming into the void. At least, not alone; I'm yelling “Meta wants to become Palantir for countries that can't afford Palantir!” into a void just over to the left of you
W/o enterprise monetization, the only way this crop of AI reaches profitability is Surveillance Capitalism; it has nowhere else to go. Add to that all the data center buildouts to catch up to China (which has data centers that go unused), and we are in a sunk-cost fallacy of FOMO, driven by people who run tech companies but don't actually understand tech (its own psychosis)
It's a chatbot, a code bot, and a deepfake bot - these tools are fungible and available for free
<3 Thank you for the kind words. I generally want to make content available for free and reward the kind and generous folks who pay me with other nice perks worthy of their support.
But even if I did do paywalling - this is just too important to hide behind a wall. I get outraged over lots of stuff -- everything feels frustrating, and enraging, and futile all at the same time, but this is some true Privacy Cassandra-level forecasting, and stuff I hope people with a shred of power might actually listen to. I think I'm right on this, assuming my assumptions hold.
And that's a very scary thing to be 'right' about.