Talk to an LLM for five minutes and you’ll understand why people are scared. It writes like a person. It reasons - or at least performs something that looks like reasoning - like a person. It responds to your questions in plain language, and it does it fast. I get why people interact with this technology and think, “yep, that’s artificial intelligence.” And I get why the next thought is, “yep, this spells trouble for us as a species.”
But that reaction - understandable as it is - is built on a confusion that’s worth pulling apart. Because when most people say “AI” today, they’re usually talking about one specific thing: Large Language Models. LLMs are a type of AI model trained on massive amounts of text to generate text responses that are nearly indistinguishable from human-written language. And the breakthrough here isn’t the model itself. It’s the medium.
We’re a society that evolved around text as a primary means of communication. Nearly everything we do has some text component to it - speech, text messages, contracts, spreadsheets, math, code, our thoughts, our dreams. All of it is either directly or indirectly represented as text. Nearly all of our technological advancements have come from our ability to distill complex ideas into text and then share that text with others.
So it makes sense that this piece of technology in particular has captured so much attention. And why it feels like it has so much power to disrupt the world as we know it.
Just a note up front: I’m going to try to use “LLM” when I mean Large Language Model and “AI” when I mean artificial intelligence in general. That distinction matters for everything that follows.
We’ve been here before
It’s worth looking at the history of text and communication technologies, because the pattern is instructive.
The printing press. The telegraph. The telephone. The internet. Social media. The smartphone. Each of these disrupted the way we communicate and share information. Each created new industries and jobs. And each created new challenges - spam, misinformation, privacy concerns, concentration of power. But there’s a thread that runs through all of them that’s easy to miss: each one also shifted who gets to extract value from whom. The printing press didn’t just spread ideas - it broke the church’s monopoly on knowledge. The telegraph didn’t just speed up messages - it let commodity traders move faster than the goods. The internet didn’t just connect people - it created entirely new mechanisms for capturing attention and monetizing it.
LLMs are the next entry in this list, but I’d argue they sit a bit differently. They’re less of a disruption to how we communicate and more of a disruption to how we work and create value. For text-based work specifically, LLMs are highly disruptive - because of the nature of the technology itself. You built a machine that’s great at text, and you pointed it at a world that runs on text. The question, as always, is who captures the value that gets displaced.
But for other types of work, the impact is less clear. And under this lens, the anxiety people feel makes sense. It makes sense why people latch on to narratives like “learn a trade.” It makes sense why people reach for the word “apocalypse” - the disruption on the horizon touches so many aspects of our lives because so many aspects of our lives are tied to text.
Why we confuse LLMs with intelligence
This is also why we conflate LLMs with AGI, or artificial general intelligence. LLMs use a medium we understand - text - and produce outputs we can interpret - also text. This makes them feel more intelligent and more general than they actually are.
When an LLM writes a convincing essay or debugs your code or explains a concept back to you in plain language, it feels like you’re talking to something that understands. But that feeling is a product of the medium, not the machine. The model is operating in the layer of reality that we’ve spent thousands of years learning to navigate. Of course it feels powerful. We built our entire civilization on this layer.
Yes, modern LLMs can process images, generate code that interacts with tools, and operate inside multimodal pipelines. But even wrapped in those interfaces, their core power still comes from modeling and manipulating language - human-representable symbolic input and output. That’s their strength, and it’s also their ceiling. They’re exceptional at one dimension of intelligence. That’s not general. That’s specialized.
LLMs aren’t the only game in town
LLMs get the attention because they’re the easiest to build at scale - the training material is abundant (we’ve been producing text for millennia) and the results are immediately visible to anyone who can read. But other types of AI models exist for other types of work, and they’ve been quietly doing things that matter.
You can build an AI model for pretty much anything. Generating images, video, audio, sure. But also predicting protein structures that researchers have spent decades trying to solve. Optimizing supply chains. Modeling climate systems. Discovering new materials. These models are harder to train, more specialized, and less visible to the general public. But that doesn’t make them less significant. If anything, the opposite is true.
And here’s where I start to get frustrated. In 2026, just four companies - Amazon, Google, Meta, and Microsoft - are projected to spend roughly $650 billion on AI infrastructure, according to Bloomberg. Goldman Sachs projects over $500 billion across the broader industry. On the startup side, OpenAI, Anthropic, and xAI alone raised $86.3 billion in 2025 - 38% of all AI funding that year. A huge share of this money is flowing into the infrastructure and companies powering frontier generative AI models.
In my opinion, this is going toward the least humanistically beneficial use case for AI. When someone tells me that LLM-powered tools are “changing millions of workflows daily,” that tells me almost nothing about whether it’s a net positive for humanity. It tells me it’s a net positive for productivity metrics and quarterly earnings.
Meanwhile, the AI applications that could genuinely change lives get a fraction of the funding. Total US venture funding for healthcare AI startups in 2025 was $14.2 billion - the sector’s best year since 2022, and still less than what OpenAI, Anthropic, and xAI raised by themselves. The entire healthcare AI startup ecosystem, in its best year, raised less venture capital than three frontier model companies did in theirs.
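To make the scale concrete, here’s a back-of-the-envelope comparison - a minimal sketch, nothing more. The raw figures are the ones cited above (from the Crunchbase reporting in the reading list); the derived numbers are just arithmetic on those.

```python
# Back-of-the-envelope comparison of the 2025 AI funding figures cited above.
# All amounts in USD billions; derived values are simple arithmetic, not new data.

frontier_labs_raise = 86.3   # OpenAI + Anthropic + xAI, raised in 2025
frontier_share_of_ai = 0.38  # their share of all AI funding that year
healthcare_ai_raise = 14.2   # total US venture funding for healthcare AI startups, 2025

# If three labs were 38% of all AI funding, the implied total is their raise / 0.38.
implied_total_ai_funding = frontier_labs_raise / frontier_share_of_ai

# Healthcare AI's best year, as a fraction of what three frontier labs raised.
ratio = healthcare_ai_raise / frontier_labs_raise

print(f"Implied total AI funding in 2025: ~${implied_total_ai_funding:.0f}B")  # ~$227B
print(f"Healthcare AI vs. three frontier labs: {ratio:.0%}")                   # ~16%
```

Rough as it is, the arithmetic makes the imbalance hard to miss: the entire healthcare AI sector’s best year comes to about a sixth of what three frontier labs raised.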
AGI as a marketing container
The term “AGI” is vague by design. The common definition - “a system that can perform any intellectual task that a human can do” - is circular enough to mean whatever the person saying it needs it to mean. It’s not clear what counts as an “intellectual task.” It’s not clear how you’d measure “any.” And that ambiguity is useful if you’re selling a vision.
When AI CEOs talk about AGI, they’re not describing a specific technical milestone. They’re describing a destination that justifies unlimited investment, unlimited data collection, and unlimited consolidation of compute power. The vaguer the goal, the longer the runway, and the more money flows before anyone has to deliver on it. Sam Altman has claimed that OpenAI knows how to build AGI and that it’s “basically clear” - while simultaneously acknowledging that AGI “has become a very sloppy term.” That’s not a contradiction if the vagueness is the point.
On the technical side, the more serious versions of the AGI conversation increasingly point not to one magical model, but to systems composed of many specialized components, each handling a different dimension of intelligence. And that tells you something important about where we actually are.
Remember when I said other types of AI models are harder to train, more specialized, and less visible? That’s exactly the bottleneck. We need many more specialized models. Then we need to figure out how to make them work together seamlessly. Then we need to figure out what form factor they should take - embedded in hardware? Accessible through APIs? Through interfaces we haven’t invented yet? Probably all of the above, depending on the use case.
This is a long road. Longer than the pitch decks suggest.
Do we actually need AGI?
This is the question I keep coming back to. If AGI depends on a combination of specialized models working together, and those specialized models are already useful on their own… then what exactly does wrapping them in a single “general” package buy us?
Today, the open internet is the closest thing we have to a shared intelligence. Would we trust it to solve a complex problem requiring deep reasoning? Some people do. I’d argue they probably shouldn’t. But that’s not what we’re being sold with the vision of AGI. We’re being sold a single, unified, artificial entity that can do anything any human can do.
But why do we want that? We have nothing - no human, no machine - that can do everything any human can do. And I’d argue we don’t need it. Our day-to-day needs are specific, whatever we’re trying to accomplish, so building something that can do everything is a strange allocation of resources - enormous in cost, and barely defensible economically when you actually think about it.
Unless, of course, the goal isn’t to serve day-to-day needs.
Who benefits?
People frame the push for AGI in all kinds of ways. Discovery. Abundance. Scientific progress. National competitiveness. Convenience. And sure, some of these motivations are genuine. But regardless of why any individual person is building toward AGI, the structural outcome of achieving it points in one direction: replacing human labor with machines. And when that’s achieved, who actually benefits?
It’s not the average person. It’s not the everyday worker. It’s the people who built it and control it. The same people who build and control the LLMs we have today.
The text layer is the most valuable part of the system because it’s the same layer where we get our information. It’s the same reason search engines and social media are so valuable. It’s the same reason newspapers and news media were so valuable before them. Control the information layer and you control the conversation.
The risk here isn’t that AGI suddenly becomes omnipotent and takes over. The risk is that increasingly capable systems get deployed inside institutions that are already deeply unequal, accelerating existing concentrations of power. You don’t need a science fiction scenario to see how this plays out. You just need to look at who’s building these systems, who’s funding them, and what incentive structures they operate within. Good intentions don’t change structural outcomes.
And when you look at those structures, the push for AGI is about control. Not just of information, but of labor, of resources, of the systems that govern daily life. Every person with true AGI at their disposal is a person with godlike capabilities and an army that doesn’t eat, sleep, or question orders. We should all ask ourselves: what kind of person is fervently investing in building that? What kind of person pours money into this - not into AI for healthcare, not into scientific research, not into safety - but into building that kind of power?
This isn’t conspiratorial. This is the historical pattern. The internet started as a DARPA project. Most transformative technologies end up in defense. The Pentagon’s FY2026 budget created a dedicated $13.4 billion line item for AI and autonomous systems - unmanned aerial vehicles, autonomous ground and maritime systems, and “supporting software.” That’s your tax dollars, and it’s the first time the DoD has broken AI out as its own budget category. Every country wants a way to fight harder with fewer of their own people involved. That’s not a theory. That’s procurement.
And the trajectory isn’t hard to extrapolate. More capable systems in fewer hands. A shrinking set of people who control the infrastructure everyone else depends on, with the tools to maintain that control, suppress dissent, and extract value at scale. Maybe occasionally wage war when they’re feeling froggy. They won’t have much to lose by trying.
The UBI fantasy
People say UBI - universal basic income - will be a necessary part of the post-AGI system. Sam Altman himself has funded UBI research and proposed an “American Equity Fund” where AI companies would contribute to a fund distributed to all citizens. The same person building the thing that displaces your job is also designing the safety net. Maybe. But let’s look at how we handle “basics” right now.
The US federal poverty line is based on a formula developed by Mollie Orshansky in 1963 - she took the cost of a minimum food budget and multiplied it by three, because the average family in 1955 spent about a third of their income on food. That multiplier has never been updated. Today, food accounts for a far smaller share of family budgets - closer to an eighth than a third - so by the same logic the multiplier should be closer to 7.8. But it’s still 3. It’s hard to think of another major economic statistic in use today that still relies on 1955 data. What we define as “basic” needs in this country is, by most reasonable standards, inhumane. And yet it hasn’t been seriously challenged in decades.
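To see how much the multiplier matters, here’s a minimal sketch of Orshansky’s formula. The food-budget figure is purely hypothetical - the point is how far the threshold moves when the 1955 multiplier is swapped for one based on current spending patterns.

```python
# Orshansky's 1963 formula: poverty threshold = minimum food budget x multiplier,
# where the multiplier is the inverse of food's share of the typical family budget.

ANNUAL_FOOD_BUDGET = 9_000  # hypothetical minimum food budget for a family, USD/year

def poverty_threshold(food_budget: float, food_share_of_budget: float) -> float:
    """Poverty line implied by a food budget and food's share of total spending."""
    multiplier = 1 / food_share_of_budget
    return food_budget * multiplier

# 1955: food was ~1/3 of family spending, so the multiplier is 3 (still used today).
line_1963 = poverty_threshold(ANNUAL_FOOD_BUDGET, 1 / 3)

# Today: food is closer to ~1/7.8 of family spending (per the Brookings analysis below).
line_updated = poverty_threshold(ANNUAL_FOOD_BUDGET, 1 / 7.8)

print(f"Threshold with 1955 multiplier (3.0):    ${line_1963:,.0f}")    # $27,000
print(f"Threshold with updated multiplier (7.8): ${line_updated:,.0f}") # $70,200
```

Whatever food budget you plug in, the 1955 multiplier produces a threshold roughly 2.6 times lower than one built from today’s spending patterns. That gap is the distance between the two definitions of “basic.”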
On one hand, as a society, we seem completely okay with this. On the other, most people, once they learn how the poverty line is actually calculated, find it absurd and unacceptable.
So here’s my question: would it be wise to expect that the system and people who perpetuate this kind of economic inequality would suddenly become more equitable when handed more power, money, and control? Or would they use that power to squeeze even more out of the system - out of you and me?
I don’t think that’s a hard question to answer.
Should you learn a trade?
Sure, if you want to. Short term, it’s a strong industry to be in. But on the longer timeline - the timeline of AGI - it’s not clear that it’s a sustainable career path either. “Learn a trade” as advice is well-intentioned, but it’s a short-term answer to a long-term question. It’s also a bit of a distraction from the more important conversation about what kind of future we’re building and who gets to decide.
The thing that still matters
There’s something brewing quietly underneath all of this that I think deserves more attention: the resurgence of humanity as an asset.
Yes, AI companies are working hard to replicate everything we do - how we talk, how we create, how we decide. And they’ll keep getting better at it. But what machines cannot replace is the social meaning people attach to real human presence, real authorship, real risk, and real relationship. A machine can generate a song. It can’t mean it. A model can write you a letter. It can’t know you. The outputs might be indistinguishable on a screen, but the thing that makes human expression matter to us was never just the output.
People want to interact with real people. They want to meet in real places. They want to know that the art on the wall, the music in the room, the words on the page came from someone who lived something and chose to share it. That’s not a technical problem to be solved. That’s a human need that exists independent of what technology can produce.
It’s a new renaissance of human value that companies can’t code away.
I’d also argue it’s our strongest defense against the kind of power consolidation I’ve been talking about. But the window for that defense is closing. Not slowly. Now. The consolidation isn’t a future event. It’s already happening. And every day we spend arguing about whether to learn a trade or learn to code is a day we’re not talking about the thing that actually matters.
So, should you learn a trade? Should you learn to code? Should you panic?
I think the better question is: what are you willing to fight for? Because that’s the conversation we should actually be having. And we’re running out of time to have it.
Recommended reading
- AlphaFold Protein Structure Database - DeepMind’s AI for predicting protein structures, and one of the best examples of AI doing something genuinely meaningful for science
- Goldman Sachs: Why AI Companies May Invest More than $500 Billion in 2026 - Wall Street’s perspective on where the money is going
- Bloomberg: How Much Is Big Tech Spending on AI? A Staggering $650 Billion in 2026 - Bloomberg’s reporting on the four largest tech companies’ combined AI infrastructure spending
- CNBC: Tech AI spending approaches $700 billion in 2026 - CNBC’s breakdown of individual company capex commitments
- 6 Charts That Show The Big AI Funding Trends Of 2025 - Crunchbase data on who’s raising money and how much
- Pentagon AI Budget Hits $13 Billion: Beyond DARPA, Who Else Is Funding? - Where US defense AI dollars are actually going
- Defense Department budget request goes hard on AI, autonomy - Defense One’s coverage of the FY2026 budget and its AI priorities
- Budget Trends and the Future of AI in U.S. Defense - Longer-term analysis of defense AI spending trajectories
- Federal AI and IT R&D Spending Analysis - Breakdown of federal civilian AI spending across agencies
- Crunchbase: Funding to AI-Related Healthcare Startups - Healthcare AI funding data for 2025, useful for comparing against LLM investment
- Sam Altman: “Reflections” - Altman’s own blog post on AGI timelines and OpenAI’s trajectory
- TIME: How Sam Altman Is Thinking About AGI and Superintelligence - Interview where Altman acknowledges AGI is a “sloppy term” while continuing to use it
- NPR: As new tech threatens jobs, Silicon Valley promotes no-strings cash aid - The overlap between people building AI and people designing the economic safety net for AI displacement
- Frontiers: AI, UBI, and Power - Symbolic Violence in the Tech Elite’s Narrative - Academic analysis of how AGI builders frame UBI as a solution to the displacement they’re creating
- SSA: Remembering Mollie Orshansky - The history of the woman who developed the US poverty thresholds and the methodology behind them
- Brookings: Why the US Needs an Improved Measure of Poverty - Why the 1963 poverty formula is wildly outdated and what it should look like today
- HHS Poverty Guidelines - The current federal poverty line numbers
- DARPA and the Internet (ARPANET) - DARPA’s own account of creating the precursor to the internet