AI in 2026: the good surprises, the bad ones, and the ones nobody's talking about
A clear-eyed look at where AI actually stands today, and what will surprise us — in both directions — over the next eighteen months.
Fig 1.1: Moody lab and server-room scene with an operator at analytics monitors and a distorted face on a side display
I keep a running document of predictions I’ve made about AI. It’s mostly useful as a record of how wrong you can be while still thinking you’re paying close attention.
So take what follows with that caveat in mind. These aren’t confident forecasts. They’re the things I find myself thinking about when I’m trying to be honest about where this technology is going — the outcomes that seem underappreciated, either because they’re too good or too bad to take seriously.
Where we actually are
The thing people get wrong about the current moment is scope. We’re not in a hype cycle that will deflate. But we’re also not in an inflection point that will look, in retrospect, as dramatic as it feels right now.
What’s actually happened over the last two years: reasoning models got real. Not “smarter chatbots” — something qualitatively different. A model that can sit with a hard problem, generate intermediate steps, check its own work, and arrive at an answer that would have required a specialist eighteen months ago. That shift is large and is still being absorbed.
Separately, the cost of inference collapsed. The same capability that cost hundreds of dollars per million tokens in 2023 costs a few dollars now. This doesn’t sound like news, but it changes which applications are economically viable. A lot of things that were demos are now products.
And agents — software that uses models to take sequences of actions, not just answer questions — moved from toy projects to something with genuine deployment. Carefully, with a lot of human supervision. But moved.
That’s where we are. Not AGI. Not a bubble. Something in between that has real consequences and is still poorly understood.
The good surprises

Science is moving faster than the coverage suggests
AlphaFold was the headline, but the follow-on work is where it gets interesting. AI-assisted drug discovery isn’t a buzzword anymore — there are compounds in clinical trials right now that would not exist without the specific capability to predict protein structure at scale. The results will start materializing publicly over the next year or two. Some of them will fail. Some won’t.
The same dynamic is playing out, quieter, in materials science. Battery chemistry, solar cell efficiency, catalyst design for industrial processes — areas where the search space is enormous and traditional methods are exhaustingly slow. AI-assisted search is compressing timelines that were measured in decades.
I’m not saying this will definitely produce a fusion breakthrough by 2027. I am saying there are specific, concrete reasons to think we’ll see published results in the next eighteen months that surprise people who haven’t been watching these fields closely.

Models will become cheap enough to actually democratize something
Every generation of technology claims democratization and mostly delivers it to the people who already have resources. AI is mostly following that pattern. But the cost curves here are steep in a way that’s different.
When frontier-quality inference costs a fraction of a cent per call, the barrier shifts from money to knowing what to build. A single developer — or a small team — can wire together capabilities that genuinely would have required an enterprise contract three years ago. This is already happening. The products that come out of it in 2026 and 2027 will be weirder and more varied than what large companies build, because individuals building for themselves have different tolerances for weirdness.
Some of that will be bad. Some will be genuinely useful in ways that big companies, optimizing for broad markets, wouldn’t have found.
AI will catch bugs that kill people
This is a specific claim. Reasoning models applied to software verification — finding edge cases in medical device firmware, in infrastructure control systems, in financial settlement logic — are going to find real vulnerabilities that human review missed. Some of those vulnerabilities would have eventually caused serious harm.
This is already starting. It doesn’t get covered much because it’s hard to write a story about a disaster that didn’t happen. But the security research community is aware of it, and the results over the next year will be hard to ignore.
The bad surprises

Agentic AI will have a serious security incident
This is the thing I worry about most, and it’s almost certainly a matter of when rather than if.
When you give a model permission to browse the web, execute code, send emails, and take actions on your behalf, you’ve created an attack surface that barely existed before. Prompt injection — hiding instructions in content the model reads, telling it to do something the user didn’t intend — is already possible. Against current agent deployments, with their limited permissions and human oversight, it’s mostly annoying. Against agents that have access to financial accounts, corporate systems, or sensitive data, it could be catastrophic.
The security community has been sounding this alarm. The deployment community has mostly been moving fast anyway. At some point those two trajectories will intersect badly.
The energy situation is worse than people think
Data centers for AI training and inference are consuming power at a rate that, when you see the actual numbers, is difficult to process. Not as a percentage of global energy use — it’s still small there — but as a rate of increase against fixed grid capacity in specific geographies.
The contracts being signed right now between hyperscalers and power producers involve generation capacity that will take years to actually come online. Until it does, AI workloads are competing with everything else for power in ways that will surface as real problems: price spikes, reliability issues, and in some regions, tradeoffs with decarbonization commitments that were made without AI’s trajectory in mind.
This isn’t a reason to stop building AI. It’s a reason to be clearer-eyed about the infrastructure required to run it, and to be suspicious of claims that efficiency gains will outpace demand growth. They haven’t yet.
Synthetic media will get through the defenses
Detection tools for AI-generated content exist, but they’re not reliable enough, and the gap between generation quality and detection accuracy is widening. This matters specifically for two things: fraud and political manipulation.
The fraud case is almost already there. Deepfakes used in financial scams, voice cloning used in social engineering attacks — these are reported incidents, not hypotheticals. As generation quality improves and costs drop, the volume will increase. Organizations that handle sensitive information are not prepared for this at the scale it’s coming.
The political manipulation case is harder to assess because it’s harder to observe. But the conditions for it are there, and the 2026 election cycles in multiple countries will provide the first real-scale test.
Model quality may plateau in ways companies aren’t admitting
There’s a version of the next two years where the capability improvements slow significantly, because the easy gains from scaling compute have been taken, and the next gains require either fundamentally different architectures or training data that doesn’t exist in the necessary quantity.
The labs are very good at communicating upward progress curves. They’re not required to communicate when progress is slower than expected. Watch for: longer gaps between major model releases, benchmark results that become increasingly narrow rather than broadly impressive, and a quiet pivot in marketing language from “what this model can do” to “how efficiently this model does it.”
This isn’t a disaster scenario. A plateau at current capability levels still represents an enormous amount of useful technology. But it would reshape the investment thesis that’s currently driving the sector, and the adjustment would not be gentle.
The honest summary is that the near future of AI is probably neither as good as the optimists expect nor as manageable as the skeptics believe.
The science applications will produce real results that currently seem speculative. The security and energy problems are real and people are underestimating how quickly they’ll need to be addressed. And somewhere in the middle, a large number of people are building things that will quietly change how specific industries and institutions function — not with a dramatic announcement, just by getting cheaper and more capable until the cost of not using them exceeds the cost of adapting.
I’ll update this document when I’m wrong.