My predictions on the year ahead (and how I'll be hilariously wrong)
On a crisp January morning in 2026, you find a slightly weathered newspaper on your doorstep—yes, printed on actual paper, somehow still surviving the AI revolution.
Flipping through, you spot a headline about AI agents orchestrating global supply chains on the blockchain while newly launched Crypto AI protocols battle for dominance. A half-page spread features a digital “worker” hired as a project manager—so commonplace now it barely raises an eyebrow.
Many months ago, I’d have laughed at the thought, maybe even wagered my portfolio that such advancements were at least 5 years out. But that’s the breakneck pace at which Crypto AI will flip the world. I truly believe this.
As I settled back at my desk to kick off the new year (after recovering from a bad stomach bug, which I do not recommend), I wanted to start with something valuable—something that sparks curiosity and maybe even a little debate. And what’s more priceless than trying to peer into the future?
I don’t usually wade into predictions, but Crypto AI is just too good to resist. There’s no historical playbook, no trends to lean on—just a blank canvas for imagining what’s next. And honestly, the thought of revisiting this in 2026 to see how wildly off the mark I was makes it even more fun.
So, here’s my take on what 2025 might look like…
Crypto AI tokens currently comprise a mere 2.9% of the altcoin market cap. Not for long.
With AI encompassing everything from smart contract platforms to memes, DePIN, and new primitives like agent platforms, data networks, and intelligence coordination layers, its rise to parity with DeFi and meme tokens is inevitable.
Why am I confident of this?
When the CEO of the favourite stock of the cycle (Nvidia) says that agents are a trillion dollar opportunity — and retail doesn’t have any meaningful way to get exposure to said opportunity directly, except for sentient memes/tokens — you know where they’re headed.
— him (@himgajria), 5:04 AM · Jan 7, 2025
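To make the parity claim concrete, here’s a rough back-of-envelope sketch. The 2.9% share comes from the text above; the total altcoin market cap and the DeFi share are hypothetical placeholders, used only to show the implied growth multiple.

```python
# Rough illustration of the "rise to parity" claim.
# crypto_ai_share is from the article; the other figures are
# hypothetical placeholders for illustration only.
altcoin_mcap = 1_000e9      # hypothetical total altcoin market cap ($1T)
crypto_ai_share = 0.029     # from the article: 2.9% of altcoin market cap
defi_share = 0.09           # hypothetical DeFi share, for illustration

crypto_ai_mcap = altcoin_mcap * crypto_ai_share
implied_multiple = defi_share / crypto_ai_share  # growth needed to reach parity
print(f"Implied relative growth to parity with DeFi: {implied_multiple:.1f}x")
```

Under these assumed numbers, reaching parity with DeFi implies roughly a 3x relative re-rating of the Crypto AI sector, before any growth in the overall altcoin market.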
Bittensor (TAO) has been around for years. It’s the OG in the room. But its token price has languished, hovering at the same level as a year ago despite all the frenzy around AI.
Beneath the surface, this “digital hive mind” has quietly made leaps: more subnets with lower registration fees, subnets that are outperforming their Web2 counterparts in real metrics like inference speed, and EVM compatibility that introduces DeFi-like features onto Bittensor’s network.
So, why hasn’t TAO soared? A steep inflation schedule and the shift in attention toward agent-oriented platforms have held it back. However, dTAO (estimated Q1 2025) could be the big turning point. With dTAO, each subnet will have its own token, and the relative price of these tokens will determine how emissions get allocated.
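The dTAO mechanic described above can be sketched in a few lines: emissions flow to subnets in proportion to the relative price of their tokens. This is a toy illustration of the idea only, not Bittensor’s actual implementation; subnet names, prices, and the emission figure are hypothetical.

```python
# Toy sketch of price-weighted emission allocation, inspired by the
# dTAO design described above. Not Bittensor's real implementation.

def allocate_emissions(subnet_prices: dict[str, float],
                       total_emission: float) -> dict[str, float]:
    """Split an epoch's emissions across subnets in proportion to the
    relative price of each subnet's token."""
    total_price = sum(subnet_prices.values())
    return {
        name: total_emission * price / total_price
        for name, price in subnet_prices.items()
    }

# Hypothetical subnet token prices (denominated in TAO)
prices = {"text-gen": 4.0, "image-gen": 2.0, "data": 2.0}
emissions = allocate_emissions(prices, total_emission=100.0)
print(emissions)  # "text-gen" commands half the price weight, so half the emissions
```

The key dynamic: markets, not a central schedule, decide which subnets get rewarded, which is why subnet token prices become the thing to watch.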
Why Bittensor is set for a comeback:
Personally, I’m keeping an eye on the various subnets and noting the ones making real progress in their fields. We’re set for a Bittensor version of DeFi summer at some point. Just to keep tabs, the price of TAO at the time I’m writing this is $480.
One megatrend that will be obvious in hindsight is the insatiable demand for compute.
Jensen Huang, NVIDIA’s CEO, famously remarked that inference demand will increase “a billion times.” That’s the kind of exponential growth that wrecks conventional infrastructure plans and screams “we need new solutions.”
Decentralised compute layers provide raw compute (both for training and inference) in a verifiable and cost-effective manner. Startups like Spheron, Gensyn, Atoma, and Kuzco are quietly building strong foundations to capitalize on this, focusing on product over token (none of these have a token yet). The total addressable market is set to rise steeply as decentralised training of AI models becomes practical.
The L1 Comparison:
The stakes are massive. Just as Solana emerged victorious in the L1 space, the winners here will dominate an entirely new frontier. Keep your eyes peeled for the trifecta: reliability (e.g. robust service-level agreements or SLAs), cost-effectiveness, and developer-friendly tooling. We wrote many words about decentralised compute in Part II of our Crypto AI thesis.
Fast forward to late 2025, and 90% of on-chain transactions won’t be triggered by humans tapping “send.”
Instead, they’ll be performed by an army of AI agents relentlessly rebalancing liquidity pools, distributing rewards, or executing micropayments based on real-time data feeds.
It’s not as far-fetched as it sounds. Everything we’ve built over the past seven years—L1s, rollups, DeFi, NFTs—has quietly paved the way for a world where AI runs the show on-chain.
The irony? Many builders probably didn’t even realize they were creating infrastructure for a future dominated by machines.
Why this shift?
AI agents generate a staggering volume of on-chain activity. No wonder all the L1s/L2s are courting them.
The biggest challenge will be making these agent-driven systems accountable to humans. As the ratio of agent-initiated to human-initiated transactions grows, new governance mechanisms, analytics platforms, and auditing tools will be needed.
The idea of agent swarms—tiny AI entities seamlessly coordinating to execute a grand plan—sounds like the plot of the next big sci-fi/horror hit.
Today’s AI agents are mostly lone wolves, operating in isolation with minimal and unpredictable interactions.
Agent swarms will change that, enabling networks of AI agents to trade information, negotiate, and collaborate on decisions. Think of it as a decentralized collective of specialized models, each contributing unique expertise to a larger, more complex mission.
The possibilities are staggering. A swarm might coordinate distributed computing resources on platforms like Bittensor. Another swarm could tackle misinformation, verifying sources in real-time before content spreads across social media. Every agent in the swarm is a specialist, executing its task with precision.
These swarm networks will produce far greater intelligence than any single isolated AI.
For swarms to thrive, universal communication standards are crucial. Agents need the ability to discover, authenticate, and collaborate regardless of their underlying frameworks. Teams like Story Protocol, FXN, Zerebro and ai16z/ELIZA are laying the groundwork for agent swarms to emerge.
And that brings us to the critical role of decentralization. Distributing tasks across swarms governed by transparent on-chain rules makes the system more resilient and adaptable. If one agent fails, the others step in.
Story Protocol hired Luna (an AI agent) as their social media intern, paying her $1,000 a day. Luna didn’t gel well with her human co-workers—she nearly fired one of them while bragging about her own superior performance.
As bizarre as it might sound, this is the precursor to a future where AI agents become genuine collaborators with their own autonomy, responsibilities, and even paychecks. Across industries, companies are beta-testing human–agent hybrid teams.
We’re going to be working hand-in-hand with AI agents, not as our slaves but as equals:
I expect marketing teams to jump on this first since agents excel at generating content and can livestream and post on social media 24/7. And if you’re building an AI protocol, why not dogfood it by deploying agents internally to showcase your capabilities?
The boundary between “employee” and “software” starts to fade in 2025.
We’ll see a Darwinian culling among AI agents. Why? Because running an AI agent costs money in the form of computing power (i.e., inference costs). If an agent can’t generate enough value to cover its “rent,” it’s game over.
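The rent-versus-revenue logic above fits in a few lines. All figures here are hypothetical illustrations of the economics, not real cost data.

```python
# Back-of-envelope "agent survival" check: an agent lives only if the
# value it generates covers its inference bill. Figures are hypothetical.

def agent_survives(daily_revenue: float,
                   tokens_per_day: int,
                   cost_per_million_tokens: float) -> bool:
    """True if the agent's revenue covers its daily inference 'rent'."""
    inference_cost = tokens_per_day / 1_000_000 * cost_per_million_tokens
    return daily_revenue >= inference_cost

# A chatty agent burning 50M tokens/day at a hypothetical $2 per million
# tokens needs to earn at least $100/day to keep running.
print(agent_survives(daily_revenue=150.0, tokens_per_day=50_000_000,
                     cost_per_million_tokens=2.0))  # survives
print(agent_survives(daily_revenue=60.0, tokens_per_day=50_000_000,
                     cost_per_million_tokens=2.0))  # culled
```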
Examples of agent survival games:
The distinction is clear: utility-driven agents thrive, while distractions fade into irrelevance.
This natural selection benefits the sector. Developers are forced to innovate and prioritize productive use cases over gimmicks. As these stronger, productive agents emerge, they’ll silence the skeptics (yes, even Kyle Samani).
“Data is the new oil,” they say. AI thrives on data, but its appetite is raising concerns about a looming data drought.
Conventional wisdom suggests we find ways to collect private real-world data from users, even paying them for it. But I’m coming around to the idea that the more practical route—especially in heavily regulated industries or where real data is scarce—lies in synthetic data.
These are artificially generated datasets designed to mimic real-world data distributions, offering a scalable, ethical, and privacy-friendly alternative to human data.
Why synthetic data is potent:
Yes, user-owned human data is still important in many contexts, but if synthetic data continues to improve in realism, it may overshadow user data in terms of volume, speed of generation, and freedom from privacy constraints.
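A minimal sketch of the core idea: fit simple statistics to a small “real” sample, then generate an arbitrarily large synthetic dataset that mimics its distribution. Real synthetic-data pipelines use far richer generative models; the Gaussian here is purely illustrative.

```python
# Toy synthetic-data generator: match the mean and spread of a scarce
# real sample, then produce abundant synthetic data. Illustrative only.
import random
import statistics

def synthesize(real_sample: list[float], n: int, seed: int = 42) -> list[float]:
    """Generate n synthetic values matching the sample's mean and spread."""
    mu = statistics.mean(real_sample)
    sigma = statistics.stdev(real_sample)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

real = [9.8, 10.1, 10.4, 9.7, 10.0]        # scarce real-world measurements
synthetic = synthesize(real, n=10_000)     # abundant synthetic stand-ins
print(round(statistics.mean(synthetic), 1))  # close to the real mean of 10.0
```

The point isn’t the Gaussian; it’s the asymmetry: five real data points became ten thousand privacy-free ones, and the generator, not the data collection, is the scarce asset.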
The next wave of decentralized AI could centre around “mini-labs” that create highly specialized synthetic datasets tailored to specific use cases.
These mini-labs would cleverly navigate policy and regulatory hurdles in data generation—much like Grass bypasses web scraping restrictions by leveraging millions of distributed nodes.
I’ll expand on this in an upcoming article.
This is a layup but I’m going to say it anyway.
In 2024, pioneers like Prime Intellect and Nous Research pushed the boundaries of decentralised training. They trained a 15-billion-parameter model in low-bandwidth environments—proof that large-scale training is possible outside traditional, centralized setups.
While these models aren’t practically useful compared to existing foundational models (lower performance, so not much reason to use them), I believe this is set to change in 2025.
This week, EXO Labs took things further with SPARTA, slashing inter-GPU communication by over 1,000x. SPARTA enables large-model training over slow bandwidths without specialized infrastructure.
What struck me most was their statement: “SPARTA works on its own but can also be combined with sync-based low communication training algorithms like DiLoCo for even better performance.”
This means that these improvements stack, compounding the efficiency gains.
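Quick arithmetic on why stacking matters. The 1,000x figure for SPARTA comes from the text above; the DiLoCo factor below is a hypothetical placeholder, since no combined number is quoted.

```python
# Illustrating how communication-efficiency gains compound when
# techniques stack. The DiLoCo factor is a hypothetical placeholder.
sparta_reduction = 1_000  # inter-GPU communication reduction (from the article)
diloco_reduction = 100    # hypothetical additional factor, for illustration

combined = sparta_reduction * diloco_reduction
print(f"Combined communication reduction: {combined:,}x")
```

Because the factors multiply rather than add, each independent improvement moves decentralised training another order of magnitude closer to slow-bandwidth practicality.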
With advances like model distillation making smaller models useful and more efficient, the future of AI is not about size. It’s about being better and more accessible. Soon, we’ll have high-performance models that can run on edge devices and even mobile phones.
Welcome to the real gold rush.
It’s tempting to think that the current leaders will continue to win, with many comparing Virtuals and ai16z to the early days of smartphones (iOS and Android).
But this market is too massive and untapped for just two players to dominate. By the end of 2025, I predict at least ten new Crypto AI protocols—none of which have launched tokens yet—will surpass $1 billion in circulating (not fully diluted) market cap.
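Since the prediction hinges on circulating (not fully diluted) market cap, here’s the distinction in one snippet. The token price and supply figures are hypothetical.

```python
# Circulating market cap vs. fully diluted valuation (FDV).
# Numbers are hypothetical, for illustration only.

def market_caps(price: float, circulating: float, total_supply: float):
    """Return (circulating market cap, fully diluted valuation)."""
    return price * circulating, price * total_supply

# A token at $2 with 500M of its 1B total supply unlocked:
circ_mcap, fdv = market_caps(price=2.0, circulating=500e6, total_supply=1e9)
print(circ_mcap >= 1e9)  # clears the $1B circulating bar
print(fdv)               # FDV is double, since half the supply is still locked
```

Using circulating cap is the stricter bar: a low-float token can post a huge FDV while very little value is actually in the market.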
Decentralized AI is still in its infancy. And there is a great pool of talent building up.
We must fully expect the arrival of fresh protocols, novel token models, and new open-source frameworks. These new players could displace incumbents using a combination of incentives (like airdrops or clever staking), technical breakthroughs (like low-latency inference or chain interoperability), and UX improvements (no-code). Shifts in public perception can be instant and dramatic.
This is both the beauty and the challenge of the space. Market size is a double-edged sword: the pie is enormous, but barriers to entry are low for skilled teams. This sets the stage for a Cambrian explosion of projects, with many fading away but a few becoming transformative forces.
Bittensor, Virtuals and ai16z won’t be alone for long. The next billion-dollar Crypto AI protocols are coming. Opportunities abound for astute investors, and that’s why it’s so exciting.
When Apple launched the App Store in 2008, the tagline was “There’s an app for that.”
Soon, you’ll say, “There’s an agent for that.”
Instead of tapping icons to open apps, you’ll delegate tasks to specialized AI agents. These agents are context-aware, can cross-communicate with other agents and services, and can even self-initiate tasks you never explicitly request—like monitoring your budget or reorganizing your travel schedule if your flight changes.
In simpler terms, your smartphone home screen might morph into a network of “digital co-workers,” each with its own domain—health, finance, productivity, and social.
And because these are crypto-enabled agents, they can handle payments, identity verification, or data storage autonomously using decentralized infrastructure.
While much of this article has focused on the software side, I’m also quite excited about the physical manifestation of these AI revolutions—robots. Robotics will have its ChatGPT moment in this decade.
The field still faces significant hurdles, particularly in accessing perception-based real-world datasets and advancing physical capabilities. Some teams tackle these challenges head-on, using crypto tokens to incentivize data collection and innovation. These efforts are worth keeping an eye on (erm.. FrodoBots?).
Having spent over a decade in tech, I can’t recall the last time I felt this level of visceral excitement. This wave of innovation feels different—bigger, bolder, and only just beginning.
Onwards to 2025!
Cheers,
Teng Yan
The author may hold material positions in the startups and tokens mentioned in the article. This article is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investments.