Suman Kanuganti: Why Telcos, Not Tech Giants, Will Power the AI Grid
Telcos may soon evolve from connectivity providers into the infrastructure layer for personalized, identity-bound AI memory.
Suman Kanuganti
Remember when you had to hang up before hitting your talk-minute limit?
Then came unlimited talk, unlimited text, and unlimited data.
Telcos did much more than “sell connectivity.” They taught the world to think in units of digital value, turning bandwidth into a product that scaled to billions of people.
Now, something new is entering the system.
It’s not about voice, text, or content streaming.
It’s about AI memory.
If you’re at Nvidia GTC this week, you’re already hearing about it.
What’s emerging at GTC is a new kind of grid. NVIDIA is calling it the “AI Grid”: a framework that moves packets of intelligence seamlessly across networks.
If the power grid delivers electricity and the internet delivers data, the AI Grid delivers memory that can be distributed, activated, and personalized across networks. Not just data tied to a device, but memory tied to identity.
Given the players who own the networks, it should be no surprise that telcos are in the best position to build this.
The AI grid is built on memory
LLMs, today’s dominant AI inference architecture, run on half a brain. They have reasoning, language, and knowledge, but they lack memory: the infrastructure for experience, continuity, and identity. They can answer questions, yet they do not improve over time or remember you.
One mechanism to “fake” memory is simply to push more and more context to the LLM, but stuffing context is slow and inefficient. A better mechanism is modeled on the human brain: the formation of long-term memory, episodic and temporal memory, semantic and working memory, and self-referential thought and continuity.
These memory layers live not in the context window but in the models themselves, creating Personalized LMs (PLMs). PLMs attach memory directly to a person, humanoid, or connected device. They are lightweight, memory-first AIs that can move through the AI Grid effectively and at scale: identity-bound intelligence systems designed for precision, privacy, and programmability.
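To make the layered-memory idea concrete, here is a minimal sketch of an identity-bound memory store with the episodic/temporal, semantic, and working layers described above. Every name and structure in it is an illustrative assumption for this article, not Personal AI’s or any vendor’s actual implementation; a production system would use embeddings, indexes, and model-level integration rather than simple lists and dictionaries.

```python
from collections import deque
from dataclasses import dataclass, field
import time

@dataclass
class MemoryEvent:
    """A single remembered interaction, stamped for temporal recall."""
    text: str
    timestamp: float = field(default_factory=time.time)

class PersonalMemory:
    """Illustrative identity-bound memory with three layers:
    episodic/temporal, semantic, and working memory.
    All names here are assumptions for illustration only."""

    def __init__(self, identity: str, working_size: int = 5):
        self.identity = identity                   # memory is bound to a person or device
        self.episodic: list[MemoryEvent] = []      # long-term, time-ordered experiences
        self.semantic: dict[str, str] = {}         # distilled, durable facts about the owner
        self.working = deque(maxlen=working_size)  # short-term context, oldest items evicted

    def observe(self, text: str) -> None:
        """Record an interaction into episodic and working memory."""
        event = MemoryEvent(text)
        self.episodic.append(event)
        self.working.append(event)

    def learn_fact(self, key: str, value: str) -> None:
        """Consolidate a durable fact into semantic memory."""
        self.semantic[key] = value

    def recall(self, keyword: str) -> list[str]:
        """Retrieve past experiences mentioning a keyword.
        (A simple scan; a real system would use embeddings or an index.)"""
        return [e.text for e in self.episodic if keyword.lower() in e.text.lower()]

mem = PersonalMemory(identity="+1-555-0100")
mem.observe("Discussed the Q3 broadband rollout with Dana.")
mem.learn_fact("home_city", "San Francisco")
print(mem.recall("broadband"))  # ['Discussed the Q3 broadband rollout with Dana.']
```

The design choice worth noting is the separation of stores: working memory is small and bounded (like a context window), while episodic and semantic memory persist and grow with the owner, which is what lets the AI “remember you” across sessions.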
Just like how data plans gave us access to the internet, token plans will give us access to our own AI memory. And those tokens will not just buy computation. They will activate your identity layer across the grid. And once you’ve got it, it will set new standards and expectations around any application experience that relies on personalization, from your apps and devices to smart homes and cars.
The AI Grid runs on telco rails
Interestingly, the infrastructure we need for this future already exists. It’s called the telco network.
AI memory isn’t served best from centralized clouds. It needs to sit far closer to the customer: near the tower, near the device, near the person. That proximity delivers the speed, reliability, and trust telcos are built on. And when memory is attached to identity, trust is non-negotiable.
Telcos already deliver services with deterministic quality where you can rely on consistent experiences. And they already manage identity, authentication, and billing at global scale. They already know who you are, securely and persistently. Extending that identity layer into AI memory is a natural evolution.
It’s already happening
This isn’t theoretical. It’s underway, and this year’s GTC didn’t shy away from the topic.
NVIDIA is positioning the AI Grid as the next frontier for distributed intelligence. Its vision is to move AI workloads onto the networks themselves. Intelligence without memory is stateless; the solution is distributed AI paired with distributed, identity-bound memory.
Call your number. Reach your AI.
Here’s what it might look like.
Today, when you call your own number, you reach voicemail. Tomorrow, you call that same number and reach your AI.
Not a chatbot in the cloud. Your memory. Your personal intelligence, available on demand. It remembers your information, conversations, and unique experiences wherever you integrate it. It does not respond generically, because it is an extension of you.
This is the next phase of communication infrastructure. You can already imagine it: unlimited minutes, messages, and memory. Unlimited access to your own persistent intelligence layer.
Why telcos can’t sit this one out
Many telcos are already building towards this. They understand that the next ARPU expansion will not come from data consumption alone, but from AI-based services. In the near future, subscribers will start asking why their phone plan doesn’t come with AI. The default expectations will quickly shift.
Once people experience AI that remembers them, they’ll never go back to AI that doesn’t. And the telcos that deliver it won’t just be carriers anymore; they’ll be intelligence providers.
Suman Kanuganti is a San Francisco-based AI entrepreneur who has been pioneering practical AI applications for over a decade. As Co-Founder and CEO of Personal AI since 2020, he has worked with large telco and retail partners globally, innovating how enterprises deploy AI at scale, including internal productivity use cases, new products, and serving the large SMB market. Personal AI is Nvidia's top independent software vendor (ISV) for telco, championing Personal Language Models (PLMs) that deliver 10-30x better cost efficiency than traditional large language models. This Expert Opinion is exclusive to Broadband Breakfast.
Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.