There’s a quote I keep coming back to: “The future is already here, it’s just unevenly distributed.” I can’t think of a better description of where most organisations sit right now with AI.
Some companies are still mid-way through their digital transformation. Others are already running sophisticated AI workflows, redeploying people, and rethinking entire business models. Neither group should be judged. The gap between them isn’t really about technology, it’s about culture, leadership, and whether the people making decisions truly understand what’s possible.
That’s the conversation worth having.
Every major technology revolution has two ingredients: it feels like magic, and it becomes democratised. Cars. The internet. The smartphone. Each one felt impossible, then inevitable, then invisible. AI is no different, except for one thing. The time between revolutions is shrinking. And unlike most previous shifts, this one doesn’t require you to be a programmer to participate.
That’s the whole point of calling them large language models. Language is the most uniquely human capability we have. It’s how we pass knowledge from one generation to the next. And now, it’s also how we interact with the most powerful technology ever built. You don’t need to write code. You need to ask good questions and apply good judgment. That changes everything about who gets to be part of this shift.
I’ve been making this point for years, and it’s never been more relevant than right now. Technology can do two things: it can make what you already do faster and cheaper, or it can let you do something you could never do before.
Most established organisations default to the first. They automate existing tasks, reduce costs, tell a tidy story to the board. That’s fine. But the bigger opportunity, the one most companies are leaving on the table, is the second.
If you solve a painful problem well enough, your competitors might become your customers. That’s not a fantasy. That’s a business model.
A lot of AI systems still rely on humans behind the scenes more than people realise. I think that’s worth saying plainly, because it should create realism, not fear. The right approach is to use humans to bridge the gap between what AI can do and what your business actually needs. But be honest with yourself: that gap is closing. The amount of human intervention required is going to reduce — probably faster than you expect. When that happens, the smart organisations won’t just shrink headcount and congratulate themselves. They’ll redeploy people to higher-value work. That’s where the real productivity gains live.
One of my favourite things to talk about right now is what AI is doing to hidden talent inside organisations.
I was recently talking with a company that had an IT staff member who handled a lot of routine requests. Solid guy. Seemed stuck. Then he saw a demo of AI coding tools and came back after a weekend having built an entire application: a polished, well-authenticated tool that automated the management of supplier certificate renewals. Real problem. Real solution. Real business impact.
He wasn’t a “developer.” But he was the person closest to the problem. And AI gave him the tools to solve it.
This is happening everywhere. The people closest to your operational pain points are now able to build things. That’s exciting, and it changes who gets to contribute to the future of your business.
Here’s where I want to push you to think further ahead than most people are comfortable going.
We’re moving toward a world where AI agents don’t just assist humans, they interact with each other. Humans use agents. Corporate agents talk to other agents. An entirely new layer of the internet is being built right now, one where businesses need to structure their systems to be discoverable and usable by machines, not just people.
Think about what that means for how you’ll be found. Generative engine optimisation. How your company appears in the sources AI systems trust. How an AI agent will navigate your website or query your systems. These aren’t distant questions, they’re questions you should be asking today.
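Being “usable by machines” is less mysterious than it sounds. One concrete mechanism that already exists is structured data: machine-readable markup embedded in a web page so an AI system doesn’t have to guess what your business is. As a rough sketch only, here’s what schema.org JSON-LD might look like for a hypothetical company page (the company name, URL, and details are illustrative, not a template for any real business):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Logistics Co",
  "url": "https://www.example-logistics.com",
  "description": "Freight and supply-chain services for mid-sized manufacturers.",
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "sales",
    "email": "sales@example-logistics.com"
  },
  "areaServed": "AU"
}
```

A block like this sits invisibly in a page’s HTML. A human never sees it, but a crawler or an AI agent can read it directly instead of inferring your offering from marketing copy. That’s the shape of the shift: the same information, published twice, once for people and once for machines.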
The businesses that get ahead of this won’t just have smarter workflows. They’ll be the ones the agent economy transacts with.
Let me be direct about a few things I see organisations getting wrong.
Don’t start with technology. Start with pain. The most successful AI projects I’ve seen weren’t driven by “let’s save money with AI.” They were driven by a genuinely painful operational problem that someone decided to actually fix. When you start with the problem, the technology finds its place.
Stay tool-agnostic. This space is moving too fast to get locked in. Don’t sign three-year contracts with any single AI vendor right now. Month-to-month arrangements cost more on paper but give you the flexibility to move when something better arrives — and something better will arrive.
Accountability doesn’t disappear. Using AI doesn’t remove responsibility from the person doing the work. Whether you’re a lawyer, a marketer, a coder, or a CEO, you are accountable for the quality, security, and compliance of what you produce. AI is a tool. It is not an excuse.
Governance matters from day one. When more people can build solutions, you need standards: coding conventions, deployment rules, architecture patterns, and AI-assisted checks before anything goes to production. Build the guardrails early.
In times of genuine upheaval, the temptation is to react to headlines. To cut headcount so you can tell an AI story to investors. To chase the newest tool instead of solving real problems.
What I’ve seen work in the companies that are genuinely ahead is this: they have clear principles, and they have honest partners who’ve solved similar problems elsewhere.
Principles keep you focused on outcomes instead of tactics. Partners bring context. And in a moment this fast-moving, context is everything.
The future really is already here. The question is which part of it you’re living in.
Steve Sammartino is a futurist, entrepreneur and author who speaks globally on technology, business and the future of work.