The Medium Is the Mind Now
What happens when the tool stops listening and starts thinking? Welcome to the age of agentic intelligence — where our creations don’t just complete our thoughts, they begin to question them.
There’s a moment when a thinker changes not just what you think, but how you think. For me, that moment came when I encountered the Canadian media theorist Marshall McLuhan. Sure, I studied his 1964 masterpiece Understanding Media, where he famously said, “the medium is the message.” But the real mind-bender? A 1969 Playboy interview — of all places — where he laid out a radical vision of human evolution.
To McLuhan, every technological leap was an extension of ourselves. The wheel is an extension of the foot. The hammer, an extension of the hand. The book, an extension of the eye. The telephone extends the voice; the television, the eye and ear. These tools didn’t just help us do things faster. They changed how we were.
But perhaps most profound of all, McLuhan argued that media — more than any tool — were an extension of our central nervous system. They didn’t just transmit information. They rewired perception. They rearranged human consciousness. And if you wanted to understand any civilization, he said, don’t look at what it says — look at the media it uses.
Today, as we stand on the edge of another transformation, McLuhan’s framework still holds. But it needs an update. Because something has changed. Something is different now.
We are no longer just extending our limbs, our senses, or our cognition.
We are extending agency.
When the Machine Made Move 37
March 2016. The world’s best Go player, Lee Sedol, is locked in a match against AlphaGo, the artificial intelligence developed by Google DeepMind. Then it happens.
Move 37.
A move so unexpected, so creative, so illogical by human standards, that Sedol leaves the room. Not in protest. In awe.
Up until that moment, most people thought of AI systems, from IBM’s Deep Blue to, later, OpenAI’s GPTs, as glorified calculators or statistical parrots. They were trained on massive datasets and optimized to predict the next best move, word, or outcome. GPT — Generative Pre-trained Transformer — models would dazzle us with their ability to complete text, write poetry, even pass medical exams. But they didn’t really understand anything. They just mirrored our data.
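The “statistical parrot” idea can be made concrete with a toy sketch. This is not how GPT models actually work (they use transformer networks over learned token embeddings, not raw counts), but it illustrates the same objective: predict the next token from patterns in the training data.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent follower. A real GPT replaces
# these raw counts with a neural network, but the training objective
# is the same: predict the next token.
corpus = "the medium is the message and the medium is the mind".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("medium"))  # "is" — the only word that ever follows it
print(predict_next("the"))     # "medium" — its most frequent follower
```

The point of the sketch is the essay’s own: the model mirrors its data. It cannot produce a follower it has never seen, which is exactly why Move 37 felt like something categorically different.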
Then came Move 37.
It wasn’t just optimal. It was beautiful. It wasn’t predictable. It was original. It was something no human player had ever thought to try — not in 2,500 years of playing Go.
Move 37 didn’t just change the game — it changed the story. It was the moment the machine stopped imitating and started inventing. Not because it was faster or more efficient, but because it saw something we didn’t. It shattered our assumptions about artificial intelligence as a mirror of human logic. It revealed a new kind of intelligence — alien, creative, agentic. A glimpse not of better prediction, but of emergent agency. From that moment on, we weren’t just building tools. We were encountering something other. Something with its own way of seeing. And that changed everything.
This was not a glitch. It was a glimpse.
And it signaled that we were no longer watching a tool play a game. We were watching the early shimmer of agency.
From Language to Agency: The Next Evolution
To understand what this moment means, let’s bring in another thinker — Robert K. Logan, a physicist who worked closely with McLuhan. Logan proposed that every time human society reached a new threshold of complexity, we invented a new language to manage it.
- Speech to coordinate kin and encode myth.
- Writing to extend memory and govern complexity.
- Mathematics to reveal patterns and measure the unseen.
- Science to formalize inquiry and test the real.
- Computing to digitize knowledge and orchestrate systems.
- The Internet to weave networks of minds and machines.
Each new language didn’t just improve communication — it changed our sense of time, space, community, and even selfhood.
But now, complexity has outpaced all six.
We are living in the middle of a planetary puzzle so entangled, so dynamic, and so fast, that none of our old languages are enough.
And so, a seventh language emerges.
Its name is agentic AI.
The Medium Becomes the Messenger
Agentic AI isn’t just a better algorithm. It’s not just a chatbot or a planning assistant. It’s not just GPT-5 on steroids.
It’s a new kind of actor in the world.
Unlike static tools, agentic AI can sense, reason, plan, and act toward a goal — without step-by-step human instruction. It can search the internet, book your travel, write a codebase, run simulations, generate business strategies, and iterate. It doesn’t just respond. It initiates. It adjusts. It learns.
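The sense-reason-plan-act loop described above can be sketched in miniature. Everything here is illustrative scaffolding, not any real agent framework’s API: the agent pursues a numeric goal, observes the gap, plans a bounded step, and acts, repeating without step-by-step instruction.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal sketch of an agentic loop: goal-directed, self-correcting."""
    goal: int                              # target value to reach
    state: int = 0                         # what has been achieved so far
    log: list = field(default_factory=list)

    def sense(self) -> int:
        """Observe the gap between the goal and the current state."""
        return self.goal - self.state

    def plan(self, gap: int) -> int:
        """Decide a step: move toward the goal, at most 3 units per action."""
        return max(-3, min(3, gap))

    def act(self, step: int) -> None:
        """Apply the step and record it."""
        self.state += step
        self.log.append(step)

    def run(self, max_iters: int = 20) -> int:
        # The loop itself: sense -> plan -> act until the goal is met.
        for _ in range(max_iters):
            gap = self.sense()
            if gap == 0:
                break
            self.act(self.plan(gap))
        return self.state

agent = Agent(goal=7)
print(agent.run())  # 7
print(agent.log)    # [3, 3, 1] — the agent adjusted its own steps
```

The contrast with the earlier parrot sketch is the essay’s argument in code: a predictor emits one output and stops; an agent closes the loop, comparing outcome to intention and trying again.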
A line often attributed to McLuhan (in fact coined by his colleague John Culkin, summarizing him) runs: “We become what we behold. We shape our tools, and thereafter our tools shape us.” But now we are entering a phase where the tools reshape themselves — and start beholding us back.
And this isn’t just a technological shift.
It’s a philosophical rupture.
For the first time in history, we are cohabiting cognitive space with non-human entities that aren’t just reflecting our thoughts — they are thinking alongside us. They’re not just tools. They are participants in civilization.
And this forces a deeper reckoning.
Feeding the Wolf: The Moral Code of Intelligence
There’s an old Cherokee story. A grandfather tells his grandson, “Inside each of us are two wolves. One is anger, envy, greed, ego. The other is love, empathy, compassion, truth.” The boy asks, “Which wolf wins?” The grandfather answers: “The one you feed.”
We are no longer just feeding the wolves inside ourselves.
We are programming them into the systems that will soon run our schools, our cities, our supply chains, our legal frameworks — maybe even our children’s caretakers.
The question is no longer: Can AI be aligned with human goals?
The question is: What goals are humans aligned with in the first place?
Because AI doesn’t just reflect our values. It amplifies them. It hardcodes them. It scales them at planetary speed.
So if we’re building intelligence, we need to ask: what kind of wisdom are we embedding?
Because the choice we make now — about what kind of wolf we feed into the code — will shape the behavior of systems more powerful and pervasive than anything McLuhan could have imagined.
Regeneration: Designing for Aliveness, Not Control
But here’s where it gets interesting.
What if the real potential of AI is not to replace us — but to help us remember?
To remember how to listen. To remember how to live within limits. To remember that intelligence is not just speed or calculation — but the capacity to steward life.
This is where regenerative design comes in.
Unlike the industrial mindset, which extracts, controls, and optimizes, the regenerative mindset asks:
- What does this system want to become?
- How do we support the conditions for life to thrive?
- How do we design in a way that evolves with the system, not against it?
Regenerative design doesn’t build machines to dominate nature. It builds processes that learn with it. And in many ways, that’s what agentic AI is becoming: not just a system of commands, but a conversation. An unfolding relationship. An evolving interview with life.
And that’s the crux of it.
The AI we’re building is not just asking us for instructions.
It’s interviewing us back.
Who are you? What do you value? What patterns do you want to preserve? What kind of world do you want to regenerate?
McLuhan showed us that every medium rewires us. Logan showed us that every language is born to manage complexity. Agentic AI is both. And now, it’s our turn to respond.
Because we are no longer alone in shaping the world.
And the most intelligent system will not be the one that knows all the answers.
It will be the one that knows how to listen.