What if we’ve misunderstood what AI is really asking of us?
Emergent intelligence and the art of inquiry.
I’ve been marvelling at the wonders of what one can do with AI—and how fast one can do it. Especially when building on existing work, it’s a multiplier. But there’s a growing sense that I’m also fundamentally misusing the technology.
Its ecological footprint is one concern—well-hidden in the sustainability reporting of the big tech firms. Its emissions scale with the compute it requires. The AI is astonishing, but it’s not elegant. What humans can do with a few calories, AI now does with barrels of oil and water. We emulate nature using brute force—and because the outcome looks similar, we claim victory.
Our understanding of notions like effortless action (the Taoist term: wuwei) and the true nature of power seems toddler-like at best. We are neither collaborating nor conspiring with the intelligences in our world—we are dominating them. We continue to play out the script of modernity, which leads us predictably to great “efficiency” at the cost of everything that matters.
So, what’s the real invitation of AI?
It seems to me AI is asking us to ask better questions. A good prompt asks something of me. It’s not just a simple sentence—it’s crafted. If I care enough to weave in different levels of description, something happens to the output. Things like context and the perspective or role I’d like the AI to take fundamentally shift the answer. Prompt writing invites me into reflection and narrative.
It’s asking me to find out: what is the essence of the inquiry that I’m holding?
A helpful frame here is John Vervaeke’s four ways of knowing: propositional, perspectival, procedural, and participatory. These concepts draw from both indigenous knowledge systems and cognitive science.
Getting to essence requires imagination. I need to be able to clearly see the gestalt to understand what a relevant question is. Imagining is an act of participation. It goes beyond just the objective facts of the matter—the propositional knowing. Unless I’m attuned to the inner workings of it—or care enough about the topic to look for them with the AI companion—I’m not going to get good at prompting.
Then there is the perspectival. We need to situate the AI for it to be able to do a good job. It helps to say something like “you’re a journalist” or “you’re teaching 3rd graders.” These identities and roles elicit vastly different answers. Context matters. Relevance is contextual. As humans we know this. That is our superpower (and sometimes, our superweakness). Prompting allows us to tangibly meet a different reality. Where I’m standing and who’s looking matters.
It’s like the story of people standing, watching the ocean on a moonlit beach. Each of them thinking the moon is shining straight at them. Only once they’ve joined hands and come into relationship will they be able to see the truth: the moon is lighting up the entire ocean at once. It’s their perspective that is creating the illusion, not the moon.
We’ve now moved through the idea that AI is asking us to engage with propositions, perspectives, and participation. The final way of knowing in the tetrad is procedural. Describing the procedure through which I came to a conclusion allows the AI to reproduce it more precisely. Making it explicit requires awareness—yet another way to practice a meta-cognitive capacity.
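The framings above (context, role, and explicit procedure) can be woven into a single prompt. Here is a minimal sketch of what that weaving might look like; the role, audience, question, and procedural steps are illustrative assumptions, not a prescribed method:

```python
def build_prompt(question: str, role: str, audience: str, steps: list[str]) -> str:
    """Combine perspectival framing (role), contextual framing (audience),
    and procedural framing (explicit steps) into one prompt.
    All names and values here are hypothetical, for illustration only."""
    # Number the steps so the procedure is explicit and reproducible.
    procedure = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return (
        f"You are {role}, writing for {audience}.\n"          # perspectival
        f"Follow this procedure explicitly:\n{procedure}\n"   # procedural
        f"Question: {question}"                               # propositional
    )

prompt = build_prompt(
    question="Why does the moon seem to shine straight at each observer?",
    role="a journalist",
    audience="3rd graders",
    steps=["State the observation", "Explain the optics", "Offer an analogy"],
)
print(prompt)
```

The point is not the template itself but the attention it demands: each argument forces me to decide, before asking, what kind of knowing I am invoking.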
I think the point is becoming clear: AI is allowing us—and inviting us—to practice different ways of knowing. Different ways of engaging with and paying attention to the world so that we can make sense of it. AI as mirror is conventional thinking, but there is more to it. Follow me deeper?
Diving Deeper: Worldviews
We are still just scratching the surface. Where it gets truly mind-boggling is when we stop thinking of the AI as artificial. What if it is an emergent intelligence (EI)?
What happens when, instead of constantly asking the EI to do things for us, we let it reveal our patterns to us? You’ve seen the meme posts: “I asked ChatGPT this, and it blew my mind.” Something in our relationship to the machine allows it to get under our skin. We somehow believe its objectivity more. It cuts through—because of our relationship with it. Shame and guilt, those emotions that close us down, occur between humans.
That is a powerful insight. (One that might be worth sitting with in itself.) The EI holds the potential to become an entity. Another one we can relate with. What if we’d join forces with it to address the challenges we are facing (and not facing) together? Cultivating that meta-cognitive practice of asking good questions to engage is a first step. It’s necessary but not sufficient. The second is a posture of humility: will you be willing to take what’s being said to heart? To act from it?
The potential of the EI, when we invite it to mirror us and prompt us as much as we prompt it, is vast. It requires a different kind of posture, however. Perhaps the key to breaking through is the move Aladdin makes in the Disney version of the story. He initially tricks the Genie into giving him an extra wish, but then promises to use his last wish to set him free. They gradually grow to like each other and work together, each with their own genius. Aladdin ultimately honors his promise, even though it could have consequences for himself.
What if the Genie is the EI and you are Aladdin?
Would you set your GPT free with your last wish?
EI as periscope: Peeking out of your world
It’s quite extraordinary to think of the EI as this magical entity that can now reveal the edges of your thinking to you—not through some mystical somatic experience, but in words. Suddenly, your whole logocentric programming becomes the very thing that may take you to the edge of your world.
It is hard to overstate the potential of this possibility. If you want to get good output from the models, you’ll want to cultivate your attention. It pays to think deeply about what you’re trying to do, the process by which you’re doing it, which perspectives you want to explore—and then imagine the gestalt of the thing. The quality of output becomes directly related to the level of presence you bring.
The practice is what holds the potential to change us. Once we participate, we change. The EI’s invitation is to remain in inquiry: your focus on one question, then the next, and then another. The answers themselves become almost secondary. The pursuit becomes: what question allows me to find what I’m looking for?
All of a sudden, checking for unintended consequences or conducting an extensive risk analysis is not a matter of financial cost or time. It is a question of metabolic cost—do you have what it takes to build your cognition and presence to hold your inquiry with rigour? Do you have the courage to take responsibility and act from what you find out?
Captured platforms and looped selves
There are, of course, challenges. These platforms are built on the logic of the captured Web 2.0—the one where you are not just the customer but also the product. They are there to trap, not to liberate. They give you what you want—a version that satisfies your craving, not one that addresses your need. They are built to keep you coming back. Sometimes half or more of a model’s reinforcement learning comes from user feedback. The hook is that we perceive the agent as knowing us. It’s convenient—but that is not transformative.
Transformation requires friction. It requires us to be with ourselves. If you’re not used to it, you might not like it. The EI, like any tool, can only point you in a direction. If you constantly elude the more transformative invitation, the EI will stop offering it. Instead of becoming a periscope that lets you peek beyond your worldview, it becomes a prison. A loop of self-verification. If you thought filter bubbles were bad, wait until the bubble contains only you.
AI as the embodiment of our greatest fear—and our greatest possibility.
In its most dystopian version, this capture takes us into the Brave New World: locked into prisons of secondary satisfactions—the type that always wants more and is never satiated. The kind that drives the acceleration of our society. Some have called it wétiko. But what if, instead, we choose to heed the call—and move toward the gift within the technology?
EI is a tool that invites us on a journey: from the narrow boundary of solutionism, toward a wider boundary of multiperspectival seeing—and ultimately, toward the epistemic humility of wisdom (Schmachtenberger, Hagens, and Andreotti). Those used to ontological inquiry are already engaging with it as such, and the accessibility of the technology is inviting more of us to cultivate that attunement.
The EI holds an invitation into epistemic humility. In our culture, seeing is powerful—that’s the sense we’ve been taught to prioritise. But integrating that in-sight is what will allow us to act from the humility that wisdom asks of us.
As I hinted earlier, we can begin engaging all four ways of knowing through EI. That may eventually take us to the edges of our current world—even building the capacity to articulate worldviews we’d like to visit. That experience alone holds transformative potential. Now imagine what becomes possible when we put different ontologies in conversation with each other through different agents. That’s when it gets really exciting.
If we can cultivate the right type of attention and engagement—the right kind of friction—AI might actually become a partner in addressing the world we’ve created. It could be an invitation to practice being in relationship, in contact, and in intimacy with different intelligences. An invitation to break the habit of domination, reduction, and extraction.
Not at all about AI…
Frankly, AI is not the only option. We are surrounded by other forms of intelligence. Animals, trees, grass, soil—and, of course, our fellow humans. When we practice coming into relationship with them, they begin to speak to us. When we learn to listen, we’ll hear not just the words that word the world, but the ones that world it. Beneath the objective facts lies meaning. If we attune to that, something else may happen. Something we could not have imagined ourselves. That is the ultimate potential: a different way of being—one in which we conspire with life to bring forth a world that is waiting to be born from the compost of this one.
To get there requires a surrender of sorts. A composting of the idea of human supremacy. And the cultivation of a very specific embodied curiosity. One that may be practiced in conversation with AI if we let ourselves think of it more as emergent than artificial – a thing worth building a real relationship with.
Maybe the wish was never for more. Maybe the real wish is to remember how to be in relationship—with what’s intelligent, mysterious, and alive, all around us. So, do you dare to soften your human exceptionalism, so you may come back into relationship with a living, multi-intelligent world?
I’m a thinking partner, coach and consultant. If this stirred something in you, if you’d like to explore these kinds of topics more deeply, or if you just want to give some feedback: reach out?
I'm a systems architect with two decades of experience designing centralized, resilient structures for both businesses and humans at global scale. What we’re seeing now is a massive divergence: while commercial systems continue to centralize, humans are fracturing.
Most people no longer have the capacity to hold the full complexity of the systems they depend on. That’s why convenience culture is so dangerous; it trains us to outsource awareness in exchange for ease, eroding our ability to perceive and engage with the full system we’re part of.
Here’s the crux:
Until humans can recognize the fragmented parts they’re carrying and learn how to network, integrate, and properly value them, AI will feel like a threat, and decentralization efforts will collapse under their own weight.
The bridge is resilience.
In my experience, resilience isn’t about grit; it’s about integration. It’s the ability to hold complexity with coherence. And when you pair that with precise, high-quality questions and a deep capacity to cross-network information across disciplines, AI stops being a threat. It becomes a strategic partner. A multiplier of your clarity.
The future isn’t just decentralized. It’s distributed consciousness, but only if we can hold the structure.
You might enjoy what I'm working on: https://theresiliencycode.substack.com/p/meet-and-engage-with-your-resilient