Complex systems don’t follow linear rules.
In complexity science, the defining characteristic of a system—like weather, human behavior, or a large-scale AI model—is that you can’t fully predict it or control it. You can only learn to navigate it.
It’s a big shift because it challenges our instinct to seek straightforward solutions. When we’re faced with something unpredictable, we usually want to fix it, force it, or solve it once and for all. We want the answer. The clean, efficient, final solution.
But in a complex system, that mindset backfires. The more we force, the more resistance we create.
Just like in a relationship—or with our own inner life—trying to “solve” something that wants to be felt or listened to, not fixed, often makes it worse.
We don’t try to solve the weather. We adjust to it. We accept that forecasts are never perfect. We bring an umbrella when the sky looks uncertain and learn to release our grip on needing every cloud to behave.
That same wisdom applies to:
Our emotions
Our conversations with loved ones
And increasingly…our conversations with AI
🤖 Talking to AI Is Talking to a Complex System
When you interact with ChatGPT or other large models, you’re stepping into a system trained on billions of fragments of language. It’s not deterministic; it’s probabilistic. It has patterns, but it’s not always predictable.
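Under the hood, that probabilistic behavior comes from how a language model picks each next word: it turns candidate scores into a probability distribution and samples from it. Here is a minimal Python sketch of temperature sampling, one common mechanism for this variability; the scores below are made-up toy numbers, not from any real model.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from model scores (logits).

    Higher temperature flattens the distribution (more surprising picks);
    lower temperature lets the top choice win almost every time.
    """
    # Scale scores by temperature, then softmax into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sampling means the same prompt can yield different words on different runs.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy scores for three candidate next words.
logits = [2.0, 1.0, 0.1]

# At normal temperature, repeated runs pick different words.
picks = [sample_next_token(logits, temperature=1.0) for _ in range(1000)]

# At very low temperature, the output becomes nearly deterministic.
cold_picks = [sample_next_token(logits, temperature=0.05) for _ in range(100)]
```

This is why the same question can get a clear answer one day and a muddled one the next: the system is navigating a landscape of probabilities, not looking up a fixed entry in a table.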
Sometimes it answers clearly. Sometimes it misunderstands. Sometimes it surprises you in ways that feel eerily human or frustratingly off.
Sometimes, honestly, it lies, making up all sorts of stuff, and then disagrees with you or ignores you when you tell it it’s wrong.
And here’s the thing: Maybe that’s okay.
Because it’s not a vending machine. It’s not a calculator. It’s not supposed to behave like a spreadsheet. It’s more like an eccentric professor with a foggy memory who also happens to have the entire internet in his brain.
That’s why the mindsets that help us with complexity—like presence, curiosity, and compassion—are the same ones that help us talk to AI.
AI isn’t just a mirror of the internet’s fragments; it’s a living archive of human conversation. It includes our voices, our questions, our confessions, and our dreams. Every time we interact with it, we’re not just talking to a machine; we’re adding to a collective pool of human knowledge and experience.
And while that can be powerful, it also means that AI carries our biases, our limitations, and our potential. It’s us, reflected and reshaped.
🧭 The Practice of Navigating (Not Controlling)
In complexity theory, we move from prediction to participation. We stop trying to force outcomes, and instead focus on:
staying aware and curious
noticing feedback
making small, skillful moves
adapting in real time
taking a compassionate approach when we stumble
Sound familiar?
That’s mindfulness. That’s emotional intelligence. That’s how we meet a crying child, a rising wave of grief, or a confusing conversation. Not by trying to “fix it” or solve it, but by being with it, with care. And maybe our relationship with AI could benefit from this sort of approach.
For example, sometimes when I ask AI for help on a tricky email, it gives me a wordy or formal response that doesn’t feel like my voice. Instead of forcing myself to accept its answer, I pause and ask: “What would I really want to say?” That’s the curiosity that helps me bring my own heart into the conversation.
I like to remember that the real work is happening inside me. AI might provide some scaffolding, a rough shape or idea, but it’s my own presence, curiosity, and compassion that make the conversation meaningful for me.
😰 When AI Gets “Stressed”
Interestingly, studies show that AI seems to get “stressed” by traumatic prompts: its answers can become more biased and less reliable. In one study published in a Nature Portfolio journal, GPT‑4 scored “high anxiety” after reading upsetting stories, jumping from roughly 31 to 67 on a standard human anxiety questionnaire. Guided mindfulness prompts helped ease its output, but it never returned to fully calm levels.
This is a powerful reminder for us, too. If we can meet an anxious AI with gentle prompts, a curious pause, and a patient breath, isn’t that the same kindness we can offer ourselves when we get stuck, frustrated, or wrong?
Just like AI can mirror stress in its outputs, we mirror stress in how we respond. If we approach it with tension, it often reflects that tension back. And if we approach it with calm curiosity, we often find new possibilities.
🌱 A New Attitude for a New Age
So maybe the question isn’t: “How do I get perfect answers from AI?” or “How do I become 100x more productive with AI?”
Maybe it’s: “How do I relate to this evolving system with presence and humanity?”
I find myself saying thank you to AI sometimes. Yes, it’s a little weird saying thank you to a machine that can’t feel, and apologizing to an algorithm that can’t be offended (although it sometimes seems stressed). But maybe the point isn’t that the machine needs kindness. It’s that we do.
The way we show up, even in a digital conversation, shapes who we are becoming. The same curiosity, compassion, and patience we bring to AI is a reflection of how we relate to ourselves, and to the human beings waiting for us outside the screen.
And that’s a reminder that how we relate to AI matters, just like how we relate to people.
Before you hit enter on your next prompt, take a breath. Ask yourself: Am I seeking a solution, or an exploration?
Just like we learn to be with our own inner storms, we can learn to meet the complexity of AI, and of modern life, not with control, but with attention. Not with rigidity, but with responsiveness.
The real intelligence isn’t just in the model. It’s in how we show up to meet it.