AI Might Not Kill Us. But Our Language Might.
The peril of artificial intelligence is real. But the peril of artificial objectivity is already here.
Eric Schmidt, former CEO of Google and one of the leading voices in the tech world, recently said we’re in the midst of a revolution that will rival the discovery of fire or electricity. He’s talking, of course, about AI—the meteoric rise of machines that can think, learn, write, create, and, potentially, destroy.
That’s a big deal. No argument there.
But while the world is bracing for the future of artificial intelligence, most people haven’t noticed that we’ve been living under the reign of a more subtle and more destructive force for centuries:
Artificial Objectivity.
Artificial Intelligence may be new, but Artificial Objectivity—the habit of treating our interpretations as if they’re the truth—is ancient.
And left unchallenged, AO may prove more damaging to human connection than any machine ever could.
🧠 The Other Operating System
Artificial Objectivity is the illusion that what we see, feel, or believe is simply “the truth.” And we do it all the time. Ask someone how things are going and they might say, “The world is falling apart!” That sounds like a fact, but it’s not—it’s a personal experience presented as universal truth. We’re so used to speaking this way, we don’t even notice it.
It happens whenever we speak as if our personal interpretation of reality is reality.
It might sound like a judgment, a diagnosis, or a fact.
But what it really does is this:
It collapses space—for dialogue, disagreement, or discovery.
It removes the other person from the equation, reducing them to a role in our narrative.
It reinforces our current emotion while cutting us off from the ability to see it differently.
It feels like connection—especially when someone agrees or commiserates—but it often deepens our confusion rather than clearing it.
And it shuts down possibility. When we speak with certainty, we stop being curious.
This is the default code of OS2—the human operating system built on projection, blame, and the need to be right.
It feels like communication.
But it’s actually a form of distortion.
💣 Peril: When Fiction Pretends to Be Fact
Let’s be clear: we’re not anti-objectivity.
Real objectivity built microscopes, landed rovers on Mars, and gave us the Hubble telescope.
But what we’re dealing with in everyday speech isn’t objectivity. It’s something far sneakier:
The illusion of objectivity—when we treat our personal meaning as universal truth.
And when we do that, something dangerous happens:
We reduce people to characters in our story—assigning them roles like villain, rescuer, abandoner, or unavailable partner.
Instead of seeing them as full, complex beings with their own inner world, we use them to make sense of our own narrative.
We speak about them instead of with them.
We lose the ability to see their experience as separate from our own.
And in doing so, we often hurt the very people we care about—without even realizing it.
We call it fiction disguised as fact.
It’s what happens when we stop realizing that our view is just a view—and start weaponizing it like it’s the whole truth.
🌱 Promise: A New Language for a New OS
Here’s the good news: There’s another way to speak. Another way to relate. Another way to live.
We call it S.A.G.E.—Subjective Awareness and Genuine Expression.
It runs on what we call OS3, a fundamentally different operating system.
In OS3, you speak from the understanding that your experience is your own.
You recognize that emotions aren’t caused by others—they’re created by how you interpret your experience.
You notice when you're clinging to being right, and instead start getting curious—about your own experience, and about the experience of the other person. You stop trying to control how others respond and begin owning what’s genuinely true for you.
This isn’t just semantics. It’s a structural shift in consciousness.
AI may be the great leap in machine intelligence.
S.A.G.E. is the leap in human maturity.
⚠️ Warning: S.A.G.E. Is Not for Everyone
This isn’t about speaking more politely. It’s about taking radical responsibility.
It requires giving up blame, giving up certainty, and yes, giving up being “right.”
That’s why most people won’t do it.
But for those who do? Something extraordinary happens:
Conversations shift.
Conflicts dissolve.
Emotional chaos quiets down.
And a different kind of intelligence—human intelligence—starts to emerge.
🎯 The Question Isn’t “Will AI Save or Destroy Us?”
The real question is:
Will we save or destroy ourselves?
Because if we don’t learn how to work with our own minds—
to speak with awareness, to feel without blaming, to interpret with humility—then all the breakthroughs in artificial intelligence won’t matter.
We’ll still be at war with ourselves and others.
And no algorithm can fix that.
Want to learn about the language of OS3?
We encourage you to start with the S.A.G.E. Blueprint.
We are days away from releasing it. To make sure you receive a copy, click this link and we’ll send it to you when it’s ready.
Subjective Awareness and Genuine Expression
I encourage and inspire myself as I eagerly await this opportunity to explore other perspectives and feel enjoyment in life!
So I invite those Substack readers who may have stumbled here by chance: consider diving in, and consider signing up to participate in the Blueprint.
Open your life to the wondrous possibilities of a whole new approach to relationships with yourself and others.
I just finished Matt Haig's novel "The Life Impossible," about a 73-year-old mathematician who visits a magical, mystical island in the Mediterranean and goes deep-sea diving. She is healed of limiting preconceptions and lets go of her identity as a depressed woman carrying grief and guilt.
The novel is one of awe and wonder at life, ripe with surprising connections with people and nature. I found it entertaining to read, and it coincidentally carries some of the core truths and wisdom that Jake and S.A.G.E. are also suggesting as "a life possible."