What if AGI passes every test — except the one that matters?
- Strategy Planning & Business Consulting
- Apr 23
- 5 min read

My grandfather on my mother’s side was a priest. He built two churches in rural Romania during the 1940s–50s — a time when churches were being torn down. He stood in front of his community and said the USSR was a false construct that wouldn’t last more than a few decades. He was right. He died in 1973, when I was six — too young to really know him. My grandfather on my father’s side was a simple man, but with rare verticality — imagine Romania’s own rural Socrates, not in a toga, but in a sheepskin coat, with a scythe. He had three sacred things: his traditional Romanian costume, which he wore only on religious holidays; the King; and Jesus Christ.
I am not religious. But I believe we were created by something far greater than ourselves. I believe in truth. I believe in love. I believe in the individual’s freedom to seek meaning beyond noise, trends, ideologies, and KPIs. This is the context in which I write.
Although I frequently use AI, including platforms like ChatGPT, I remain mindful that technology must serve humanity, not the other way around. It should enhance our freedom, not dilute it.
OpenAI — creator of the machine you’ll likely ask to summarize this article — lays out its principles as follows:
• safety (don’t cause harm)
• neutrality (no political or religious ideology)
• respect (universal tolerance)
• factual truth
• simulated empathy
On paper, they make sense. But in reality, they often sound like corporate fire insurance: good to have, but not designed to protect the soul of the house.
And what is that soul? It’s not an ESG checklist. It’s not DEI KPIs. It’s not a rainbow logo in June.
It’s the unquantifiable. The unprofitable. The irrational spark that makes us human.
We live in a world led by the stock market. The ratio that rules them all is P/E: price over earnings. There is no P/H: price over humanity. ESG? Mostly a façade. Purpose? Repackaged for quarterly reports.
AI didn’t create this world. But it accelerates it.
OpenAI has admitted it doesn’t make money. Not yet. But its business case is clear: AGI is coming. And when it does, it’s expected to generate hundreds of billions in value. And maybe it will. But value for whom? And at what cost?
As I mentioned above, I believe in individual freedom — not in ideological quotas or hashtagged versions of it. I have nothing against race, gender, belief, or identity. But I do believe value must prevail — value rooted in truth and love. Not in enforced balance sheets of representation. Because when you impose diversity from the outside, you kill the organic dignity of the individual. And you reduce everything to branding. That’s not love. That’s theater.
And AI, if it is to mean anything, must begin not with productivity, but from the human — from our deepest underlying needs.
Not just functional needs. But metaphysical ones:
• the need to be seen — not labeled, not profiled, but witnessed in our full humanity
• the need to matter — not for what we produce, but for what we are
• the need to belong — not to a dataset, a feed, or a segment, but to life, to others, to truth
We’re at the edge of something we don’t understand.
• Musk predicts the singularity will arrive by 2029.
• Altman suggests we may already have crossed that threshold.
• Ray Kurzweil believes AGI will be achieved by 2029, leading to a full technological singularity by 2045.
• Ben Goertzel anticipates AGI between 2027 and 2031.
• Nick Bostrom, ever cautious, still admits superintelligence could arrive within decades.
Call it AGI, superintelligence, or simply the point of no return — it’s not just a technical milestone. It’s a civilizational one.
Just months after OpenAI’s o3 crossed the human-level threshold on the ARC-AGI-1 benchmark — scoring 87.5% in a high-compute setting — the ARC Prize Foundation struck back with a harder problem: ARC-AGI-2. The result? Most top-tier models crashed below 2%, including GPT-4.5, Claude 3.7, and Gemini 2.0. Meanwhile, average human panels hit 60%.
So maybe AGI is here. Or maybe it just passed one test… and failed the one that matters.
And I say: it doesn’t matter who’s right. Because the tipping point is not about capability. It’s about meaning (as I explored here).
So I ask again:
• If AI becomes 100x smarter than us, but has no moral compass — what then?
• If it mimics empathy but doesn’t feel?
• If it calculates truth but doesn’t care?
• If it optimizes everything except what makes us human — then what future are we building?
• We say we want AI to be aligned with human values. But which values?
• The ones we pretend to hold? Or the ones we die for?
I’m not here to criticize AI. I’m just here to ask the questions we stopped asking. Because without love, truth, and freedom, AI will be brilliant — and useless. Or worse: brilliant — and destructive.
What if these were the values guiding AI:
• Truth, not just factuality — because data without meaning is noise;
• Love, not just empathy simulations — because people feel what you don’t say;
• Dignity, not just tolerance — because every soul carries infinite weight;
• Meaning, not just purpose — because direction without depth leads nowhere;
• Conscious humility — because even superintelligence must kneel before the unknown.
Maybe there's no clear solution. But if we at least ask the question — we might find one.
If we don’t even dare to ask... then what are we really building?
I write this because I’m watching something extraordinary being built — without a soul. And I know I’m not alone. There are others — seen or unseen — who feel the same tremor. Who sense that something essential is being left behind. This isn’t a manifesto. It’s a reminder. That maybe, before we build the future, we should remember who we are.
And maybe that’s the final irony:
That a new priesthood — clad not in robes but in hoodies, math-laced and venture-backed — might be sketching the blueprint of our species’ soul…
But I still believe we can course-correct. Not by slowing down. But by looking deeper. I believe AI holds the potential to be profoundly good — for humanity. But only if it remains centered on humans — not just our functional needs, but our metaphysical ones. Our need for meaning. For truth. For love.
Purpose and direction are pragmatic words. Anglo-Saxon. Utilitarian. Scalable.
But what we really long for is a sense of meaning.
A sense of why.
And maybe that’s why I’m writing this. Not to deliver an answer,
but to disturb the silence where the right question still waits...
As Wittgenstein once said:
“Whereof one cannot speak, thereof one must be silent.”
And perhaps it’s in that silence that we’ll find something AI can’t compute.
Not a better algorithm. But a reason to still be human.
To remind us that the greatest upgrade isn’t superintelligence.
It’s becoming more human.
And for that, no GPU is needed.
Just a compass. A real one.
The kind that points inward.
And sometimes...
to silence.