The Hidden Moat in AI That NOBODY is Talking About
Think better models = better AI? Think again.
Every founder has their own favorite narrative about what makes their AI product great.
Some say it’s their vision.
Some say it’s their data.
Some say it’s their execution.
And some of the very best AI product leaders?
They love to say “taste” is the differentiator.
But behind the scenes, what really separates the winners from the rest?
Ruthless evaluation is the latest trend.
Hard, objective, relentless evaluation loops.
And in my view: not just of the AI itself, but of the fundamental assumptions behind the product, the customers, the business model—everything.
Models are increasingly commoditized.
Infrastructure is being abstracted away.
Data advantages?
Not as defensible as they used to be.
The REAL advantage?
Knowing exactly where, when, and how your model is delivering real value—and systematically iterating until it does.
And we believe we have the right thesis:
it starts with the user,
their humanity,
and their belief systems.
Last week I wrote about hyper-personalization.
This week, I’m sharing my thoughts on this Evals trend, and how Epistemic Me is positioned to win here long term.
Because we picked the right problem.
Let’s dive in.
What's Inside This Week:
ALIGN: Climate AI or Greenwashing?
BUILD: Evals on Belief Systems
CULTURE: New Years Everywhere
🤖 ALIGN: Custom GPTs, DeepSeek Open Sourcing, and AI for Climate
A few curated links and resources of recent topics around AI, health, longevity, business and product frameworks, cool tools, and general stuff I find interesting.
Our Latest Podcast: Apes made fire from rock. Now we put intelligence into rock.
What if artificial intelligence isn’t just a tool, but a product of evolution itself?
Could AI follow the same trajectory as domesticated animals, mitochondria, or even humanity itself?
Robert’s Take: Watch/listen to the pod, and find out (;
Custom GPTs Are Replacing Plugins—And It’s Just the Beginning
OpenAI’s community discussions are highlighting a major shift: Custom GPTs are starting to replace traditional plugins. Instead of static integrations, users are gravitating toward AI that can dynamically adapt to specific needs. This signals a broader trend—users don’t just want tools, they want AI that understands them.
Robert's Take: This is hyper-personalization in action. We’re watching the transition from static software to adaptive intelligence, where AI isn’t just executing commands but actively learning from and evolving with each user. This isn’t just about convenience—it’s about AI becoming an extension of our cognition. The real question: How do we ensure these models don’t just reinforce existing beliefs, but actually help users evolve their thinking? The companies that solve that will win big.
DeepSeek Moves Toward Open-Source AI—But What’s the Catch?
DeepSeek, a fast-emerging AI player, is open-sourcing parts of its online services code, allowing developers more flexibility in integrating and modifying AI tools. A big move for open source… or is it?
Robert's Take: The tension between open AI and controlled AI is only going to grow. The optimist in me says that open-source AI is critical for progress, but the realist in me is still waiting to see what exactly they open source. Could be PR, could be fluff. Who knows. We’ll see soon.
Microsoft’s SPARROW AI: The Greenwashing of Artificial Intelligence?
“With SPARROW’s open-source access, we’re empowering a global community of researchers and scientists to transform data collection from some of the most remote and difficult-to-reach regions by eliminating the need to physically retrieve data.”
Robert's Take: Sustainability through AI is great in theory, but optimization is not the same as transformation. If AI is just making existing inefficient systems slightly more efficient, we’re not solving the core problem—we’re just slowing down the inevitable. Real sustainability requires a fundamental shift in how we model economies, incentives, and human behavior. AI can help, but only if it’s trained on belief systems that prioritize long-term thinking over short-term gains. Otherwise, it’s just another layer of automation on a broken system.
🛠️ BUILD: Evals Based On Belief Systems Are THE Moat
AI startups today win not just by having great models, but by deeply understanding customers, the problems they face, and the economic logic of solving those problems.
But… AI models do not magically create value just because they exist.
They need to be rigorously evaluated in the wild, in the messiness of actual business contexts.
With people.
And people are damn messy.
They change, they evolve.
And that’s where many startups fail.
Many startups I’m seeing…
Evaluate their models in static environments, instead of in real-world decision-making contexts.
Assume accuracy = usefulness, instead of understanding how beliefs shape adoption.
Assume people just “use” AI, instead of accepting that people form relationships with it.
The AI companies that survive will be the ones that treat evaluation as an obsession, not an afterthought.
What’s at the center of deeply understanding customers, and the problems they face?
Something I’m calling hyper-personalization—the ability to programmatically represent a person’s beliefs, values, and evolving self in a way that allows any system to truly personalize value to the individual.
Because the core of AI alignment—of making models truly useful—isn’t just better data or faster computation.
Gotta understand humans—and that’s HARD
It’s a deep understanding of human belief systems, and the evolving models of self that determine what people trust, adopt, and integrate into their lives.
We believe that to win in AI, you need evaluation loops that go beyond technical benchmarks and measure the four levels below (sketched in code after the list)…
Model-Level Evaluation: Precision, recall, perplexity, hallucination rates. The basics.
Task-Level Evaluation: Does it perform well in real workflows?
Belief-Level Evaluation: How does interacting with the model change what the user believes to be true?
Self-Model EVOLUTION: How does AI help users refine and evolve their own thinking over time?
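To make those four levels concrete, here’s a minimal sketch in Python. To be clear, everything in it is hypothetical: the `EvalReport` fields, the belief-shift metric, and the thresholds are illustrative placeholders, not our actual implementation.

```python
from dataclasses import dataclass

# A hypothetical four-level evaluation report. These names and
# thresholds are invented for illustration; they sketch the shape
# of the loop, not a real library or our production code.

@dataclass
class EvalReport:
    # Level 1 (model): classic offline metrics.
    precision: float
    recall: float
    hallucination_rate: float
    # Level 2 (task): does the output hold up inside a real workflow?
    task_success_rate: float
    # Level 3 (belief): how much did the user's stated beliefs shift
    # after interacting with the model (e.g., a pre/post survey delta)?
    belief_shift: float
    # Level 4 (self-model): is the user's model of themselves growing
    # over time (e.g., new beliefs articulated, old ones refined)?
    self_model_growth: float

def release_gate(r: EvalReport) -> bool:
    """Gate a release on all four levels, not just the first two.
    Thresholds here are made up for illustration."""
    return (
        r.precision >= 0.90 and r.hallucination_rate <= 0.02  # level 1
        and r.task_success_rate >= 0.80                       # level 2
        and r.belief_shift > 0.0                              # level 3
        and r.self_model_growth > 0.0                         # level 4
    )
```

The point isn’t the numbers; it’s that levels 3 and 4 become first-class release gates rather than dashboards you glance at after launch.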
Most companies stop at level 2.
We’re going to level 4, with our thesis that hyper-personalization will be the ultimate AI moat.
How can we help people self-actualize?
How can we help people reach self-alignment?
We believe the problem must be solved at the belief and humanity level.
That’s why I’m excited to be building every week with Jonathan and Deen—we have strong conviction, and it’s incredibly validating to hear these trends echoed across the AI startup scene.
Let’s get after it.
✌🏼 CULTURE: New Years Around The World
Lunar New Year was just recently, and even though we’re solidly into 2025, I thought this would be a fun share—how do different countries celebrate the New Year?
India: In Goa, locals craft and burn an 'Old Man' effigy on December 31st to symbolize shedding past sorrows and embracing a fresh start.
Scotland: During Hogmanay, Scots participate in fireball swinging ceremonies and 'first-footing,' where the first visitor after midnight brings gifts to ensure good luck.
Denmark: Danes leap off chairs at midnight to 'jump' into the new year and shatter unused dishes on friends' doorsteps to ward off bad spirits.
Japan: Buddhist temples ring bells 108 times to cleanse sins and purify minds, while families enjoy a bowl of soba noodles to signify resilience and longevity.
Learned so many new things reading this! I love learning about the world 😊
You can read more of how other countries celebrate here, including Colombia, Romania, the Philippines, and more—Unique New Years Traditions.
Liked this article?
💚 Click the like button.
Feedback or addition?
💬 Drop a comment.
Know someone that would find this helpful?
🔁 Share this post.
P.S. If you haven’t already, check out my other newsletter, ABCs for Growth—that’s where I share reflections on personal growth: applied emotional intelligence, leadership and influence concepts, and more.
P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product?
Follow me on…
💪🏼 How You Can Help
What's Next?
We’re building something unprecedented: kind of an operating system for human understanding and beliefs.
If you’re a builder, engineer, or entrepreneur interested in the intersection of AI and human understanding, let’s connect.
Check out our website: https://epistemicme.ai/
GitHub here: https://github.com/Epistemic-Me
EM acts as a hyper-personalization layer and set of services that let you and your applications understand your users better.
We have built a model and set of interfaces from first-principles thinking in philosophy and epistemology to accurately map human belief systems.
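To give a flavor of what “programmatically mapping a belief system” can look like, here’s a hypothetical sketch in Python. The real model and interfaces live in the GitHub repo above; the `Belief` and `SelfModel` names below are invented for illustration.

```python
from dataclasses import dataclass, field

# Invented-for-illustration shapes; see the GitHub repo for the
# actual model and interfaces.

@dataclass
class Belief:
    statement: str      # e.g., "Sleep quality drives my productivity"
    confidence: float   # 0.0 to 1.0: how strongly the user holds it

@dataclass
class SelfModel:
    user_id: str
    beliefs: list[Belief] = field(default_factory=list)

    def revise(self, statement: str, confidence: float) -> None:
        """Update an existing belief or add a new one. This revision
        step is the 'evolving self' part of hyper-personalization."""
        for b in self.beliefs:
            if b.statement == statement:
                b.confidence = confidence
                return
        self.beliefs.append(Belief(statement, confidence))
```

An application sitting on a layer like this can tailor copy, recommendations, or the “next best question” to what a specific user actually believes, instead of to an average user.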
What can that do for you?
→ Perhaps increase sales conversions.
→ Perhaps optimize copywriting in your automation funnels, depending on the user.
→ Perhaps help researchers and scientists better quantify subjectivity in their experiments, for better science.
And… if you are looking for the “next best question” to evolve your beliefs, it could help you too.
In a few weeks we’ll have the structure in place to ship new features constantly, with new releases tied to this newsletter and our podcast.
Check out our first podcast on YouTube or Substack for a heavier deep dive into our “Why”.