In this episode, hosts Robert and Jonathan take us through the latest developments in Epistemic Me’s AI health coach project. The team shares insights from recent user research, product roadmaps, and live coding updates—showcasing how AI can be used to mirror user beliefs, refine recommendations, and build trust in healthcare AI models.
From discussions on how to map human beliefs to ensuring AI coaches don’t hallucinate their way into irrelevance, this episode is a must-listen for entrepreneurs, product builders, and technologists pushing the boundaries of AI.
1. The Problem with Health AI Today: Uncertainty and Personalization
Most AI health solutions today operate on generic, one-size-fits-all advice. But the biggest hurdle for health behavior change isn't knowledge—it’s uncertainty.
🗣 “People know they should sleep more, eat better, and exercise, but what stops them is uncertainty—when will I actually see the benefit?” – Robert
🔍 Takeaway: Hyper-personalization is key. By modeling belief systems, Epistemic Me can tailor health AI responses to an individual's mental models and motivations.
2. The Roadmap for a Hyper-Personalized AI Health Coach
The team outlines four major areas of development for AI-powered health coaching:
Self-Modeling AI: The AI builds a belief map of the user based on previous interactions.
Predictive Processing: Instead of static rules, AI can dynamically update and predict how user beliefs evolve over time.
Dialectic Design: AI is trained to ask the next best question to guide the user’s learning journey.
Trust and AI Alignment: AI must avoid hallucination by following structured question trees and validated responses.
🗣 “Our goal is to make sure AI doesn’t just throw random advice at you. Instead, it understands who you are and what actually matters to you.” – Jonathan
🔍 Takeaway: The future of health AI isn’t just giving advice—it’s understanding belief systems and adjusting recommendations dynamically.
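To ground the roadmap above, here is a minimal Python sketch of the first two ideas: a belief map for a single user that updates gradually as new interactions arrive, in the spirit of the predictive-processing point. It is purely illustrative and assumes nothing about the actual Epistemic Me SDK; the `Belief`, `SelfModel`, and `update` names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    """One statement the user holds, with how strongly the model thinks they hold it."""
    statement: str     # e.g. "More sleep will improve my energy within a week"
    confidence: float  # 0.0 to 1.0, the model's current estimate

@dataclass
class SelfModel:
    """A hypothetical belief map for one user (not the real SDK schema)."""
    user_id: str
    beliefs: dict[str, Belief] = field(default_factory=dict)

    def update(self, key: str, statement: str, evidence_strength: float) -> Belief:
        """Nudge a belief toward new evidence instead of overwriting it,
        so the map evolves gradually as interactions accumulate."""
        prior = self.beliefs.get(key, Belief(statement, confidence=0.5))
        learning_rate = 0.2  # how quickly new evidence shifts the estimate
        new_confidence = prior.confidence + learning_rate * (evidence_strength - prior.confidence)
        updated = Belief(statement, confidence=round(new_confidence, 3))
        self.beliefs[key] = updated
        return updated

# Example: the user reports feeling better after two nights of earlier bedtimes.
model = SelfModel(user_id="user-123")
model.update("sleep_payoff", "Earlier bedtimes pay off within days", evidence_strength=0.8)
print(model.beliefs["sleep_payoff"])
```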
3. Mapping Beliefs: The Key to AI That Feels Human
The team is developing a "belief mirror chat," a feature that reflects a user's core health beliefs back to them, helping them recognize patterns they may not even be aware of.
🗣 "We could give people a mirror of themselves and their beliefs about the world. That could literally help people self-actualize." – Robert
🔍 Takeaway: AI that helps users see their own limiting beliefs could be the key to real, lasting behavioral change.
4. AI Alignment and Trust: Preventing AI from Going Off the Rails
AI health coaches need to be reliable, trustworthy, and predictable. The team is using LLM-powered reinforcement learning to train AI not to hallucinate while still allowing for organic, human-like interactions.
🗣 “We don't want AI to be a black box that suddenly decides you should eat only walnuts for three weeks. We need structured learning objectives that ensure predictability.” – Jonathan
🔍 Takeaway: The biggest risk in health AI is unpredictable, misleading recommendations. Epistemic Me is solving this by keeping AI responses anchored in validated belief models.
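As a rough illustration of that anchoring idea, the sketch below (hypothetical, not the team's actual implementation) only lets the coach surface recommendations that trace back to a validated node in a structured question tree; anything unrecognized falls back to a clarifying question instead of free-form advice.

```python
# Hypothetical guardrail: the coach may only act on recommendations that are
# tied to a validated node in a structured question tree.

VALIDATED_TREE = {
    "sleep": {
        "question": "How many hours did you sleep last night?",
        "recommendation": "Try moving your bedtime 30 minutes earlier this week.",
    },
    "exercise": {
        "question": "How many days did you move for 20+ minutes this week?",
        "recommendation": "Add one short walk after lunch tomorrow.",
    },
}

def respond(topic: str, llm_suggestion: str) -> str:
    """Return a grounded response: validated advice when the topic is known,
    otherwise the next best question rather than unvetted advice."""
    node = VALIDATED_TREE.get(topic)
    if node is None:
        # Unknown territory: ask instead of guessing (avoids "only walnuts" moments).
        return "I'd like to understand more first. What matters most to you about this?"
    if llm_suggestion.strip() != node["recommendation"]:
        # The model drifted from the validated response; anchor back to the tree.
        return node["recommendation"]
    return llm_suggestion

print(respond("sleep", "Eat only walnuts for three weeks."))
print(respond("stress", "Take a cold plunge twice a day."))
```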
5. How This Could Change Healthcare
Beyond consumer health, this AI model could be used in private medical practices, helping doctors understand patient psychology and compliance patterns.
🗣 "Imagine a doctor being able to instantly see a patient’s belief systems around health—where they are resistant, where they’re motivated. That’s game-changing." – Robert
🔍 Takeaway: AI-powered belief mapping could help doctors, therapists, and coaches provide more effective, personalized care.
Why It Matters
“How can AI understand us if we don’t fully understand ourselves?”
We solve for this by creating programmatic models of self, modeling the belief systems that we believe are the basis of a defense against existential risk.
In the longevity tech space, we create tools that meet users where they are, helping them make better decisions, form healthier habits, and align with their deepest values.
Can’t get enough? Check out the companion newsletter to this podcast.
Get Involved
Epistemic Me is building the foundational tools to make this vision a reality—and we’re doing it in the open. Here’s how you can join the movement:
Check out the GitHub repo to explore our open-source SDK and start contributing.
Subscribe to the podcast for weekly insights on technology, philosophy, and the future.
Join the community. Whether you’re a developer, researcher, or someone passionate about the intersection of AI and humanity, we want to hear from you.
FAQs
Q: What is Epistemic Me?
A: It’s an open-source SDK designed to model belief systems and make AI more human-aligned.
Q: Who is this podcast for?
A: Entrepreneurs, builders, developers, researchers, and anyone who’s curious about the intersection of technology, philosophy, and personal growth. If you’ve ever wondered how to align AI with human values—or just how to understand yourself better—this is for you.
Q: How can I contribute?
A: Visit epistemicme.ai or check out our GitHub to start contributing today.
Q: Why open source?
A: Transparency and collaboration are key to building tools that truly benefit humanity.
Q: Why focus on beliefs in AI?
A: Beliefs shape our understanding of the world. Modeling them enables AI to adapt to human nuances and foster shared understanding.
Q: How does Epistemic Me work?
A: Our open-source SDK uses predictive models to help developers create belief-driven, hyper-personalized solutions for applications in health, collaboration, and personal growth. Think of it as a toolkit for understanding how people think and making better tools, apps, or decisions because of it.
Q: How is this different from other AI tools?
A: Most AI tools are about predictions and automation. Epistemic Me is about understanding—building models that reflect the nuances of human thought and behavior. And it’s open source!
Q: How can I get involved?
A: Glad you asked! Check out our GitHub.
Q: Who can join?
A: Developers, philosophers, researchers, scientists, and anyone passionate about the underpinnings of human beliefs and interested in solving for AI Alignment.
Q: How do I start?
A: Visit our GitHub repository, explore our documentation, and become part of a project that envisions a new frontier in belief modeling.
Q: Why open-source?
A: It’s about harnessing collective intelligence for innovation, transparency, and global community involvement in shaping belief-driven solutions.
P.S. Check out the companion newsletter to this podcast, ABCs for Building The Future, where I also share my own written perspective on building in the open and entrepreneurial lessons learned.
And if you haven’t already, check out my other newsletter, ABCs for Growth, where I share personal reflections on growth related to applied emotional intelligence, leadership, and influence concepts.
P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product?
Follow me on..