AI Context Engineering = AI Personalization Engineering

Context Engineering is the new hotness, but real context requires belief modeling...

Can an AI truly understand you if it doesn’t understand your beliefs?

That’s the provocative question guiding Episode 23 of ABCs for Building the Future. In a sweeping conversation that blends cognitive science, product design, and philosophical rigor, hosts Robert and Jonathan introduce a bold new framing for personalizing AI: Epistemic Evals.

This post distills the episode’s most compelling insights for founders, developers, and health-tech innovators who want to move beyond token-level tuning and toward true understanding—at scale.


🎙️ Context: Building AI That Understands People—Not Just Prompts

This week’s episode is a build-in-public deep dive into Robert and Jonathan’s latest developments on their open-source SDK, Epistemic Me. Their mission? Make belief systems first-class citizens in AI alignment and personalization.

Core themes include:

  • The emerging category of “epistemic evals”

  • The three layers of memory in personalized AI agents

  • How beliefs determine alignment and behavior change

  • A live product demo of their self-modeling evaluation system

  • Mapping philosophical complexity to practical AI architecture


🧭 1. Why AI Needs to Start With Belief Systems

“We don’t think you can solve hyper-personalization or AI alignment without modeling a user’s belief system.” — Robert

The team starts with a powerful claim: personalization begins at the belief level. If an AI doesn’t understand what you believe—about health, money, relationships, or yourself—it cannot make meaningful recommendations. And without alignment on beliefs, there can be no trustworthy AI.

Robert and Jonathan argue that the most effective AI agents will be belief-adaptive, not just data-reactive. This goes beyond tone and formatting preferences; it’s about modeling how a user sees the world and shaping responses accordingly.

🔍 Application: AI agents in health, education, or coaching domains should model user belief systems over time—not just answer questions on demand.
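
To make “belief-adaptive” concrete, here’s a minimal sketch, assuming a toy belief structure (the names are illustrative, not the Epistemic Me API): the same advice gets framed differently depending on what the user already believes.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """One user belief the agent can adapt to (hypothetical structure)."""
    statement: str      # e.g. "fasting boosts mental clarity"
    confidence: float   # user's confidence, 0.0 to 1.0

def frame_recommendation(advice: str, beliefs: list[Belief]) -> str:
    """Shape the same advice differently depending on what the user believes."""
    trusts_fasting = any(
        "fasting" in b.statement and b.confidence > 0.7 for b in beliefs
    )
    if trusts_fasting:
        # Lean on the user's existing causal model.
        return f"Since fasting already works for you, try this: {advice}"
    # Introduce the idea without presuming agreement.
    return f"Some people find this helpful: {advice}"

user_beliefs = [Belief("fasting boosts mental clarity", confidence=0.9)]
print(frame_recommendation("shift your eating window earlier", user_beliefs))
```

A data-reactive agent would return the same string to everyone; a belief-adaptive one routes through the user’s worldview first.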


🧠 2. Introducing Epistemic Evals: The Top Layer of Alignment

“Epistemic evals happen at the belief and self-model level. They’re how we measure if the AI actually understands the user.” — Jonathan

Traditional application-level evaluations (e.g., “Did the agent return the right recipe?”) aren’t enough for deeply personal domains. Enter epistemic evals—a new category for evaluating how well an AI models a user’s worldview, belief system, and internal logic.

Inspired by user-centric evaluation papers and grounded in neuroscience (e.g., Friston’s free energy principle), epistemic evals look at (one such check is sketched in code after the list):

  • The agent’s representation of a user’s belief states

  • How beliefs affect perceived recommendation efficacy

  • Whether the agent's suggestions align with the user's internal model of causality
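
As an illustration only (the episode names the category, not a formula), one such check might score how closely the agent’s inferred belief states match the user’s self-reported ones:

```python
def epistemic_eval(inferred: dict[str, float], reported: dict[str, float]) -> float:
    """Score how well the agent's belief model matches the user's own report.

    Both dicts map belief statements to confidence in [0, 1]. Returns a score
    in [0, 1], where 1.0 means the models agree exactly. (Hypothetical metric,
    shown only to make the category concrete.)
    """
    all_beliefs = set(inferred) | set(reported)
    if not all_beliefs:
        return 1.0  # nothing to model, trivially aligned
    # Mean absolute agreement; beliefs missing on either side count as 0.0.
    error = sum(
        abs(inferred.get(b, 0.0) - reported.get(b, 0.0)) for b in all_beliefs
    )
    return 1.0 - error / len(all_beliefs)

agent_model = {"fasting boosts clarity": 0.8, "supplements are essential": 0.6}
user_report = {"fasting boosts clarity": 0.9}
print(f"epistemic alignment: {epistemic_eval(agent_model, user_report):.2f}")
```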

📊 Application: Use epistemic evals to unlock truly personalized agents for longevity, mental health, or financial coaching.


🧱 3. Building With Memory: Working, Episodic, Semantic

“You can’t personalize without memory—and not just one kind of memory.” — Jonathan

In a standout section, Jonathan lays out their AI architecture, adapted from human cognition:

  • Working Memory: Recent chat turns, current session inputs.

  • Episodic Memory: Personal user events and past experience (e.g., “last time I felt like this”).

  • Semantic Memory: General knowledge + structured belief systems (e.g., “I believe fasting boosts clarity”).

By treating belief systems as dynamic, timestamped objects within this memory stack, they’re able to surface beliefs that are relevant to the current user query. The result: responses that feel less canned, more attuned.
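
Here’s a minimal sketch of that stack, assuming a simple timestamped-belief object (illustrative names, not the actual SDK):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TimestampedBelief:
    """A belief stored as a dynamic, timestamped object (illustrative)."""
    statement: str
    confidence: float
    updated_at: datetime = field(default_factory=datetime.now)

@dataclass
class MemoryStack:
    """Three memory layers adapted from human cognition."""
    working: list[str] = field(default_factory=list)    # recent chat turns
    episodic: list[str] = field(default_factory=list)   # past user events
    semantic: list[TimestampedBelief] = field(default_factory=list)  # beliefs

    def relevant_beliefs(self, query: str) -> list[TimestampedBelief]:
        """Naive keyword match; a real system would use embeddings."""
        terms = set(query.lower().split())
        return [b for b in self.semantic
                if terms & set(b.statement.lower().split())]

memory = MemoryStack()
memory.semantic.append(TimestampedBelief("fasting boosts clarity", 0.9))
print(memory.relevant_beliefs("does fasting help with focus?"))
```

The timestamps matter because beliefs drift: when the same topic resurfaces, the agent can weight a recent update over a stale one.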

🧠 Application: Personalization doesn’t scale unless you have a structured memory framework. Start there.


🎯 4. From Evaluation to Recommendation: A New Loop for Personalization

“It all comes down to recommendations. And those have to match the user’s causal worldview.” — Jonathan

Using a live demo of their self-management agent, the team shows how epistemic evals map directly to behavior change. They’ve built a layered feedback loop:

  • Input (user query)

  • Context (beliefs + past states)

  • Output (response + recommendation)

  • Evaluation (did it fit the user’s belief model?)

This evaluation loop isn’t just for accuracy—it’s for empathy. By understanding what users value and how they think change happens, agents can recommend plans users are more likely to follow.
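
Wired together, one pass through the loop might look like this toy sketch (the retrieval and scoring heuristics are stand-ins, not the demoed system):

```python
def personalization_loop(query: str, beliefs: list[str], episodes: list[str]) -> dict:
    """One pass through input -> context -> output -> evaluation (toy sketch)."""
    # Input: the user's query.
    # Context: beliefs and past states relevant to the query (keyword match).
    terms = set(query.lower().split())
    relevant = [b for b in beliefs if terms & set(b.lower().split())]
    context = {"beliefs": relevant, "episodes": episodes}

    # Output: a response plus a recommendation (a model call in a real agent).
    recommendation = (
        f"Because you believe '{relevant[0]}', try an earlier eating window."
        if relevant else "Try an earlier eating window."
    )

    # Evaluation: did the output fit the user's belief model? (toy heuristic:
    # belief-congruent if at least one relevant belief shaped the plan)
    belief_fit = 1.0 if relevant else 0.0
    return {"recommendation": recommendation, "belief_fit": belief_fit}

result = personalization_loop(
    "does fasting help focus?",
    beliefs=["fasting boosts clarity"],
    episodes=["felt foggy after late dinners last week"],
)
print(result)
```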

⚙️ Application: Whether you're building a coaching bot or customer success agent, measure success by belief-congruent recommendations, not just click-through rates.



Why Epistemic Me Matters

“How can AI understand us if we don’t fully understand ourselves?”

We solve for this by creating programmatic models of self, modeling the belief systems that we believe are the basis of our defense against existential risk.

In the longevity tech space, we create tools that meet users where they are, helping them make better decisions, form healthier habits, and align with their deepest values.



Get Involved

Epistemic Me is building the foundational tools to make this vision a reality—and we’re doing it in the open. Here’s how you can join the movement:

  • Check out the GitHub repo to explore our open-source SDK and start contributing.

  • Subscribe to the podcast for weekly insights on technology, philosophy, and the future.

  • Join the community. Whether you’re a developer, researcher, or someone passionate about the intersection of AI and humanity, we want to hear from you. Email me anytime!


FAQs

Q: Why does this matter for AI?

A: Because without shared values, we can’t align AI. Belief systems that scale and unify are essential to building tools that serve humanity, not destroy it.

Q: What is Epistemic Me?

A: It’s an open-source SDK designed to model belief systems and make AI more human-aligned.

Q: Who is this podcast for?

A: Entrepreneurs, builders, developers, researchers, and anyone who’s curious about the intersection of technology, philosophy, and personal growth. If you’ve ever wondered how to align AI with human values—or just how to understand yourself better—this is for you.

Q: How can I contribute?

A: Visit epistemicme.ai or check out our GitHub to start contributing today.

Q: Why open source?

A: Transparency and collaboration are key to building tools that truly benefit humanity; open development harnesses collective intelligence and invites a global community to shape belief-driven solutions.

Q: Why focus on beliefs in AI?

A: Beliefs shape our understanding of the world. Modeling them enables AI to adapt to human nuances and foster shared understanding.

Q: How does Epistemic Me work?

A: Our open-source SDK uses predictive models to help developers create belief-driven, hyper-personalized solutions for applications in health, collaboration, and personal growth. Think of it as a toolkit for understanding how people think and making better tools, apps, or decisions because of it.

Q: How is this different from other AI tools?

A: Most AI tools are about predictions and automation. Epistemic Me is about understanding—building models that reflect the nuances of human thought and behavior. And it’s open source!

Q: Who can join?

A: Developers, philosophers, researchers, scientists, and anyone passionate about the underpinnings of human belief and interested in solving for AI alignment.

Q: How do I get started?

A: Visit our GitHub repository, explore our documentation, and become part of a project that envisions a new frontier in belief modeling.


P.S. If you haven’t already, check out my other newsletter, ABCs for Growth, where I share personal reflections on growth topics like applied emotional intelligence, leadership, and influence.

P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product?

Follow me on:

YouTube

Threads

Twitter

LinkedIn
