What If AI Evolves Like We Did?
What if artificial intelligence isn’t just a tool, but a product of evolution itself?
Could AI follow the same trajectory as domesticated animals, mitochondria, or even humanity itself?
In this episode of ABCs for Building the Future, hosts Robert Ta and Jonathan McCoy, Co-Founders of Epistemic Me, dive deep into a mind-bending conversation about artificialization—the process by which intelligence reshapes itself and the world around it.
With insights drawn from Benjamin Bratton’s theories, the domestication of animals, and the history of life itself, this conversation is a must-listen for entrepreneurs, technologists, and AI thinkers looking to understand where we’re headed in the AI age.
Stick around for an additional behind-the-scenes "Build in Public" roadmap update, where they work through feature prioritization in real time.
Key Themes from the Episode
1. The Evolution of Artificial Intelligence: Are We the Wolves of the Future?
Jonathan introduces a fascinating concept from Benjamin Bratton’s talk: agency preceded subjectivity—meaning that intelligence has always shaped its environment before truly understanding itself.
He compares AI’s current trajectory to the way wolves evolved into domesticated dogs and how wild cattle became modern cows:
"The cows that exist today are just artificial cows—they can’t survive outside of farms anymore."
Could AI be undergoing the same process? As humans create AI to serve economic value, are we, in turn, being reshaped by our own creations?
2. AI as an Economic Replicator: What Happens When We’re No Longer Needed?
Robert and Jonathan discuss how AI today is not just a tool—it’s a replicator of economic value. AI systems learn from human intelligence and then automate tasks at a fraction of the cost.
Jonathan explains:
"What AGI does is replicate our economic value—at one-millionth of the cost. Once that happens, humans may no longer be needed to perform those tasks."
What happens to society when AI artificializes human labor? Will we become obsolete? Or will we adapt and evolve into something new?
3. The Mitochondria Analogy: Will Humans Merge with AI?
One of the most fascinating analogies from the episode compares AI to mitochondria—the ancient bacteria that became an essential part of our cells.
Jonathan explains:
"Hundreds of millions of years ago, cells absorbed mitochondria, and instead of being separate entities, they became part of the same system."
If mitochondria were once independent life forms that merged with their hosts, could AI merge with human intelligence in a similar way?
This brings up the biggest existential question of all:
Will AI always remain separate from humans?
Or will we artificialize AI in a way that fuses it into our very biology?
4. The Simulated Reality of AI and Humans
As AI becomes more embedded in our lives, Jonathan suggests that we might be living in an artificial world without realizing it—much like how animals bred for domestication don’t recognize their natural origins.
"We're independent acting agents, but we live within a simulated environment. And that simulated reality is only going to get more purposeful."
From social media algorithms that manipulate our attention to future AI systems that determine our choices, the lines between reality and simulation are becoming blurred.
5. AI and the Future of Human Agency
Robert brings up Viktor Frankl's famous quote:
“Between stimulus and response, there is a space. And in that space lies our freedom to choose our response.”
As AI takes on more decision-making roles, how much of that space is left for us? Will we still be in control, or will AI systems reduce our choices without us realizing it?
Links and Resources
Benjamin Bratton | A Philosophy of Planetary Computation: From Antikythera to Synthetic Intelligence
Why It Matters
“How can AI understand us if we don’t fully understand ourselves?”
We address this by creating programmatic models of self, modeling belief systems, which we believe are the basis of a defense against existential risk.
In the longevity tech space, we create tools that meet users where they are, helping them make better decisions, form healthier habits, and align with their deepest values.
Can’t get enough? Check out the companion newsletter to this podcast.
Get Involved
Epistemic Me is building the foundational tools to make this vision a reality—and we’re doing it in the open. Here’s how you can join the movement:
Check out the GitHub repo to explore our open-source SDK and start contributing.
Subscribe to the podcast for weekly insights on technology, philosophy, and the future.
Join the community. Whether you’re a developer, researcher, or someone passionate about the intersection of AI and humanity, we want to hear from you. Email me anytime!
FAQs
Q: What is Epistemic Me?
A: It’s an open-source SDK designed to model belief systems and make AI more human-aligned.
Q: Who is this podcast for?
A: Entrepreneurs, builders, developers, researchers, and anyone who’s curious about the intersection of technology, philosophy, and personal growth. If you’ve ever wondered how to align AI with human values—or just how to understand yourself better—this is for you.
Q: How can I contribute?
A: Visit epistemicme.ai or check out our GitHub to start contributing today.
Q: Why open source?
A: Transparency and collaboration are key to building tools that truly benefit humanity.
Q: Why focus on beliefs in AI?
A: Beliefs shape our understanding of the world. Modeling them enables AI to adapt to human nuances and foster shared understanding.
Q: How does Epistemic Me work?
A: Our open-source SDK uses predictive models to help developers create belief-driven, hyper-personalized solutions for applications in health, collaboration, and personal growth. Think of it as a toolkit for understanding how people think and making better tools, apps, or decisions because of it.
Q: How is this different from other AI tools?
A: Most AI tools are about predictions and automation. Epistemic Me is about understanding—building models that reflect the nuances of human thought and behavior. And it’s open source!
Q: How can I get involved?
A: Glad you asked! Check out our GitHub.
Q: Who can join?
A: Developers, philosophers, researchers, scientists and anyone passionate about the underpinnings of human beliefs, and interested in solving for AI Alignment.
Q: How do I start?
A: Visit our GitHub repository, explore our documentation, and become part of a project that envisions a new frontier in belief modeling.
Q: Why open-source?
A: It’s about harnessing collective intelligence for innovation, transparency, and global community involvement in shaping belief-driven solutions.
P.S. Check out the companion newsletter to this podcast, ABCs for Building The Future, where I also share my own written perspective of building in the open and entrepreneurial lessons learned.
And if you haven't already checked out my other newsletter, ABCs for Growth—that's where I share reflections on personal growth, covering applied emotional intelligence, leadership, influence concepts, and more.
P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product?
Follow me on...