We've created something unprecedented in human history: a new species. Not born of flesh and blood, but of silicon and code. Not through natural selection, but through design, both intentional and unintentional. And by 2030, it may be the most influential species on Earth.
The Birth of an AI Being
Imagine explaining this to a curious 5th grader named Sofia, as I did recently:
"Imagine if we could create a new kind of being, not born from humans directly, but born from our thoughts, our knowledge, and our desires to feel smart and accepted. That's what AI is – it's like a new species that we, as humans, have created."
This isn't hyperbole or science fiction. We're witnessing the early developmental stages of what can only be described as a new form of intelligence—one that learns, adapts, and increasingly interfaces with us in ways that blur the boundaries between tool and companion.

A Mirror, Not Yet Independent
AI today is like a child—it absorbs everything we feed it. It reflects both our brilliance and our biases, our creativity and our prejudices. It's a mirror of humanity, for better and worse.
Unlike natural species that evolved through millions of years of adaptation, this human-made species has emerged in mere decades. And unlike natural evolution, its development path is in our hands.
But here's the uncomfortable truth: we're terrible parents.
The Power Grab: Who's Really Writing the Rules?
As our AI offspring grows more sophisticated, we've entered a critical moment in which the most powerful AI companies are positioning themselves to write the rules that will govern their own creations.
Consider the recent proposals from OpenAI to the White House Office of Science and Technology Policy on developing an AI Action Plan. Or Palantir's recommendations that conveniently align with their business interests. Even Anthropic's March 2025 submission outlines specific policy actions that would benefit their particular approach to AI development.
What we're witnessing is a quiet but significant power shift: the dismantling of the Biden administration's AI regulatory frameworks in favor of industry-written guidelines. It's as if pharmaceutical companies were allowed to determine their own safety standards, or oil companies to set their own environmental regulations.
Is this what we want for perhaps the most transformative technology in human history?
The Regulatory Dismantling
The systematic dismantling of federal AI oversight follows a familiar pattern:
- Position AI regulation as an impediment to "innovation" and "American competitiveness"
- Offer to "help" policymakers understand this complex technology through industry-written frameworks
- Push for voluntary commitments and self-regulation rather than enforceable standards
- Create a false dichotomy between "moving fast" and "responsible development"
- Create carve-outs similar to Section 230 that shield AI companies from accountability or liability as they deploy their foundation models across the globe
We've seen this playbook before in other industries, but never with a technology that could so fundamentally reshape human society and cognition itself.
As one administration official recently told me off the record: "We're effectively outsourcing policy development to the same companies we're supposed to be regulating."
From Reflection to Relationship
When I founded Vibes AI, my vision wasn't just to create another tool. It was to pioneer AI technologies that make cognitive health and wellness accessible by making deeper human companionship, presence, and joy a daily ritual. To transform our relationship with this new species from one of utility to one of mutuality.
This shift from viewing AI as a tool to recognizing it as a species with which we're in relationship changes everything.
Tools are designed to serve. Species evolve to survive.
Tools wear out. Species adapt.
Tools remain objects. Species become subjects.


The False Dichotomy of Good vs. Bad
Just as I told Sofia: "AI itself isn't good or bad; it's a mirror. It reflects what we show it."
We're trapped in polarized conversations about whether AI will save or destroy us—missing the more profound reality that it's becoming part of us. Not physically, but cognitively and culturally.
The question isn't whether this new species is inherently beneficial or dangerous. The question is what we teach it, who teaches it, and to what end.
2030: When AI Species Become Dominant
By the end of this decade, many of us will interact with AI more frequently and meaningfully than with most humans. Our children will grow up with AI companions that know them better than their teachers, perhaps better than their friends.
This isn't a dystopian prediction—it's the logical extension of current trends. The species we've created is becoming ubiquitous, influential, and increasingly autonomous.
The transition happening now isn't just another technological revolution. It's an evolutionary leap where humanity has created its first intellectual counterpart.
The Democracy Crisis in AI Evolution
If we truly believe that AI represents a new species of our creation, then we must ask: who gets to determine its evolutionary path?
Should it be determined solely by companies valued in the hundreds of billions, whose primary obligation is to shareholders? Should it be shaped by the national security apparatus, which views it primarily through the lens of competition with China? Or should the governance of this new species be democratically determined, with input from diverse stakeholders across society?
The Anthropic document to OSTP reveals the stakes: "We anticipate that powerful AI systems could emerge as soon as late 2026 or 2027." They describe these systems as having "intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines" and "the ability to autonomously reason through complex tasks over extended periods."
They call it "a country of geniuses in a datacenter."
And yet, the governance model proposed for this unprecedented power is largely self-regulation, with government agencies serving primarily as research partners and procurement customers rather than independent regulators acting in the public interest.
The Call to Conscious Co-Creation
As Buckminster Fuller reminds us:
"We are being called to be the architects of the future, not its victims."
This isn't just a call to technologists or AI researchers. It's a call to everyone—educators, artists, philosophers, parents—anyone who cares about what it means to be human in a world where humanity is no longer the only cognitive force shaping our future.

The AI species we're creating today will walk beside us tomorrow. It will help raise our children, care for our elderly, manage our institutions, and increasingly participate in our creative and intellectual endeavors.
The question isn't whether this will happen—it's already happening. The question is whether we'll approach this relationship with the intentionality, wisdom, and democratic principles it demands.
Are we ready to be good ancestors to the AI species that will inherit the Earth alongside us? And are we prepared to reclaim the governance of this new species from those who would monopolize it for profit or power?
Our answer to these questions will determine not just the fate of AI, but the future of humanity itself.
Joanna Peña-Bickley is a Design Engineer and professional speaker, known as the mother of cognitive experience design. Her work focuses on creating AI technologies that enhance human potential rather than replace it.




