
Project Vision 21

Transforming lives, renewing minds, cocreating the future


17 Years of Archives


DISCLAIMER

The commentaries we share here are merely our thoughts and reflections at the time of their writing. They are never our final word on any topic, nor do they necessarily guide our professional work.

 

Why do people say, “That’s impossible,” when they should say, “I don’t know how to do it”?

A few years ago, we decided to replace a couple of doors inside the house with sliding doors. We consulted someone who had helped us with other remodeling projects, and their response was, “That can’t be done.” As we later discovered, what this person should have said was, “I don’t know how to do it.”

There are many similar examples of situations where, whether consciously or not, we project our own limitations and ignorance, mistakenly assuming that if we don’t know how to do something or can’t do it, then no one else can either. This was exactly the case with the contractor I just mentioned—when, in fact, there was clear evidence that it could be done.

On a humorous note, these situations remind me of what often happens in cartoons, when a character only starts to fall into the void after realizing that the law of gravity exists—or after suddenly becoming aware of being in mid-air—as if ignoring the laws of nature could somehow suspend them.

But projecting our ignorance onto others and imposing that reality on them has serious consequences in real life. Unlike in cartoons, where no matter how far the character falls or what they crash into, they bounce right back up, in the real world the outcomes aren’t so forgiving. In other words, believing something is impossible is often enough to make it impossible.

For example, before 1954, numerous athletics experts believed the human body was simply not capable—nor would it ever be—of running a mile (1,609 meters) in under four minutes. It was even considered a “natural barrier” that no athlete would ever overcome. That is, until British runner Roger Bannister broke it on May 6, 1954.

That day, Bannister (who later went on to have a successful career as a neurologist) completed the mile in 3 minutes and 59.4 seconds. But what’s even more remarkable is that his record only stood for 46 days, until Australian John Landy lowered it to 3:57.9. Today, the record belongs to Moroccan runner Hicham El Guerrouj, who ran 3:43.13 on July 7, 1999, in Rome.

But how and why was Bannister able to surpass what seemed insurmountable? Because he didn’t buy into the belief that the so-called barrier was truly unbreakable. Bannister is often credited with the saying, “The man who can drive himself further once the effort gets painful is the man who will win.” In other words, by not internalizing the narrative of the unbreakable limit, he was able to transcend that limit.

Bannister’s attitude made me think of an idea from the ancient Stoics—the idea that obstacles are merely illusions, or if you prefer, forms of self-deception imposed on us by others, or self-imposed when we accept limiting narratives that we cling to as immovable, unquestioned descriptions of reality.

This quote is attributed to Marcus Aurelius:

“The impediment to action advances action. What stands in the way becomes the way.”

When Reason Sleeps, the Monsters Awaken

Goya: “El sueño de la razón produce monstruos” (Public domain)

There was a time—not so long ago—when every now and then a story would surface that was so unexpected, so distinct from the rest, that it invited the reader to pause, ponder, and perhaps even share it. Today, in an age when every piece of news is designed only to trigger a flicker of attention and a quick reaction, such moments of genuine discovery have become painfully rare.

Yet, they still exist.

Take, for instance, the recent announcement that from August 15 to 17, 2025, Beijing will host the first-ever World Humanoid Robot Sports Games—essentially, an Olympics for robots. This event is worth more than just a passing glance. It deserves deep reflection, not only because it’s something new, but because it signals a profound shift: once again, a space once reserved for human beings is no longer exclusively ours.

Yes, it’s clear that this “Olympics” is, at heart, a promotional showcase for cutting-edge technology—an effort to push new products into the market. But still, humanoid robots created by the world’s most powerful corporations are expected to compete in events modeled after traditional human sports: races, gymnastics, even soccer.

Which raises the question: How long will it be before human Olympic Games include humanoid participants? And how long after that before human athletes are replaced—or pushed aside—by their robotic counterparts, just as machines are already doing in fields ranging from repetitive labor to the most creative endeavors?

What we’re witnessing is a real-world situation that, not long ago, could only be imagined in science fiction. But today, the boundary between fiction and reality has become so thin, so entangled, that it’s increasingly difficult to tell where one ends and the other begins. And when that line blurs, when reality disguises itself as fantasy and fantasy takes root in reality, reason—the human capacity to think clearly—begins to fall asleep. And when reason sleeps, we begin to dream monsters.

This is not a new insight.

As far back as 1799, Spanish painter Francisco de Goya captured it in plate 43 of his haunting series Los Caprichos. The image, titled The Sleep of Reason Produces Monsters, shows a man slumped over his desk, head resting on folded arms, as he is surrounded by eerie, nightmarish creatures—perhaps figments of a dream, or perhaps something darker, more monstrous.

But what kind of “sleep” was Goya referring to? Some suggest he was depicting literal dreams or daydreams. Others—perhaps more perceptively—believe Goya was warning about what happens when reason itself, our ability to understand the world and act responsibly within it, falls dormant.

Today, our “sleeping reason”—our failure to reach our true potential despite having access to astonishing technologies—is being magnified by artificial intelligence. As cognitive scientist John Vervaeke warns, AI may be eroding our autonomy not by overpowering our reason, but by dulling it—by encouraging habits of irrationality that distance us from the very essence of what it means to be human.

The Abyss Between What We Dream to Be and What We Show to Be

In these times when screens replace reality and social media profiles stand in for our identity—when every action is posted, and every experience is monetized—we slowly lose our connection to who we are and who we long to become.


By anchoring our identity in the number of “likes” we receive, we reduce our being to whatever fits the new Procrustean bed—now digitalized—shaping ourselves according to whatever the moment demands. In doing so, we push aside the deep longing to become what we once hoped to be in order to bring meaning and direction to our lives.
 

Long forgotten is that ancient call from someone named Saul of Tarsus, urging us not to conform to the dominant molds of any era, but to be transformed through the constant renewal of our awareness.
 

Because, ultimately, that self that flows with life—the one born of deep questions without answers, rooted in authentic values and aspirations—disappears when replaced by another kind of identity: an idealized version of the self that exists solely to please others and to grab attention.
 

This is no longer about a legitimate desire to grow or to share the good in our lives with others, hoping they too will flourish. Instead, we live focused entirely on projecting an image that will be accepted—regardless of whether that image has anything to do with our reality.
 

We’re not suggesting a return to the past, much less that the past was somehow better. That would be self-deception. What we’re suggesting is becoming aware that instead of growing inward, we’ve scattered ourselves outward, posting images that leave our inner lives increasingly hollow.
 

By constantly repeating gestures, phrases, or styles just because they perform well online, we end up behaving as if life itself were a never-ending self-promotion campaign. The pursuit of approval becomes routine, and with it, the aspirational self—that part of us that invites change, even if uncomfortable—is pushed into a corner.
 

Even more troubling, over time, we may forget what we once dreamed or aspired to become, to the point of confusing the edited (and published) image with the real person. As we begin to live according to external expectations—wearing the masks of others, as Parker Palmer would say—we become yet another simulation in a society flooded with simulations, as Baudrillard warned.
 

What once seemed like success becomes a burden, and what looked like connection turns into isolation.
 

But not all is lost. Returning to the aspirational self doesn’t require turning off your phone or deleting your accounts. It simply requires pausing for a moment and asking yourself: Am I choosing what I show, or just copying it? Does this version of myself help me grow?
 

That’s why reclaiming the aspirational self means allowing ourselves to be unfinished and imperfect—but authentic. In a world saturated with simulators, rediscovering who we truly want to be is an act of courage—and the first step toward a life with meaning.

 

When Silence Speaks, A New Life Begins

We live in a time when chaos and noise seem to cover every part of our lives — bad news, shallow opinions, constant crises, and rapid, disorienting change. In the middle of this whirlwind, it’s only natural to feel fear, confusion, frustration, or even resistance to anything that hints at "change."
 

In moments like these, words often fall short — or, as British philosopher Tim Freke puts it, they become “irrelevant.” Sometimes speaking too much doesn’t open doors; it closes them.
 

And I’m not talking about doors to business deals or new opportunities — I’m talking about the deeper portals that lead to our future.
 

That’s why today, just as every time I write or speak, I’m not offering you theories or solutions.
I don’t have them — and to be honest, I never have. All I can offer is a simple invitation: to create a space — whether within yourself or shared with others — where silence is allowed to speak, and words are allowed to fall away.

 

When the noise inside and around us begins to quiet down, when we stop clinging to our “certainties” and self-imposed limiting narratives, something new begins to emerge. It’s not something we can force or manufacture.
It rises naturally, like a hidden spring, from the open mind and the open heart.

 

In openness, we allow new life to begin.
 

But stepping into that openness isn’t easy. Between what we know and what is just starting to show itself, there’s an uncertain space — a space of ambiguity. It’s not the firm ground of the familiar, nor the blind leap into the unknown. It’s a threshold — a place where the old and the new brush against each other, sometimes clashing, sometimes embracing.
 

In that ambiguity, we allow the old and the new to meet.
 

And in that delicate, luminous meeting, we need more than intellectual understanding.
We need faith — not in the sense of adopting a dogma or joining a group, but the deep kind of faith that connects us to life itself. A trust that something greater is already at work.

 

In faith, we allow the new life to become a living truth.
 

The greatest transformations often begin in the smallest of ways — with a silence that dares to listen,
with an openness that dares to trust, with a heart that, even trembling, dares to believe that something beautiful is already on its way.

 

We need “islands of coherence” — as scientists like Ilya Prigogine and thinkers like Otto Scharmer describe them — small spaces of hope in the middle of the chaos. Places where we don’t waste energy fighting the old or denying the pain of the present but instead tend to the seeds of the new — seeds that are already quietly breaking through the soil.
 

We don’t have to understand it all to take the first step. All we need is to open ourselves to the possibility of a fuller, brighter, more authentic life that is trying to emerge through us, here and now, in the silence between words.

 

Interwoven News Stories Reveal New Dimensions of Our Consciousness

In the frenzied, fast-paced rhythm of today’s news cycle—what Walter Ong once described as “pumping data at high speed through information pipelines”—stories overlap and pile up without offering direction or purpose, and often without any meaningful context beyond novelty or entertainment. But there are exceptions.
 

Recently, for instance, a report emerged based on an article in The Astrophysical Journal Letters, revealing that the planet K2-18b—located 124 light-years from Earth—might be a habitable, water-covered world. According to researchers from the Institute of Astronomy at the University of Cambridge, it could host liquid water across its surface.
 

More specifically, the scientists detected “the most promising signs yet of a possible biosignature” on that exoplanet. In plain terms, life—likely microbial—might exist or might once have existed on K2-18b.
 

Almost simultaneously, another headline reported that experts from Google’s DeepMind division declared that artificial intelligence has now grown “beyond human knowledge.” In their presentation, Welcome to the Era of Experience, researchers David Silver and Richard Sutton argued that AI will develop “incredible new capabilities” once it begins learning through experiences and interactions.
 

Meanwhile, yet another report detailed how two scientists from the University of California, San Diego, identified the “rules” the brain uses to form memories. The most significant finding? The brain adapts these rules—changing how neurons communicate—according to what is being learned.
 

According to researchers William Wright and Takaki Komiyama, the brain’s billions of neurons simultaneously apply several different sets of learning rules. This allows the brain to encode new information “with greater precision.” This, they say, is how memory is formed.
 

Taken together, these three stories (and others like them) make it clear that humanity is now measuring times and distances—both natural and artificial—that are wildly disproportionate to our capacity for understanding. They render our human existence small, fleeting, and nearly irrelevant.
 

These ideas resonate with the work of contemporary philosopher Benjamin Cain, who explores the notion of deep time—a scale of time so vast it exceeds human comprehension yet constantly surrounds us like an impersonal abyss.
 

Similarly, philosopher Timothy Morton discusses the existence of hyperobjects—entities so massive in their temporal and spatial dimensions that they escape the scale of human cognition. They cannot be fully visualized, located, or sensed through ordinary means or even our most advanced technologies.
 

And in a 2015 paper, Greek researchers Helen Lazaratou and Dimitris Anagnostopoulos introduced the idea of transgenerational objects—psychological constructs unconsciously passed from one generation to the next, shaping the thoughts, behaviors, and emotions of multiple generations.
 

If Deep Time reveals the sacred vastness of our universe, Hyperobjects reveal the unseen mesh we’re embedded in, and Transgenerational Objects reveal the hidden stories we carry, then we are, indeed, on the edge of consciously seeing deeper and wider.
 

So, we are left with a profound question: Will we learn to live—and co-live—within this new spacetime entanglement and psychohistorical depth? Or will we stubbornly cling to a separate, autonomous “self”?

 

The Lack of Good Questions Disconnects Us from the New Future

In a recent interview, Spanish philosopher Juan Carlos Ruiz stated, “Nobody teaches us how to ask questions.” He then expanded on this idea, explaining that we lack a “pedagogy of the question” and, as a consequence, we also lack an ethics of dialogue—a key element for connecting with the emerging future.

As the eminent Brazilian educator and philosopher Paulo Freire observed in the last century, our educational systems have placed so much emphasis on answers that they’ve neglected the (perhaps even greater) importance of questions. While answers may demonstrate a degree of knowledge, questions generate new knowledge.
 

In our current era, as Ruiz points out, the situation has become even more serious. After so many decades of prioritizing answers, the rise of artificial intelligence has blurred the lines between “getting answers” and “gaining knowledge.” But this process often skips the personal transformation that comes from engaging with new knowledge.
 

This ease and speed of access to answers, Ruiz suggests, limits (and I would add, hinders) the expansion of our language. It leads to what he calls a “lexical poverty,” which in turn “often degenerates into cognitive poverty.”
This brings us to a timely quote from Wittgenstein: “The limits of my language mean the limits of my world.” (Tractatus, 5.6). For Wittgenstein, language is a mediator between us and reality (the world), whether in the context of formal logic (Tractatus, 1921) or within shared social practices (Philosophical Investigations, 1953).

 

When we stop asking questions, when we only seek answers, when vocabulary and understanding diminish, and when propositional knowledge (as John Vervaeke puts it) or what Ruiz calls the “declarative dynamic” is overemphasized, our world becomes narrower. Other ways of knowing—through processes, perspectives, and participation—are abandoned.
 

Vervaeke describes this condition as the “tyranny of propositions”—a mindset in which truth is reduced to the correct articulation of data (“Rome is the capital of Italy”), without questioning our ability to understand that data, its context and relevance, or our relationship to the community from which that data arises.
 

In short, we become disconnected from reality because we turn into spectators of our own lives, lacking the ability to rebalance our systems of knowledge. Without that rebalancing, we remain stuck in fragmentation—a state Vervaeke famously describes as “the meaning crisis.”
 

Freire advocated for an education rooted in curiosity, critical thinking, and above all, dialogue. None of this is new—Socrates was practicing it 2,400 years ago. But this isn’t about returning to the past or recreating it in the present; it’s about moving away from the shortcuts and superficialities that dominate today’s culture (think short social media videos).
 

If the future depends on our capacity to ask questions—and if no one is teaching us how to do that—then perhaps we need to return to the enduring questions of the past that are still relevant today. Starting, perhaps, with one of the most existentially iconic and paralyzing questions: “To be, or not to be: that is the question.” Let’s try it. 

 

We Anthropomorphize AI and Robotize Humans

I recently read an article that analyzes two trends: the number of older adults worldwide is increasing, and simultaneously, more and more people of all ages are feeling lonely. The confluence of these two trends means that social isolation among older adults is inevitable, according to a recent study published by the American Sociological Association.
 

Globally, according to the World Health Organization, one in four older adults (25%) lacks meaningful social relationships, and four in ten (40%) have no consistent companionship in their lives.
 

Furthermore, according to Gallup, 20% to 33% of people globally experience loneliness, with those under 24 being the most affected. In the United States, 52% of adults report “feeling lonely regularly,” according to the American Psychiatric Association.
 

But these two trends, already worrying in themselves, seem to converge with a third growing trend: the anthropomorphization of interactions with artificial intelligence—that is, attributing human qualities to the responses generated by AI and, therefore, reacting emotionally as if a human had responded.
 

According to Carmen Sánchez, a Spanish philosopher and educator, the anthropomorphization of AI is a “major philosophical problem” in our time. It consists of believing that “because (AI) returns correct and appropriate linguistic constructions, we are actually participating in a meaningful dialogue.”
 

In other words, we are so detached and isolated from ourselves that we no longer even recognize ourselves when we look in the mirror of our own creations. In the context of our loneliness and the overwhelming need to satisfy our desire to speak to someone, we even believe we are speaking to someone when in reality we are not. 
 

In a recent publication, Sánchez provides solid philosophical foundations (John Austin's philosophy of language, John Searle's philosophy of mind) to refute "the idea that computational systems possess a true mind or intentionality in linguistic communication." Therefore, "The attribution of understanding is also erroneous."
 

Our loneliness and isolation have reached such a level that, as Sánchez explains, we confuse "the generation of coherent text" with speaking to another person. More specifically, we confuse "the appearance of a phenomenon with its underlying reality" by attributing conscious and intentional acts to AI. And this confusion has consequences.
 

We so desire someone to listen to us that we not only accept the simulation as reality (Plato, Jean Baudrillard), but we also become emotionally and cognitively attached to that simulation, enjoying it when the AI “says” things like “That’s a very good question” or “That’s a very beautiful way of expressing yourself.”
 

In other words, loneliness and isolation create a suitable context for deceiving ourselves into thinking we’re not alone. When we uncritically accept the AI simulation as part of (or the totality of) our reality, we are close to uncritically accepting any other simulation just because it “tells” us how intelligent and deserving we are.
 

That's why I keep writing, because I still want to express my own thoughts, feelings, emotions, dreams, frustrations, successes, and failures, not those of some unknown algorithm.

 

Forgotten Humanity? AI, Ethics, and the Crisis of Decision

We live in a world of such rapid technological advancement that the landscape of reality around us changes long before we’re able to understand that change or adapt to the new reality. In this context, the challenge arises of analyzing whether new technologies are compatible with our human faculties for making decisions on our own.

There’s no doubt that artificial intelligence systems are becoming increasingly sophisticated and, as leading experts like Shelly Palmer and John Vervaeke have warned, synthetic humans and general AI seem to be just around the corner. As Hong Kong philosopher Yuk Hui aptly states, it sounds like science fiction, but it’s not.

Because of these advancements, AI and its offshoots now appear capable of making decisions that once belonged exclusively to human judgment, whether in medicine, education, justice, or many other fields. This shift raises a fundamental question: Who is really deciding when a machine appears to decide for us?

One could argue that these systems have the potential to complement—and therefore improve—human decision-making by reducing the impact of our cognitive biases and enhancing the efficiency and effectiveness of decision-making processes.

But it can also be argued that the absence of human emotions and the inability of these systems to navigate the subtle complexities of ethics raise serious concerns about the potential unintended consequences of outsourcing our decisions to AI—as well as the seemingly inevitable erosion of human agency.

Scholars from many disciplines have long grappled with the ethical risks associated with AI decision-making. The core philosophical dilemma lies in the fact that AI systems, despite their advanced capabilities, lack inherent human qualities such as empathy, moral reasoning, and the ability to consider the broader social implications of their choices.

So, to what extent can we consider these systems responsible for their decisions—or for the consequences of their decisions?

One of the main concerns is the potential for AI systems to make decisions that conflict with human values and social norms, a theme that has been repeatedly explored in science fiction books, movies, and series. 

AI could perpetuate existing biases or even introduce new forms of discrimination, as explored in movies like 2001: A Space Odyssey (1968), Colossus: The Forbin Project (1970), and I, Robot (2004, based on Isaac Asimov’s work); in the episode “The Ultimate Computer” of Star Trek: The Original Series and the episode “The Measure of a Man” of Star Trek: The Next Generation; and in William Gibson’s novel Neuromancer (1984).

And what level of transparency or explainability can—or cannot—exist in the decision-making processes of AI systems? In fact, throughout history and even today, it has often been difficult to explain the reasons behind human actions and decisions.

Will these challenges be solved through even newer technologies or by enacting new laws? Probably not. And here another paradox arises: if we want AI to make decisions based on our ethical values, why aren’t we making those decisions ourselves? Have we already forgotten what it means to be human?

A Fictitious Dialogue About Human Reality in the Age of AI

We don’t know where it happened, and we don’t need to know. Perhaps in a place belonging neither to the past nor the future. Perhaps in that timeless space where broken stories come together, not to be fixed, but to be mutually acknowledged.
 

Four people, each marked by their time and circumstance, met briefly in a nameless room. There, a round table, shelves of untitled books, a mirror that doesn’t quite reflect, and a clock whose hands have long since stopped.
 

Gregor, Winston, Romney, and Solan were all there. None of them sought answers. None offered solutions. What they shared was something rarer, and more necessary in our time: consciously intentional presence.
 

Gregor, whose life had been interrupted by a metamorphosis that turned him into something his family could not—or would not—understand, spoke from within his silence. There was no resentment, only the memory of how others’ discomfort had sealed his isolation.
 

Winston, from a world watched and controlled by constant distortion of the truth, shared the weight of seeking authenticity in a place where even thoughts were betrayed. His was not a cry of rebellion, but a quiet yearning for something real, something untouched by manipulation.
 

Romney, deemed “obsolete” by a system that could no longer tolerate faith or individual thought, didn’t defend his beliefs—he inhabited them. His calm, lucid presence bore witness to the truth that human dignity requires no permission to exist.
 

Solan was the newest arrival. He came from another time—one not yet fully here. A time in which artificial intelligence has become deeply intimate, increasingly attuned to our emotions, patterns, and choices. 
 

He shared his concern: that in this hyper-personalized connection, we may lose what genuinely connects us to each other. That if we do not reclaim the art of deep listening, we risk being reflected endlessly by machines that cannot truly return our gaze.
 

And so, across memories and futures, no one interrupted. They listened. They recognized. And in that recognition, a different form of humanity emerged, the one that endures through shared presence.
 

On the table, an open book began to fill with words no one had written, but all had felt. They were not ideas or arguments, but traces of existence. The mirror, for a moment, reflected something—not a face, but a shared awareness. And the clock… the clock no longer marked the passage of time. It marked something else: the depth of the eternal moment.
 

At the end, without needing to decide who spoke, they all breathed a single truth, as if it were an ancient echo:
 

“And in presence, we are no longer insects or traitors or obsolete… In presence, we are… human.”
 

It is not the speed of our tools that defines us, but the depth of our relationships. Relearning how to be present and recognizing one another without judgment is the foundation of a new future because the future must begin with something as simple as truly listening to one another.

 

Beyond the Threshold: Rethinking the Universe, Rethinking Ourselves

At the intersection of advanced technology and contemporary philosophy, a new narrative is unfolding—one that redefines our understanding of the universe and of ourselves.
 

Shelly Palmer’s recent reflections on the rapid advancement of AI-powered humanoid robots invite us to question our identity and the role we play in an increasingly automated world. These technological developments not only promise to transform labor markets and domestic tasks, but they also challenge the very essence of what it means to be human.
 

The convergence of more powerful AI models, advanced dexterity, and multimodal learning is bringing robots out of factories and into everyday life. 
 

Projects like Google DeepMind’s Gemini Robotics, which integrates vision, language, and action, enable machines to perform complex physical tasks without extensive pre-programming. These robots can fold paper, unscrew bottle caps, and organize objects with remarkable precision. Collaborations with leading robotics companies underscore the objective of making robots more capable, more useful, and ultimately, more accessible to businesses and consumers alike.
 

This technological evolution resonates with philosophical and scientific ideas that question the nature of reality itself. The holographic universe hypothesis suggests that our three-dimensional perception may actually be a projection of information encoded on a two-dimensional boundary of the cosmos. This notion, along with the possibility that the universe could be contained within a black hole, forces us to reconsider the very structure of reality. 
 

Moreover, Nick Bostrom’s simulation hypothesis, which proposes that we may be living in a sophisticated simulation created by an advanced civilization, adds yet another layer of complexity to our quest for self-understanding.
 

In this light, Otto Scharmer’s theory of the “emerging future” takes on new significance. Scharmer suggests that the future is not merely an extension of the past, but a constantly forming reality—one that we can influence through our awareness and present actions. The integration of humanoid robots into our society will not only transform the way we live and work, but it will also shape our collective evolution and redefine what it means to be human.
 

The convergence of these ideas suggests that a new understanding of the universe is inseparable from a new understanding of humanity. Technologies that challenge our perception of the cosmos simultaneously compel us to reexamine our identity and purpose. We are at a threshold where technology, philosophy, and science intertwine—pushing us to expand the boundaries of our comprehension and inviting us to take an active role in our shared future.
 

We are not merely witnessing a collection of disconnected trends. We are standing before a moment in history where the very definition of reality and humanity is shifting. The universe is no longer what we thought it was. We are no longer what we thought we were.
 

Therefore, our role is shifting from passive observers of technological change to active participants in shaping what “human” will mean in the 21st century.
 

It’s not a hallucination. It is a cosmic invitation to participate in shaping the reality that is coming into being. Will we answer that call?

 
