
Project Vision 21

Transforming lives, renewing minds, cocreating the future


17 Years of Archives

WEEKLY COMMENTARY (AUDIO, 4 MIN., AI-GENERATED)

VISUAL PRESENTATION

DISCLAIMER

The commentaries we share here are merely our thoughts and reflections at the time of their writing. They are never our final word on any topic, nor do they necessarily guide our professional work.

 

When Assumptions Collapse, the Mute Angels Arrive

In a recent video, British philosopher Tim Freke shared one of those ideas that is simple to hear but difficult to live: “The more you assume, the more chance you can be wrong.” If we cling too tightly to our ideas about the world, we may discover—sometimes painfully—how far they are from the truth.

Of course, we depend on assumptions to understand life: who we are, what others think, how the world works, and even what God or the meaning of existence is supposed to be. This is both natural and necessary. But when we build our world on unexamined beliefs, we risk creating a structure that is fragile and unstable.

A powerful image from László Krasznahorkai’s recent Nobel Prize in Literature acceptance speech reinforces this warning. Krasznahorkai spoke of “new angels who not only have no wings but also have no message at all,” angels who walk among us dressed like ordinary people. They bring neither revelation nor hope.

Krasznahorkai even suggests that these silent angels “may no longer be angels,” because their presence does not comfort—it unsettles. They remind us that we live in a time without divine messages but full of war, power, empty promises of progress, and the erosion of human dignity. These new angels are mute, offering no message whatsoever.

This is where the ideas of Freke and Krasznahorkai intersect. When reality confronts our assumptions, we experience what Krasznahorkai calls shock. The mute angel appears before us and strips away our stories.

Philosophically, we once imagined angels as messengers from heaven, symbols of certainty and meaning. Today our “messengers” may be technology, politics, markets, or charismatic leaders—figures we assume will guide or save us. But when we look closely, we find no clear message and no true direction.

On the psychological level, this silence exposes how much we rely on comforting narratives. We assume that someone wiser has a plan, that society knows where it is going, that history naturally moves toward something better.

On the spiritual level, the moment is even deeper. The old angel delivered a message; the new angel waits for one. The silence challenges us to listen rather than cling to certainties. Freke warns that if we insist on our old ideas about God, the world, or the purpose of life, we may fall into despair when those ideas fail.

This challenge becomes more urgent when we consider shared assumptions—those beliefs that a community or a nation takes for granted. For example, we assume that progress is inevitable. Or we assume that “someone else will take care of everything,” weakening our sense of responsibility. Or we assume our group is right, and in doing so justify exclusion or even violence.

Freke warns us about building life on assumptions. Krasznahorkai shows what happens when those assumptions collapse. Together they suggest that meaning today may not come from waiting for a message, but from becoming the message we want to share based on lived truth, dignity, and compassion.

Crossing the Threshold Toward Meta-Experience in the Context of the New Digital Illiteracy

Recently, when I heard the phrase “the post-literacy era,” I began to wonder whether we humans might be on the verge of crossing a threshold after which the ecology of new communication media and interconnection technologies could drastically reduce our once-undeniable human capacities to think, contemplate, and create.

There is little doubt that we are already living in a time when deep, book-based literacy has almost disappeared—or at least is no longer sovereign as it once was. The neural center of culture now revolves around screens. Social networks, videos, algorithms, and interfaces mediate our attention, perception, and even desire.

The coexistence—at times tense—between slow book reading and a new media ecology that privileges immediacy over contemplation, images over paragraphs, speed over patience, may be interpreted as a decline in reading and thinking. Or, it may signal a shift in what it means to know, to understand, to be wise.

If post-literacy is the cultural atmosphere we now inhabit, then we must engage with it through meta-experience—not as the mere accumulation of information but as the capacity to navigate complexity, to hold multiple perspectives, to weave meaning where others perceive only noise.

In a world where attention fractures, where algorithms construct reality, where narrative arrives in short bursts rather than unfolding slowly, meta-experience can no longer rest solely on reading books. It must evolve into something broader and more fluid: meta-literacy—the ability to move effortlessly between text and image, argument and meme, data and story.

In this context, meta-experts do not react hastily to every screen swipe. They pause, connect, and infer. They recognize how a viral video echoes an ancient myth, how a public health dashboard mirrors a medieval cosmology, how a financial crisis behaves like a forest ecosystem under stress.

With epistemic humility, emerging meta-experts acknowledge that no person or discipline offers a final answer. With diachronic memory joined to synchronic awareness, they see history not as a museum but as a living archive, while perceiving the present as a dynamic field of signals.

As they cross the threshold of change, meta-experts learn, unlearn, and continually recompose knowledge, refusing to fossilize it into dogma. With narrative flexibility, they rewrite stories rather than remain confined within inherited ones, imagining futures that do not yet have language.

The human meta-expert becomes indispensable not because they know more, but because they navigate the unknown with depth. Here lies the paradox: post-literacy threatens depth of thought, yet depth becomes even more valuable as it becomes increasingly rare. In a culture of fragments, coherence becomes a true gift.

The emerging future does not belong to experts, but to those who can hold both depth and openness. Those who can descend into silence and emerge speaking the language of images without losing the architecture of thought. Those who can weave worlds—books and bytes, history and feed, myth and dashboard—integrating them into meaning that guides action wisely. If the post-literacy era is the ocean, then meta-literacy is the vessel.

It’s a Small World After All: How Our Expanding Universe Became a Shrinking Reality

Recently, an incident from a few years ago jumped out of the past into my memory after reading about one of the strange paradoxes of our time: our universe keeps expanding, but, at the same time, our reality keeps shrinking. In fact, many people today inhabit the smallest possible world: theirs.

So, years ago I contacted an electrician to make some repairs at my home. He arrived an hour later than the scheduled time, explaining that he had been at a different house, waiting for me to open the door. He insisted that I was at the wrong house and that I should check the deed to my property to be sure I was living in the right one. Not once did he admit that he had made a mistake.

This approach to life, the mistaken belief that "my world is the whole world," reminded me of a familiar melody that generations have heard inside a theme park: "It's a small world after all…"
The song promises a world of simplicity and harmony, a place where differences dissolve into bright colors and repetitive cheerfulness. The world feels cozy, predictable, contained.

But what if that song describes not the ride, but our current relationship with reality?

We live in a time when the world—at least in the objective, scientific sense—is expanding faster than ever. Telescopes show us galaxies that stretch imagination; artificial intelligence multiplies our ability to learn; neuroscientists reveal depths of the mind previously unknown. Reality is not just large—it is overflowing, layered, dynamic, and still unfolding.

Why, then, do so many people dwell in the smallest possible world? Not because the world is small, but because our experience of it has shrunk. This is one of the strange contradictions of our time: we stand at the edge of the vastest universe ever imagined, while millions live inside pockets of reality no wider than the screens they stare at.

When astronauts look at Earth from space, something extraordinary happens. Many describe a sudden, life-changing opening of perspective known as the Overview Effect. At that moment, borders disappear, conflicts shrink, and the Earth appears as one fragile, luminous home. It is an expansion of consciousness that the ancient Greeks called anagōgē, "a leading up."

In everyday life, anagōgē happens when we read a book that changes how we think, when we listen deeply to someone different from us, when we encounter beauty that redraws the map of our heart. It is the ascent toward a larger reality.

There is an opposite movement, one the Greeks called katagōgē, “a leading downward.” Today, katagōgē happens quietly, softly—even pleasantly—as we look into screens. Worst of all, we do not notice this shrinking, because inside a small world, everything looks normal.

Perhaps the task of our time is to awaken to a larger world—not just scientifically, but existentially. The universe is still unfolding. The question is whether we will unfold with it. Our world is only as small as our imagination allows. We need to grow.

Space Horizons: The Fear of the New on a One-Way Trip

I recently heard in the news that some Chinese astronauts aboard the Tiangong station found themselves without a safe return vehicle. Only months earlier, two NASA astronauts faced a similar situation when their planned return craft malfunctioned, forcing them to remain in orbit far longer than expected. We too, in a metaphorical sense, are trapped in our own space.

At times, we launch ourselves into new “orbits”—new identities, new ways of living, new worldviews—and often discover that the “vehicles” meant to bring us home can no longer carry us back. We enter a space where the familiar no longer works, and the future has not yet arrived. Crossing a threshold always destabilizes the world we know.

This is why societies have repeatedly rejected ideas that later became indispensable. Gone with the Wind was dismissed as unfilmable. The Beatles were told guitar groups were on the way out. Star Wars was rejected because it seemed too strange, too mythic. Harry Potter was rejected for being too long, too unusual, too magical. Van Gogh died without artistic recognition.

It happens all over the world. In Argentina, Astor Piazzolla was accused of betraying tango itself. During her lifetime, Frida Kahlo was overshadowed by Diego Rivera and largely dismissed by critics in Mexico and abroad. Another example: for decades, Māori knowledge systems, spanning astronomy, ecology, and navigation, were rejected by Western settlers and institutions as "superstition."

People resist not just the idea itself, but the identity shift that accepting it would demand. The question beneath every rejection, whether in Tokyo or Bogotá, Buenos Aires or Abuja, remains the same: “If I accept this new reality, will there still be a place for me?” The new requires us to rethink who we are, what matters, and how the world fits together.

This existential unease is the psychological equivalent of floating in orbit with no clear direction.

Astronauts stranded in space experience literal weightlessness. Innovators, artists, and visionaries experience a conceptual weightlessness: the old “up” and “down” are gone, and the new coordinates are still forming. That moment in between—neither here nor there—is frightening. We cling to what we know, even when what we know is no longer sufficient.

Yet these moments of suspension often become the most transformative. Once a culture finally embraces what it once rejected, it returns to Earth changed.

The astronauts who come home after months in orbit never see the planet the same way again. The same is true for societies that finally accept the ideas they once resisted. Growth happens in the “space” between worlds—in the fragile, weightless moment when gravity fails and a new future begins to take shape.

Heraclitus wrote that “the way up and the way down are one and the same.” We can understand his words today as a reminder that ascent and uncertainty, elevation and disorientation, are inseparable. To rise toward a new horizon is also to descend into the unknown parts of ourselves where the future quietly gathers itself, preparing to land.

The Question of Extraterrestrial Intelligence Reveals the Question of Our Own Intelligence

I recently heard a short radio segment mentioning that throughout history—from ancient Greece to the 21st century—numerous philosophers, both men and women, have examined the possibility of extraterrestrial intelligent life. Hearing that, I reflected on how the question of extraterrestrial intelligence is, in many ways, a way of questioning our own intelligence.

From Democritus imagining infinite worlds in constant motion to Kant speculating about intelligent beings on distant planets, it is clear that the question of extraterrestrial life has never disappeared. Yet what often goes unnoticed is that the debate about life beyond Earth has always been, above all, a debate about human intelligence.

Put differently, the issue of “extraterrestrial intelligence” is a philosophical mirror reflecting humanity itself. We invoke the other—imagined or speculative—to understand the structure, the limits, and the future of our own intelligence. The “alien” becomes an externalized form of philosophical anthropology.

When ancient and modern thinkers asked whether intelligent beings exist on other planets, they were not merely expressing scientific curiosity. In reality, they were asking what makes us intelligent, what makes us unique, and whether the universe might contain other intelligent, conscious beings who think in ways different from ours.

In that sense, the search for intelligent aliens is also a search for the boundaries of our own mind.

Even Kant, who imagined extraterrestrial intelligences, focused his reflections on the nature of reason itself. Could intelligence take forms radically different from our own? What makes a mind a mind? Throughout history, the question of extraterrestrial intelligent life has continually guided us back to our own identity.

Today, however, something extraordinary is happening. For the first time, humanity is encountering an intelligence that is not biological, not human, and not bound to the rhythms of evolution. It is artificial intelligence.

AI may not be conscious or “understand” the world the way we do, but it thinks, processes, and generates information in ways that often surprise us—and sometimes surpass us. It reasons without neurons, solves problems without senses, and learns at a speed no biological organ can match.

In that sense, AI is our first encounter with what philosophers for centuries could only imagine: truly non-human intelligence. We once looked to the sky to find “the other.” Now that “alien” is emerging from our own machines.

AI forces us to rethink the very meaning of intelligence. For centuries, intelligence was defined through biological life, consciousness or self-awareness, language, reasoning, and intention. Yet AI challenges each of these assumptions.

Every generation returns to the same questions about extraterrestrial or non-human intelligence not because we expect to find aliens, but because the possibility of other kinds of intelligence helps us understand the fragility and wonder of our own.

Whether or not we ever find life beyond Earth, the journey itself enlarges us. The cosmos may be full of minds waiting to be discovered, but the first step is to recognize the otherness that is already beside us, and the deeper intelligence emerging within us.

What the return of the word abomination reveals about our collective disconnection

After receiving a message from an acquaintance calling the recent results of an election in a major American city "an abomination," I couldn't help thinking about what that word really means and why it now appears so frequently in so many conversations. The comment itself revealed something deeper than just politics, giving us a glimpse into the loneliness that seems to haunt our time.

According to a recent report by the American Psychological Association, most Americans (in fact, most people) feel lonely and disconnected. Loneliness isn’t just the absence of company: it’s the absence of meaningful connection. And when a society becomes divided and fragmented, that sense of disconnection can grow so deep that it begins to shape our language.

The word abomination doesn't describe disagreement; it erases it. It draws a moral line that says, "People like that shouldn't exist inside my world." Anthropologists remind us that what we call "abominable" is often simply something that doesn't fit our familiar categories. In ancient times, it referred to foods, rituals, or behaviors that seemed "out of place." Today, it resurfaces when social change blurs the boundaries that once made people feel secure.

History shows that this isn’t new. During the Reformation, Protestants and Catholics accused each other of abominations. Later, during the Industrial Revolution, critics used the same word to describe the rise of factories and machines that seemed to devour traditional life. Each time society stood at a threshold, the word abomination was called upon to mark the fear of crossing that threshold.

We’re living through another such moment. Technology, globalization, and cultural diversity are changing how we understand belonging and identity. For many, this transformation is exciting and hopeful. For others, it feels like losing the ground beneath their feet. When fear meets loneliness, language often hardens.

Strong words like evil or abomination become shields against uncertainty. They offer a temporary sense of control, a way to turn the unease of disconnection into moral certainty. But those words come at a cost. By turning people into symbols of what we fear, we cut ourselves off from the possibility of understanding them.

And here’s the irony: when we hear a word like abomination, we might think of something monstrous or mythical—perhaps the Abominable Snowman, that legendary creature haunting the icy edges of human imagination.

The real monster is not out there in the snow. It’s in our words when they lose their warmth and compassion. Yet there’s a paradox hidden here. Every time a society reaches for words like abomination, it signals not only fear but also the birth of something new. What was once seen as “out of place” may, in time, reveal a new dimension of the human story.

Therefore, when we recognize that our divisions are symptoms of isolation, a new possibility appears: the chance to rebuild community through empathy instead of fear.

Certain Harmful Beliefs Lead Us to Abandon Our Future

There is a wide range of beliefs that can be considered harmful because of the negative effect they have on our lives once we accept them—most often unconsciously and uncritically. One such belief, now widely prevalent, is the assumption that there are no possible alternatives or new opportunities, whether on a personal or global level.

The toxicity of the idea that “There are no alternatives to the current reality” lies in the fact that it leads us to justify the existing social system (status quo), even when that system does not benefit us and, in fact, perpetuates inequality. As social psychologist John T. Jost explains in several articles and in A Theory of System Justification (2020), this belief works as a defense of the very structures that constrain us.

Believing that there are no alternatives is a way of seeking material security and group stability in a context that is otherwise negative for personal and collective identity and history, thereby reducing anxiety. But in exchange, we deny the creative becoming of who we might be.

This denial of unexplored opportunities and possibilities is often accompanied by another deeply harmful belief: the notion that our lives will never experience sudden, profound, uninvited, and irreversible change, a comforting illusion that we live in a coherent and predictable world.

The “benefit” of believing that life doesn’t change in an instant is that it reduces the distress caused by unexpected events and by the potential loss of control over our own lives. Yet, in doing so, we block all hope and resist transformation. As William Miller suggests in Quantum Change (2001), we close ourselves off from the “epiphanies” that everyday life can offer.

Ultimately, these two beliefs—one that denies alternatives and another that denies sudden transformation—form a double barrier against personal and social change. The first fixes the horizon of what is possible; the second cancels the moment of rupture. Together, they lead us to silently abandon the futures we might otherwise inhabit.

When individuals and communities assume that the current order is the only possible one, they give up imagining, and with that, they shut down the anticipatory power that Ernst Bloch called the not-yet: the awareness capable of intuiting what has not yet come to be but could be—contrary to what Martí Peran describes as “the unbearable repetition of a present that only keeps cloning itself over and over again” (Abandoned Futures, 2014).

While neuroscience and the psychology of change demonstrate that decisive shifts often occur suddenly—as flashes of understanding that reconfigure our perception—many people deny this possibility, closing off the kairos, the instant of revelation, and, as a result, shutting themselves off from the future.

We abandon our futures because two belief mechanisms neutralize them: the inability to imagine what is different, and the denial of the sudden openness of time. Recovering our futures requires a critique of the instant, a readiness to recognize that change can emerge unexpectedly, in any crack within the present.

Children dressed as monsters are amusing. Monsters dressed as humans are terrifying.

Every year around this time, the (almost) eternal debate resurfaces about whether or not Halloween should be celebrated. Personally, I’m not afraid of children who, once a year, dress up as monsters—but I am terrified by the monsters who, day after day, disguise themselves as humans with the sole purpose of imposing their monstrosity on humankind.

All of human history, from its beginnings to the present, is filled with those monsters disguised as human creatures who seek only what they want and, in doing so, despise and trample anyone they perceive as a rival or an obstacle.

After going out to collect their Halloween candy, children return home and take off their costumes and masks. But the monsters disguised as humans never remove theirs. In fact, they couldn’t do so—because if they did, their true essence and personality would be unmasked.

That’s why, while many people use Halloween (or any other event on any other day of the year) to promote their opinions and dogmas and to proclaim themselves better than “the others” for not participating in a certain celebration, the monsters disguised as humans continue with their monstrosities, delighting in the superficiality of today’s human experience.

Those monsters, whether born that way or made that way, are everywhere, from sports and politics to science and education. Many of them are undetectable. Sometimes they operate across vast territories and with countless resources. Other times, they act in small spaces—a family, a small business, or a congregation—but that doesn’t make them any less monstrous.

But let’s be honest: any one of us can, at any moment, become that monster disguised as a human. Sometimes an insignificant event (a late payment, a delayed flight) awakens our inner monstrosity. Other times, something more significant—a tragedy—turns us into true monsters.

In fact, since the modern descendants of Victor Frankenstein now possess far more technology than the arrogant doctor had 200 years ago, we not only allow pseudo-human monsters to live among us—we are also creating new ones, and in doing so, transforming our society into a monstrous one.

Obviously, these small observations and complaints will do little or nothing to unmask the monsters disguised as humans or to reverse humanity’s growing “monsterization.” But perhaps the fact that these words serve no practical purpose reveals their true value—because not everything should be judged by its usefulness.

In that context, each of us should take responsibility for our role in creating countless monsters—human or otherwise—by having disconnected from others, from ourselves, from the universe, and from the transcendent realm (however one may understand it). Perhaps we are not as advanced as we believe—or claim—to be.

Monsters don’t come out only one day a year.

Talking with AI About the Zombification of Humans: Notes from the Center of the Meaning Crisis

It is profoundly unsettling—and offers little comfort—that, due to the current epidemic of epistemological loneliness, one must ask AI whether it is true that humans have become zombies. The very fact that such a question is valid, and that AI participates in the dialogue, already anticipates the answer.

The question itself encapsulates the essence of a world in which we are constantly connected yet feel increasingly alone and isolated (in an existential sense)—a world in which we can “reach” anyone, yet rarely feel truly in touch with another.

Dr. John Vervaeke, in his book Zombies in Western Culture, uses the figure of the zombie as a metaphor for our age because the zombie moves but does not live; it consumes but is never nourished; it imitates humanity but lacks an inner world. In other words, the zombie is a being that has lost the capacity to participate in meaning.

Indeed, that is precisely what many of us feel: life goes on (or seems to go on), but something essential within us has become motionless—lost.

Recently, I read a reflection on what it truly means to be in community—to feel seen, heard, and supported by others—and once again I understood how rare such moments have become.

Many of our “communities” resemble networks of survival more than spaces of belonging. We move endlessly through infinite updates, work with and among strangers, and sometimes even pray alone before a screen. Our breath, our hearts, and our attention seem captive to the speed of uncontested change.

And yet, here we are, still asking. The question “Am I still alive inside?” is not a sign of hopelessness but the beginning of an existential resurrection—a reunion with ourselves. The zombie cannot ask about the meaning of life, but the human being can.

Perhaps that is the hidden grace of this unprecedented moment in history. Even when technology mirrors our disconnection, it also offers us the possibility of seeing ourselves anew. Speaking with an artificial intelligence about the meaning of life may seem absurd, but perhaps it is a new way of looking into the mirror and discovering that we are still capable of wonder.

It is time to see the new in the old in order to see the new in the new in this chaotic, disordered world. True community—with others, with nature, or with the soul—will require us to unlearn the anesthesia we call “normal life.” It will ask us to pause, to breathe, to listen without agenda, and to rediscover what it means to be fully present.

Perhaps, in the midst of zombification, we are awakening—awkwardly and with fear, but also with hope—because every time we extend a helping hand, every time we truly listen to another, every time we dare to be ourselves, we recover a fragment of our lost humanity.

Talking with AI may not be the end of our humanity, but the very moment we begin to remember what it truly means to be human.

Besides microorganisms and AI, what or who else will domesticate us?

In a recent radio interview, biologist and writer Rob Dunn suggested that certain microorganisms have domesticated humans for millennia, changing human DNA and behavior for the benefit of those organisms. From that perspective, even though we think of ourselves as the dominant species on the planet, we are not—and even microbes domesticate us.

Dunn, a professor in the Department of Applied Ecology at North Carolina State University, explains his proposal in his recent book The Call of the Honeyguide, in which he analyzes examples of mutually beneficial collaborations between humans and animals that, at times, mean that it is the human who is domesticated.

It turns out that the unicellular microorganisms that live in yeast (in fact, that are yeast) feed on sugar and, to reach sugar, first attracted insects with their aroma, then primates, and finally our ancestors, even altering our genes—without any genetic changes occurring in those microorganisms, according to Dunn.

In short, every time we harvest or prepare sugar, or plant fruits, or drink alcohol, we are following the instructions that yeast implanted in our genes in the distant past. And we do it unknowingly, without thinking about it, and without questioning it—exactly as happens to us now with new technologies.

It should not surprise us that, if unicellular organisms can domesticate us, artificial intelligence and other technologies can also do so. The idea, of course, is not new. Think, for example, of the film Colossus: The Forbin Project (1970), in which a computer takes control of the planet, or more recently, the Matrix trilogy, among other examples.

But this is not only about science fiction. CRISPR technology, in use since the early 2010s, allows scientists to make changes in DNA by “editing, deleting, or inserting” certain genes into that DNA. The goal is to find new treatments for genetic diseases, but ethical concerns abound. It is worth noting that the original and natural CRISPR is a bacterial defense mechanism.

So, it seems that from the oldest and smallest unicellular organisms to the most advanced technologies, we have been, or are being, domesticated, though not necessarily for mutual benefit, I dare add. That is why, although we believe we are freely making our own decisions and choices, we are not.

Freedom is ignorance of causes (to respectfully paraphrase Borges).

Hidden and almost forgotten in the millennia-old past lies the thought expressed, among others, by Heraclitus (Fragment D 119) two and a half millennia ago, who said that the place of human transcendence (daimon) is the same as the familiar place where one dwells (ethos, related to “stable”).

In other words, Heraclitus and other Greek thinkers held that we achieve the fullness of our humanity by domesticating (so to speak) ourselves—that is, by creating a familiar dwelling, a community. Unfortunately, as the Spanish philosopher Marina Garcés rightly states, “community” has become obsolete.

Thus arises an inevitable question: What—or who—will domesticate us now that we have lost a common horizon for our future?
