
Project Vision 21

Transforming lives, renewing minds, cocreating the future

16 Years of Archives



The commentaries we share here are merely our thoughts and reflections at the time of their writing. They are never our final word on any topic, nor do they necessarily guide our professional work.


Our foolish abuse of new technologies threatens our very future

We live in a time of such scientific and technological advancement that we can now (almost) detect megastructures of extraterrestrial civilizations in our galaxy, and we can now (without the “almost”) digitally duplicate any person, living or dead, and interact with that duplicate. But so much technology creates immense risks for the future of humanity (assuming there is still a future).

The German-British economist Ernst Schumacher (1911-1977) claimed that humanity is in “mortal danger,” “not because we lack scientific and technological knowledge, but because we tend to use it destructively, without wisdom.”

In other words, what is so helpful to us and opens up so many new opportunities is the same thing we foolishly use to destroy ourselves. The mechanism that makes us intelligent is simultaneously the mechanism we use to deceive ourselves. When that happens, wisdom disappears, and only arrogant ignorance remains.

By confusing “knowledge” with “wisdom” and, at the same time, confusing “knowledge” with “information” (“I already know, I saw it in a movie”), all possibility of reconnecting with the source of wisdom disappears because, due to that confusion, we look for “wise” answers by increasing our knowledge, without ever reaching wisdom.

Acquiring knowledge solves the problem of ignorance, but it does not solve our foolishness. It is possible to have acquired an impressive amount of knowledge and, at the same time, to be impressively foolish. Wisdom is the antidote to foolishness. And the constant search for that wisdom (accepting that we will never find it in its fullness) is philosophy.

I agree with what the Spanish philosopher Carlos Javier González Serrano recently expressed when he said that “never before has philosophy been so necessary for knowing.” But knowing what? González Serrano proposes that, at this moment, “knowing” can be understood as thinking and acting in the face of the (emotional and psychological) manipulation to which technology subjects us.

In this context, folly consists, paraphrasing González Serrano, in seeking a way to live “in an uninhabitable world.” Or, if I may, we seek to live peacefully in a world that we ourselves have made uninhabitable, a totally artificial world that we believe to be real, a technological Platonic cave that manages and governs all our desires and our attention.

To quote the Spanish philosopher again, “infinite scrolling” has become the prevalent way of “existing” in the world. We think without questioning what we think, and we confuse “normal” (for us) with what is “real” and, even worse, with the only possible reality. Therefore, not even a pandemic can make us reflect on our lives, our culture, and our society.

What, then, can we do? Obviously, I don't know. I am not wise, and I never will be. I am a perpetual seeker of wisdom. Therefore, I dare to suggest that what we should do is talk with truly wise people (not “influencers”), regardless of what era they lived in and what tradition they belong to.

Do we sleep and dream to prepare for the future? It seems we do

A new study published in the prestigious journal Nature on May 1 indicates that, during the first half of sleep, the brain “reboots” neuronal connections, apparently with the purpose of preparing for the future or, more specifically, to be ready to learn what needs to be learned in the near future.

The study, led by scientists Anya Suppermpool and Jason Rihel, from University College London, suggests that “remodeling” of the brain during sleep allows new connections to emerge between brain cells the next day. In other words, during sleep the “strong connections” that brain cells have when awake are “deactivated” or “relaxed” (so to speak).

In short, according to the researchers, it seems that sleep prepares the brain to “generate new connections the next day” (quoting Dr. Suppermpool); that is (I add), the brain prepares itself for the future, which is neither a continuity of the past nor a repetition of the present.

It is worth mentioning that the study did not include experiments or observations of human brains, but, according to the aforementioned scientists, it is possible that these same brain patterns during sleep will eventually be observed in humans.

Be that as it may, the idea that we sleep and dream to better connect with the future is fascinating. And it in turn connects with myths, since myths are shared dreams and dreams are personal myths. If we accept this correlation, perhaps, on a global level, we should “reset” our culture and our collective “brain” in order to learn and access a new future.

But, unfortunately, both on a personal and social level, we live in a world that practically does not give us time to even breathe, much less think and much less reflect or meditate (which, in short, is "dismantling" the petrified connections between our thoughts and emotions). Therefore, over and over again we repeat the same thing expecting different results... until we no longer expect anything.

We stay awake watching movies or videos “to distract ourselves” or “to be able to sleep better” and, in this way, we train the brain precisely not to sleep, rest, or disconnect. We don't give the brain time to disconnect from the past to reconnect with the future. Therefore, the next day, even if it is a new day and the future has arrived, we remain the same as before.

In scientific terms (those used in the aforementioned article), by sleeping poorly or not at all, we strip the brain of “synaptic plasticity.” Each neuron loses the plasticity that would otherwise allow it to develop new ways of understanding reality.

Perhaps the ancients knew something or, better yet, lived all of this. After all, for them, dreaming, sleeping, having visions, and sharing myths (in the truest sense of the word) were practically a single activity, an activity focused on the future.

Paradoxically, perhaps we need to reconnect with that past in order to finally be able to reconnect wide awake with the new future.

Ignorance and pride prevent us from seeing the signs that the future sends us

Recently I witnessed (from a distance) an accident on a busy street north of the city where I live. It turns out that one lane was closed for construction, with signs, flashing arrows, and orange cones warning of this situation. But a driver ignored all these signs and, after suddenly braking, collided with the cones. Fortunately, there were no injuries.

Traffic was stopped for a few minutes, and when it was my turn to slowly pass by the scene of the accident, I lowered the window of my car and could hear the driver of the accident vehicle say something like "I didn't know what those signs meant." In other words, for him, the signs, arrows, and cones had no meaning whatsoever, much less the meaning of changing lanes in time.

Another day, while I was driving on a highway, a car sped past me, ignoring both a construction zone and the signs alerting the driver that he was speeding. In this case, the driver clearly knew what the speed limit signs indicated, but simply chose not to obey them. Shortly after, the police stopped him to fine him.

These situations led me to reflect that many times (almost every time), when the future sends us signals, we quickly discard them, either because we do not understand them or because, although we understand them, we do not want to change our current behavior. And then we suffer the consequences: we collide with reality, or something makes us pay for our actions.

But whether or not we pay attention to the signs that the future sends us, whether or not we understand those signs, or simply ignore them, the future constantly continues to send us signs, be it a sign, a luminous arrow, a thought, a phrase, some news, or whatever. Obviously, it does not matter how many signs we receive from the future if our ignorance and arrogance prevent us from seeing or understanding them.

In my experience, signs of the future are always unexpected, instantaneous, and fragmentary. They are flashes that appear and disappear quickly, mere indications of something new that is about to enter our consciousness. They are something like a fleeting glimpse of the reality adjacent to ours, which is already there (the future is always already there), but which we have not yet accessed.

As Heraclitus said two and a half millennia ago: “He who does not expect the unexpected will not find it” (Fragment 18). The “unexpected”, what “we do not find” in the present, what is difficult to discover and reach, the unexplored, that which leaves no traces we can follow, is, ultimately, the future, which should not be confused with tomorrow or with what is merely to come.

The future is the expansion of consciousness, and that expansion only occurs when the mind opens to new possibilities and opportunities, when the heart connects with those possibilities, and when the will activates them. For the closed mind (ignorance) and the closed heart (pride) there is no future.

It seems that in a short time AI will be able to think for itself. When will we humans think?

In a recent article (April 11, 2024), Joelle Pineau, the vice president of artificial intelligence (AI) research at Meta, stated that they are working hard to find a way for AI "to not only talk, but really reason, plan and remember."

In fact, according to that same article, Meta and OpenAI were at that time “on the verge” of their new AI models “being able to reason and plan.”

Two weeks later, on April 26, 2024, OpenAI announced that its new version of ChatGPT can now “remember and plan,” although, perhaps out of modesty or prudence, there is no mention that ChatGPT 5 (or whatever it will be called) can already truly reason.

In other words, in approximately a year and a half ChatGPT went from being a mere novelty, almost a toy (as the first telephones and the first airplanes were considered), to an AI that speaks, remembers, and plans. One can speculate that ChatGPT also already reasons for itself, or soon will.

Nevertheless, these new advances invite a closer, more detailed, and careful examination of the impact of the imminent arrival of artificial general intelligence (AGI), which, unlike current AI, will no longer be a purely reactive system, but rather a system capable of sophisticated cognitive processes. How sophisticated will they be? We will soon find out.

While all this happens, that is, while AI learns to think and reason (what's next? Becoming self-aware?), we humans think less and less. And, as a consequence, we know less and less and, for this reason, it is increasingly easy for us to accept any type of misinformation, pseudo-theory, or funny video, while rejecting “reality.”

As the American philosopher Daniel Dennett (recently deceased) said in his memoir “I've Been Thinking,” the real problem we face is not the arrival of AGI or some other type of superintelligence. The existential threat that could even end civilization is turning AI into “a weapon for disinformation.”

The consequences of this situation, according to Dennett, will be devastating for our society because “we will not be able to know if we really know, we will not know who to trust and we will not know if we are well informed or misinformed.” Furthermore, “We could become paranoid and hyperskeptic, or simply apathetic and impassive. Both are extremely dangerous routes.”

Judging by what is seen and heard in the so-called “media” and “social networks” (names reminiscent of the “Ministry of Truth” of Orwell’s 1984), what Dennett warned us about is already happening. In a way, if that’s true, we are all already doomed.

If this is the case, our situation is remarkably similar (if not exactly the same) to that of the souls described at the beginning of Canto 3 of the Inferno, in Dante's Divine Comedy. Those souls are in Hell, stripped of all hope, condemned to misery for not thinking, for having lost and forgotten “the good of the intellect.”

What will emerge once all the new technologies now scattered are merged?

A few decades ago, looking at the telephone of that time, and then at the radio, the television, the camera, the video recorder, maps, flashlights, and many other artifacts, I could never, not even in a moment of heightened imagination, have anticipated that one day all these devices would be merged into what we today call a smartphone.

But now, with that prior experience of seeing a single device emerge from different technologies, we can and must ask ourselves what will emerge once quantum computing, neuromorphic computers, artificial intelligence, new forms of energy, robotics, and other advanced technologies merge into a single “reality.”

Everything points, first of all, to the arrival of an almost immortal synthetic human, with physical, mental, and cognitive capacities and abilities unimaginable for us, mere biological humans, mortal and certainly limited and finite.

In other words, just as the disparate elements mentioned above merged into smartphones, so too (dare we suggest) will the disparate elements of the new technologies merge, but no longer into something so small that we can carry it in our hands, rather into something so big, possibly planetary in scale, that we will no longer be able to understand it.

Certainly, I am not talking about science fiction or conspiracy theories, but about a careful and constant reading of scientific reports and articles, published by serious, respected, and verifiable sources, which indicate that new entities never before seen in the known history of humanity are already emerging.

Again: it's not science fiction. The global network of supercomputers is already underway. Artificial intelligence capable of anticipating the actions of human beings (and even correcting them before they act) is already a reality. Prototypes of artificial brains have already been developed. Synthetic skin and muscles have been in development for years. And that list could be expanded almost indefinitely.

So, what is emerging? And another question: how prepared are we to respond to whatever emerges from the union of technologies that, as Arthur C. Clarke said, already seem indistinguishable from magic?

The arrival of synthetic humans and super-intelligent robots will mean coexisting with non-human intelligent entities (although not necessarily persons). How will this unprecedented situation affect our brains, our hearts, and even our decisions? I mean, we can barely live with one another; how are we going to interact with these new thinking beings?

But this new reality includes another perspective, that of “them.” How will synthetic humans and super-intelligent robots treat us? Because, although they are the result of our experiments, we will be able to do little or nothing to stop them if, as anticipated, all the technologies already available but still separate are merged in each of them.

And even if none of the above ever happens, the exercise of thinking about it and anticipating it is valuable in itself, because it prepares us for a future we cannot anticipate.

Closing ourselves to the present means excluding ourselves from the new future

I recently witnessed a situation in a local supermarket that exemplifies the mental, emotional, and psychological closure that, by keeping us locked in the present, prevents us from seeing the new future and, therefore, from connecting with that future. That is, consciously or not, we exclude ourselves from the emerging reality.

It turns out that a couple, already elderly and clearly newly arrived in the country, chose a package of meat and then asked to speak to someone at the supermarket. A few minutes later, an employee who spoke the couple's language arrived. Then the new immigrant told the employee that the meat was “badly cut.” For this reason, she added, she and her husband offered to “teach” how the meat should be cut.

In the minds of these people, the way they were accustomed to seeing meat cut, that is, the “normal” way, must be the only proper way to cut meat. Any other way of doing it was “wrong.” Even worse, anyone who did not cut the meat the way they (the newcomers) expected was, at best, an ignoramus who should be properly educated.

The supermarket employee, clearly understanding the psychological and cultural reasons that motivated the attitude of the couple in question, patiently explained to them that this is how meat was sold in that supermarket, that the cuts of meat were not bad, that the local butchers did not need to be educated, and that there were specialized butcher shops where they could buy the cuts they wanted.

This example reveals a highly prevalent psychological and existential attitude in our society, in which clinging and sticking to the past (more specifically, the past that one knows and lived) is considered the best and, in many cases, the only strategy when encountering a reality different from that past, one for which one is not prepared.

Obviously, the example of a new immigrant couple shopping in a supermarket in their new country and expecting to find exactly what they saw in supermarkets in their home country is a superficial and irrelevant example. However, the attitude of intransigence toward the new future and the intense (and harmful) desire to perpetuate the past and repeat the present are not.

After all, just as this couple demanded that the meat be cut the way they wanted and considered any alternative “bad,” a similar attitude is seen in social, political, and religious groups that consider their “truth” (note the quotation marks) to be the only authentic one and any other option something “bad” that must be eliminated or modified.

Intransigence can often be detrimental when it prevents individuals, organizations, or societies from adapting and embracing a better future.

Both the history of humanity and recent news in the media confirm beyond doubt that this is exactly what has often happened, and still happens, in relationships between human beings. However, a trivial disagreement about “badly cut” meat is one thing; a disagreement that endangers all humanity is quite another.

How can you think and act when science fiction becomes reality?

As a child, I liked to watch Star Trek, the landmark science fiction series that still continues to impact current culture and serves as inspiration for technological creations. However, I never anticipated, neither at that time nor in the near past, that Star Trek, far from being mere entertainment, was actually a documentary of the future.

I'm not exaggerating. On April 4, Arizona State University announced the start of a class titled "Star Trek and the Future of Humanity in Space," which brings together science fiction and academic inquiry to prepare students (and all of us) for the challenges of venturing beyond Earth.

The class will be taught by Dr. David Williams, a research professor in ASU's School of Earth and Space Exploration and a member of the science team for NASA's Psyche mission. Williams stated that Star Trek will serve as “a valuable mechanism for investigating the profound questions surrounding humanity's fate in space.”

In other words, the foundations of Starfleet Academy have already been established. And that's not all.

At the end of March 2024, it was announced that Microsoft has plans to build a $100 billion supercomputer, called “Stargate” (the name of another science fiction series). The new supercomputer would power OpenAI's next generation of artificial intelligence systems, according to a report by The Information.

Stargate is supposedly the fifth and final phase of Microsoft and OpenAI's plan to build several supercomputers across the United States. This supercomputer network is rumored to be one of the largest and most advanced data centers in the world.

As if that were not enough, a new big data processing system developed by researchers at the Chinese Academy of Sciences makes it possible to analyze large-scale neural activity throughout the brain in real time.

The FX System is based on virtual reality generated through an optical interface that extracts the activity of more than 100,000 neurons via a brain-machine interface, allowing researchers to analyze neuronal activity throughout the brain in real time.

Thanks to this development, the artificial brain, perhaps similar to the positronic brain of the android Data from Star Trek, does not seem as distant or impossible as it seemed until very recently. But there is still more.

A study published last March in Scientific Reports (Nature) by Professor Takaya Arita and Associate Professor Reiji Suzuki of the Graduate School of Computer Science at Nagoya University (Japan) reveals the emergence of various AI personalities, similar to human personalities, marking a significant milestone in the field of AI.

So how can we think and act when science fiction becomes reality? Perhaps the first step should be to realize and accept that the future is no longer a continuation of the past or a perpetual repetition of the present. Perhaps we should also acknowledge that we are attached to patterns of thinking that prevent us from connecting with the emerging future.

Whatever the case, what was previously thought to be entertaining science fiction is now real.

A very dangerous limiting narrative: the techno-determinist narrative

Searching for recent news related to the future, I came across the article “How Much of Our Humanity Are We Willing to Outsource to AI?” by Sage Cammers-Goodwin and Rosalie Waelen (The Nation, March 27, 2024), in which the authors question the passive acceptance of artificial intelligence (AI) and advanced AI systems such as AGI (Artificial General Intelligence).

Such uncritical acceptance of a technological future is known as a techno-determinist narrative, that is, a belief system that views technological progress as inevitable and inherently desirable, often overshadowing critical reflections on its broader implications for society.

The techno-determinist narrative, by suggesting that the only topic open to debate is the ethical, social, and existential implications of AI, prevents a deeper, more urgent, and necessary dialogue on how to regulate and optimize AI systems. In other words, the narrative of technological determinism is self-reinforcing.

This narrative, presented and accepted as the only possible alternative, subtly shapes our collective consciousness, fostering a mindset that passively accepts the trajectory of technological advancement without questioning its underlying assumptions or potential consequences.

By framing AI and AGI as inevitable forces of progress, we risk overlooking alternative futures and giving up agency in shaping the role of technology in our lives.

As Cammers-Goodwin and Waelen highlight, this techno-determinist perspective urges us to reevaluate our priorities and values as a society. Are we willing to sacrifice elements of our humanity for the sake of technological advancement? Is the relentless pursuit of efficiency and automation coming at the expense of human connection, creativity, and meaning?

Furthermore, the authors question the wisdom of blindly accepting generative AI systems. While these systems may offer tantalizing promises of innovation and convenience, we must critically examine their implications for privacy, autonomy, and societal well-being. In fact, such systems could aggravate existing inequalities and erode the fabric of social cohesion.

To navigate this complex landscape, we must transcend the limitations of the techno-deterministic narrative and cultivate a wisdom, intelligence, and understanding that does not reduce reality and the future to just more and more technology, no matter how tempting it may be to delegate our lives and our futures to AI.

Ultimately, the techno-determinist narrative presents both opportunities and challenges in shaping our future with AI. By interrogating its underlying assumptions and implications, we can chart a course toward a future where technology serves as a catalyst for human flourishing rather than a determinant of our destiny.

“We should not let the promise of productivity or narrow debates about AI’s ethical implications distract us from the bigger picture. Under the guise of improving humanity by increasing productivity, we risk releasing our ultimate replacement. We should not overestimate the durability of human skills”, Cammers-Goodwin and Waelen wrote.

It is time to answer the call, the call to critically evaluate and re-evaluate the impact of technology on our lives and actively participate in shaping a future that aligns with our values, aspirations, and collective well-being. Human life is too valuable for the human future to no longer be human.

Will we survive the threshold of 2030? Maybe yes, but we must prepare

For some reason, 2030 is presented as an interesting year in the history of humanity, a pivotal moment in which, apparently, we will cross a threshold into a new reality for which we are not prepared and which we can barely describe. And this is neither speculation nor science fiction, but just paying attention to recent advances in science and technology.

For example, in April 2030, NASA's Europa Clipper spacecraft will begin orbiting Jupiter (after a 1.6-billion-mile journey from Earth), passing close to Europa, one of Jupiter's moons, about 49 times to study, with advanced instruments, the possibility of life on that moon, which harbors an ocean.

According to Fabian Klenner, an astrobiologist and expert in planetary sciences at the University of Washington, it is anticipated that the space probe “will detect life forms similar to those on Earth,” either on Europa or on other ocean-bearing moons orbiting Jupiter or Saturn.

For his part, the well-known futurist Ray Kurzweil recently declared that, as he had already anticipated in 1999, by 2029 artificial intelligence will reach a level of intelligence similar to that of humans "due to the exponential growth of technology."

And Kurzweil himself, who over the last 30 years has correctly anticipated 86 percent of his predictions, stated two weeks ago that 2030 could be the year in which humans achieve immortality, thanks to a combination of advances in genetics, nanotechnology, and robotics that will make it possible not only to cure now-incurable diseases but also to rejuvenate people.

In addition, according to Nick Spencer and Hannah Waite, authors of the new book Playing God, extraterrestrial life, human immortality, and truly intelligent (and perhaps conscious) artificial intelligence are joined by other possibly irreversible advances, such as genetic engineering, and by great challenges still unanswered, such as climate change and the destruction of the planet.

All these elements together “make us think” (and, I add, doubt) “about the nature and destiny of humanity,” say Spencer and Waite. And they are right. In a world where corruption reigns and authoritarianism and populism expand, where wars are endless, hunger grows and natural resources are reduced, how will we respond to the great challenges mentioned?

Will artificial general intelligence help us solve our problems? Will immortality give us more time to restore the planet? Will the discovery of extraterrestrial life be the inspiration for global cooperation? Maybe, but history is not in favor of positive results.

While these advancements offer opportunities for progress and innovation and have the potential to reshape how we approach global challenges, they also present ethical, social, and existential considerations that must be carefully navigated to ensure a positive impact on humanity and the world at large.

I believe there is a greater likelihood that artificial intelligence will exacerbate authoritarianism and social inequality, immortality will spark a desire to “reduce” the planet's population, and extraterrestrial life will trigger social unrest and global instability.

Will we survive the threshold of no return of 2030? Maybe so, but we must prepare.

Will we leave our decisions and our future in the hands of “silicon sages”?

The rapid advance of artificial intelligence (created by us, it is worth remembering), added to the constant evidence of our inability to live in harmony with the planet and with others, has motivated a growing number of people to insist that AI should make important decisions about our future and perhaps even govern our lives.

The new situation has been described by Dr. John Vervaeke, a neuroscientist and philosopher at the University of Toronto, as the arrival of the “silicon sage.” For his part, the Spanish science popularizer Ignacio Crespo describes the new trend as the arrival of the “binary augur” (an excellent description, without a doubt).

Regardless of the name used, it is clear that, in the face of our evident inability as humans to solve our own problems, many people (how many is unknown) assume it would be better for AI to make the decisions. And, when it comes to political decisions, there are plenty of reasons and examples indicating that it would be better for politicians not to decide.

But where are we humans? I mean: what good is it for us to be human if we can no longer or do not want to decide for ourselves? In other words, what have we become (or are we about to become) if we even have to delegate, or intend to delegate, our most important decisions to AI?

It seems that it is not enough for us that algorithms decide what we should buy online or what movie we should watch or what message on social media is or is not for us. It seems that it is not enough for us that AI monitors our emails or generates texts and images (almost) at the level of human creators. Now we want to leave our entire lives in the hands of AI.

This situation, this tendency, represents little progress and much regression, because it grants the binary augur, the silicon sage, a level of wisdom and justice above any human being and, therefore, considers it appropriate and even necessary to place all our trust (and bet our future) on the decisions made by AI, that is, by our own creation.

Where, then, are the great traditions of wisdom that for millennia have been transmitted, written, and rethought in almost all cultures around the world? I dare say they have been trapped (that is, devalued and distorted) within the countless “videos” published on social networks, mostly by people who know nothing about these great traditions.

I am not suggesting either going back in time or turning off AI. But, at the same time, I dislike the idea of humanity reaching the point of surrendering to its own creation, of abandoning all capacity to remember, live, and think. In fact, that situation terrifies me.

As Dante said in Canto 3 of the Inferno, those who enter hell are those who forgot the good of the intellect, those who stopped thinking.
