
Project Vision 21

Transforming lives, renewing minds, cocreating the future

15 years of Archives




At what point does technology become magic? When we become irrational

Although only a few decades have really passed, I remember those now prehistoric and obsolete times when I first saw a color television, used a photocopier, enjoyed a hologram, and held a cell phone in my hand (which was the size and weight of a brick).

But none of those advances, which at the time seemed wonderful and unsurpassable to me, can be compared with current technology, which, thanks to its achievements (read: artificial intelligence), is getting dangerously close to magic.

Arthur C. Clarke had already warned that any sufficiently advanced technology would be indistinguishable from magic. In other words, at some point in technological progress, technology begins to be perceived as a mystical and supernatural force that generates inexplicable phenomena.

Or, to put it another way, technological advances blur the lines between advanced technology and magic, causing those usually neatly separated concepts to intersect in such a way as to become nearly indistinguishable.

Consider, for example, the possibility of an imminent artificial intelligence superior to human intelligence. Or the ability to edit (manipulate) our DNA. Or the fact that, due to technology, humanity is evolving into four different species of human beings, including synthetic humans and digital humans.

At some point, machines (for example, airplanes) achieved what was previously considered impossible. The same is now happening with artificial intelligence and quantum computers. And, in many cases, achieving the impossible generates exactly the same "mystical effect" that spells, incantations, or potions once generated.

In this context, we must not forget that some of the early modern inventions, such as telephones and the photographic camera, were initially created to communicate with the other world and to document it visually, respectively.

Be that as it may, the arrival of virtual reality (ever more real than reality itself) takes the intersection of technology and magic to a new level, because now we can interact with a "parallel world" that does not exist in the physical realm but is no less real for that. Within that "magical world" we can be what we want and be where we want.

There is no doubt, then, that although technology and magic are still separate concepts at this time, their growing interconnectedness makes it increasingly difficult to separate one from the other. Is ChatGPT a new oracle for our time? Will we have a magic wand that cures everything?

But where does this "magical thinking" come from? As the Argentine historian Ariel Petruccelli explained in a recent essay, we are in an irrational age, and that is dangerous:

"In our scientific-technical world, magical thinking is a guarantee of subordination to those who dominate science and technology, thus creating a high risk of disaster if those who control the techno-scientific complex allow themselves to be won over by magical or irrational beliefs," says Petruccelli, citing the example of well-known tyrannies and devastating wars in the first half of the last century.

Godlike technology, “magical” beliefs, irrational thinking. What is happening to us?


We no longer know or cannot distinguish the true from the false

I remember reading some time ago the story about Dolly Parton one day entering a Dolly Parton look-alike contest. She lost. In fact, they told her that she didn't imitate Dolly Parton well enough and gave the award to a man dressed like her. That story illustrates a key element of our times: we prefer the imitation to the original.

This topic is obviously not new. It could be said that it is as old as our civilization and as humanity itself. Human beings, it seems, have always had the desire to separate reality from illusion, the real from the imaginary, knowledge from opinion, and the actual from the fictional. But rarely have we been successful.

Repeating and paraphrasing an ancient teaching that already appears in the Talmud, the French-born writer Anaïs Nin wrote last century that "We do not see things as they are, but as we are." Or, if you prefer, as Ramón de Campoamor said, "Everything depends on the color of the glass you are looking through."

It could be said, following Campoamor, that this disappearance of the line between reality and illusion is only possible "in this treacherous world", that is, in a world in which "nothing is true or false", thus anticipating in the 19th century the post-truth era of the 21st century.

In this context, one cannot fail to mention the tango Cambalache (Secondhand Store), written by Enrique Santos Discépolo in 1934, where he correctly affirms that we live in the era when "Everything is the same, nothing is better / The same an ignoramus as a great teacher."

In other words, we live in a time when it is not the case that the truth has disappeared, but that the truth (and, therefore, the lie, because they go together) has become irrelevant.

So, not only do we see things as we are, we not only accept that nothing is true or false, and we not only believe that everything is the same and nothing is better, but we are not interested in how things truly are. We have become arrogantly ignorant, that is, we know we are ignorant, but we don't care. In fact, we believe we have the right to be.

In this situation, the appearance, the shadow, the illusion, presented uninterruptedly before our eyes on every screen to which we have access and repeated ad nauseam in each post and message on social networks, becomes our reality, because, as Carl Jung said, reality is that from which we cannot be separated.

Even more, Jung maintained that we cannot heal what we cannot separate ourselves from, thus inviting us to distance ourselves both from what happens to us and from our responses, reviving a teaching already taught by the Stoics: our "humanity" resides between our reactions and our responses.

Two millennia ago, some rabbis claimed that the only difference between heaven and hell is our own attitude. Perhaps, then, our mind and heart have become so ossified that that is why we live in hell.

In this hyper-connected age, we are more isolated than ever

South Korean philosopher Byung-Chul Han affirms that we live in a time in which we exploit ourselves and, moreover, do it with pleasure. Therefore, we live continuously stressed, overwhelmed, and exhausted. A recent study adds details to that observation.

According to that report, we live in a society where we have lost the ability to "actively listen" to others. That is, we listen just to download information, or to try to win an argument, or to wait for the other to shut up and say what we want to say. But we do not listen to understand, much less generate a creative dialogue.

No matter what or whom we choose to hold accountable for that situation (the pandemic, technology, social media, our lifestyle, culture), the results of that miscommunication are clear: we live in social isolation, without positive relationships, with no one to teach us, with constant socio-economic obstacles, and without being able to solve our own problems.

Social isolation, the report says, "erodes our ability to grow" because it reduces our opportunities to connect with others in a positive way. In fact, for many people, being with family, friends, and colleagues is a "major problem" and a "great pain" because no one knows what to say and no one listens to anyone else.

In addition, contrary to what happened with past (but not so distant) generations, there are no longer older or more experienced people who serve as advisors or mentors in responding to life's challenges. In fact, they are just as overwhelmed as the rest of us, and it often seems "absolutely impossible" for them to offer help or advice.

Simultaneously, no matter how much income a person has or how good and stable their job is, everyone knows that such a situation can disappear at any time due to the volatility of today's world. A pandemic, a war, an attack or a natural disaster, and everyone, rich and poor, can suddenly find themselves with nothing.

For many people, the “intense emotional pressure” caused by just thinking about these unpleasant possibilities is enough to stop them from being effective and healthy in their relationships at home, at work, and among their social contacts.

When all of these come together in a person's mind and heart, the result is a lost or diminished ability to analyze and solve problems. In other words, although almost 9 out of 10 people affirm that knowing how to solve one's own problems is the "main skill" one can and should develop, few have that ability. Indeed, many people can no longer solve their personal or work problems without help.

As Han said, “Today we search for more information without gaining any real knowledge. We communicate constantly without participating in a community. We save masses of data without keeping track of our memories. We accumulate friends and followers without encountering other people. This is how information develops a form of life that has no stability or duration.”

We only have a decade left before we lose our decision-making capacity

News about the new future breaks so fast that it feels like you're speeding through one sci-fi movie after another. But it is neither a movie nor fiction, rather it is a new reality that we still do not fully see or understand, but that already affects us.

For example, Australian soldiers can already control robot soldiers with their minds, that is, just by thinking about it. And Austrian and Spanish scientists discovered how to travel from the future to the past to "rejuvenate" atoms. Additionally, digital humans are anticipated to be the employees of the future. And AI can already fly fighter jets without human help.

As if all this were not enough, a new report recently published by the Pew Research Center, based on interviews with renowned scientists, indicates that the majority (56 percent) of these experts believe that by 2035 AI will be so intelligent that it will be practically impossible for human beings to make decisions on their own.

Technological telepathy. Time travel. Digital humans. Autonomous weapons of war. AI with natural language. Phygital world (physical-digital). Direct and bidirectional communication with animals. Counseling for mixed couples (one human and one robot). The list is endless.

Meanwhile, we wonder what will happen to the bad guy in the novel, who will win the next soccer game and what we will eat tonight. In a few years (before 2035), we won't even ask ourselves those questions because the AI will decide on novels, football matches, meals, and everything else related to human life.

Obviously, successive AI generations evolve much faster than successive human generations. Therefore, it is not unthinkable that in the near future AI will learn to travel from the future to the past to correct a situation from the past, as the aforementioned European scientists already do. Like "Terminator", but in real (very real) life.

But before anyone says that Terminator is just a movie, last January scientists from the Chinese University of Hong Kong announced the creation of a (small, for now) robot that can change from solid to liquid and back to solid again using electromagnetism. In fact, that robot can easily slip through the bars of a cell.

Our blindness to the new reality can only be compared to living inside a cave, as Plato taught, perpetuating the self-delusion of believing that what we see is all reality and the only reality. But whereas in Plato's allegory the cave dwellers were prisoners, in our time the "cave dwellers" are arrogantly ignorant.

The question then arises: what will be left of humanity when there is nothing left of humanity? How will we remember what we once were and how will we know what we could become if we don’t ask questions?

The more time we spend dodging those questions, the better the chances that we stop asking them and the greater the possibility we will stop thinking altogether. According to the Pew Research Center’s report, AI (ChatGPT) already knows this. We, unfortunately, do not.

How long can we assume things and delude ourselves without accepting reality?

I recently needed the services of an electrician, and when I called him, he told me he would come the next day at 10 am. The next day, more than half an hour after that time, the man called me saying that during all that time he had been standing in front of my house and that he had knocked on the door and rung the bell, but that I had not answered.

I told him that he was probably at the wrong house, but he didn't believe me. So I went out to see what was happening and saw the electrician several houses away, talking to a neighbor who surely told him he was at the wrong house. I waved him over, and the electrician finally came to my house to make the necessary repairs.

But before starting he congratulated me on being Russian and asked what part of Russia I was from. I told him that I am not Russian, nor do I come from Russia, nor do I speak Russian. Again, he did not believe me.

At that moment, I was already thinking of asking him to leave because he had first gone to the wrong house and then refused to accept that I was not (I am not) Russian. But there was still one more topic that, I must say, took me by surprise. The electrician congratulated me on my retirement. In fact, he asked me what I was doing now that I no longer worked.

I asked him why he had asked me that question, and he told me that since it was a weekday in the middle of the morning and I was at home, that meant that I was no longer working and had retired. I explained to him that I was at the house because the electrical problem was in the house.

After that exchange, my patience was already beginning to waver. And that patience ended when the electrician told me that my name was misspelled (as if I didn't know how to write it) and even suggested that I check the documentation of the house to see if I was really living at the correct address. So, I did ask him to leave.

The next day another electrician came and in a matter of minutes he detected the problem and in less than an hour he had solved it, without questioning my nationality, ethnicity, address, name or employment status.

But the first electrician is a remarkable example of the attitude of many people who cling so tightly to the belief that they are always right that they do not change that belief even when shown clear evidence to the contrary. It's like the man who was driving the wrong way down a busy avenue while insulting the other drivers for not going in the right direction.

Clinging to our beliefs easily leads to self-deception and, therefore, to disconnection from reality. And it all starts when we convince ourselves that we are always right.




The impossible is only impossible for those who believe so

A little over a decade ago, in a philosophy class focused on technological change, I told my students that quantum computing would arrive in a short time and that its arrival would cause a revolution in our lives. I remember the incident because of the students' reaction: they began to laugh without being able to contain themselves.

When she finally regained her composure, one of the students said something to the effect of "That's never going to happen" and, if I remember correctly, even suggested that I stop reading or watching so much science fiction. (The source for my class was research published at the time by Washington State University experts, not a science fiction story.)

Beyond the incredulity of those students about the development of quantum computing, the truth is that not only has that future already arrived, but also that we have already overcome it or are about to do so.

A recent report led by Dr. Winfried Hensinger, a professor at the University of Sussex in the United Kingdom, indicates that his research has allowed progress in the creation of multi-tasking quantum computers. Those computers would be as far ahead of today's quantum computers as today's quantum computers are ahead of the classical computers we use.

In fact, Hensinger said, the new era of "quantum super computers" means having computers so "extremely powerful" that, according to this expert, "they would be able to solve the most important problems" of our society.

I am sure that if my students from a decade ago heard me say that in the near future supercomputers will solve our main problems, they would laugh again and affirm that this would not happen. But refusing to see the future does not stop the future. The future arrives whether we are ready or not.

At one time it was said that a boat trip around the world was impossible. It was said that it was impossible for the earth to move. It was said that nothing heavier than air could fly. It was claimed that thinking about going to the moon was crazy. It was believed that in the whole world there was room for only five computers. It was held that there was never water on Mars. (The list is endless.)

Maybe it's time to revise and discard all those beliefs that, no matter where or when we have acquired them, serve only as those blinders that horses wear so that they can only see in a certain direction. In other words, we must open our minds, because the future is not the day after today, but an expansion of consciousness.

But how far does the realm of the supposedly impossible extend? This is one possible answer: direct communication with distant extraterrestrial civilizations. And it's not science fiction.

A few days ago, scientists from the International Center for Radio Astronomy Research (ICRAR) at Curtin University, Australia, claimed that a signal detected 4,000 light-years from Earth could be “proof of extraterrestrial life”.

Order is just the chaos we have become used to

A professor I once met in college repeated quite often that order is just the chaos we are used to. In other words, what we consider "normal" is "normal" only because we see it that way or accept it, although it may not be something so "normal" or "ordered" for other people.

For example, I remember the first time I visited a certain city in Latin America where the city's two main streets crossed at an intersection where, at that time, there was no traffic light and no one was directing traffic. For me, trying to cross or turn at that intersection was, to put it in one word, chaotic.

After several days in that city, going through that intersection at least twice a day, the chaos began to disappear, and order began to emerge. In fact, there was an order as to which vehicles crossed or turned first and in which direction. And there was something that caught my attention: everyone respected that order, and there were almost no accidents.

Although the magic of that intersection no longer exists (now traffic lights have been installed and the presence of traffic police is constant), I always remember that "chaotic" experience because, if I had clung to what I considered a "normal" circumstance at any intersection, then I would never have been able to drive through that intersection.

The opposite is also true: the "normal" way of driving in that city is not at all "normal" in the city where I live, and the two ways cannot (and should not) be interchanged.

In other words, concepts like "normal", "order" or "chaos" are always relative, both culturally and historically. The problem arises when we turn them into something absolute and then mistakenly believe that what is "normal" for us is the only true example of normality and all other behavior is "bad" or "wrong".

Since the sense of "normalcy" is absorbed from our parents, family, and surrounding society from early childhood, that "normality" becomes so ingrained in our unconscious that many times we not only fail to challenge that notion of "normality" but do not even recognize that it exists within our mind.

As Hegel said (paraphrasing), what is known, precisely because it is known, is never known. For this reason, we go through life imposing (consciously or unconsciously) our "normality" on others, and those others impose their "normality" on us. That is, each one believes that their own chaos, which they take as "normal", should be similar for all.

In the world we live in, where problems are astronomically more difficult and complex than crossing an intersection without a traffic light, those unconscious and uncritically accepted ideas of "normality" (our own or others'), of "order" and of the "acceptable" are not only inoperative but, in fact, dangerous, since they easily turn into fanaticism. (There are many examples of that kind of behavior.)

Apparently, order and normality only exist in our minds.


When virtual reality surpasses real reality, little remains of the real reality

Recent scientific reports are interesting and at the same time truly alarming because, although they seem to be taken from a science fiction movie, they are real situations that blur the boundaries between reality and illusion or fantasy.

For example, the wildly successful artificial intelligence known as ChatGPT passed an MBA exam at the Wharton School of the University of Pennsylvania a few days ago. Although ChatGPT did not get top marks (it answered 80% of the questions correctly), it provided "excellent explanations" for its answers.

At the same time, in another experiment, ChatGPT answered enough questions correctly to pass the Multistate Bar Exam, a multiple-choice test that law graduates must pass before officially beginning to practice law.

As if that weren't enough, ChatGPT also passed (albeit with minimal marks) all three parts of the US Medical Licensing Examination, thanks to demonstrating "high levels of consistency and creative ideas in its explanations."

Some time ago, a profile created on LinkedIn using only images and text generated by artificial intelligence was so attractive that (unaware that "Katie Jones" was an artificial intelligence and not a real person) US government officials and people with high level of business and social influence connected with that profile.

"Katie Jones" posed as a representative of a group of experts in Washington DC. That give “her” the opportunity to connect with a ranking official of the United States Secretary of State, with the office of an influential senator, and with a renowned economist.

It is worth mentioning that since that incident the creative capacity of artificial intelligence has progressed so much that "images created by AI now look more real than genuine photos," warned Dr. Manos Tsakiris, Professor of Psychology and Director of the Centre for the Politics of Feelings at Royal Holloway, University of London, in a recent article (January 23, 2023).

And there are more examples. On January 25, Congressman Jake Auchincloss (D-Massachusetts) gave a speech on bipartisan legislation written entirely by artificial intelligence.

For his part, Rabbi Joshua Franklin, of the Jewish Center of the Hamptons (New York), delivered a complete sermon created by ChatGPT a few weeks ago, on the story of Joseph in Egypt. The sermon was very well received, and if Franklin had not revealed that he was not its author, his congregation would never have known.

In another development showing science fiction has become science fact, scientists from China and the United States invented a melting liquid robot that can escape from a cage.

So, what does this new situation mean for us? According to Dr. Tsakiris, “The transition to a world where what’s real is indistinguishable from what’s not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.”

From another perspective, the philosopher Markus Gabriel affirms that "we do not evade reality by deceiving ourselves or being deceived with respect to it, since the reality is that from which we cannot distance ourselves."


We are more concerned with what a singer does not say than with what a philosopher says

On social networks I recently found (without looking for it) a post that included a phrase about friendship, attributing it to Aristotle. The point is that Aristotle never said that phrase. To my astonishment, the next message was a phrase attributed to a well-known singer. Underneath, the caption read: "Things (singer's name) never said."

The unexpected situation made me reflect on the current state of our society, in which thoughts Aristotle clearly never expressed can be attributed to him, but if thoughts are attributed to a certain singer, then it must be clarified that the artist being quoted never said what is attributed to him.

In other words, one can lie (knowingly or not) with respect to the sayings of one of the founding thinkers of the way of thinking still governing us, but God or the Universe (please, forgive me, Spinoza) will not allow us to lie (knowingly) about the sayings of a famous singer.

Obviously, social media is a kind of lawless land where anything goes. And, in this context, one of the salient elements of the postmodern era appears in all its brilliance: the inevitable presence of post-truth.

As the Spanish philosopher Adela Cortina recently explained, speaking of post-truth does not mean that the truth no longer exists, nor does it mean that the truth has become irrelevant. Post-truth means that lying has been trivialized, banalized. It has become so commonplace that no one (or few) cares anymore.

Lying (in all its forms and on all its platforms) is now something so frequent, habitual, and daily that, even if lying is recognized as such, lying has become something almost insignificant and even "normal". (A recently elected congressman is a clear example of how normal the post-truth has become.)

For this reason, the sayings of Aristotle (or Socrates, Plato, the Bible, Abraham Lincoln, and many others) can be distorted and turned into superficial memes with no consequences and no need to correct the distortion or the misattribution.

But if the person being talked about is a famous singer, an actor, an "influencer", then it is better to clarify that the phrase attributed to them was not pronounced by that person. As María Elena Walsh said, we live in the world upside down. And as the tango “Cambalache” (Secondhand Store) says, “An ignoramus is the same as a great professor”.

Post-truth, in short, could be understood as the breakdown of all significant dialogue or, from another perspective, the enthronement of "opinion" (understood as a personal point of view uncritically accepted as valid) as the basis of any attempt at dialogue or conversation.

But, in this way, there is no possible dialogue, because everything is reduced to a succession of interspersed monologues in which each person does not listen to the other, but only downloads information to contradict the other person. This immature attitude of wanting to be right leaves no room for personal creativity or for the communitarian co-creation of the future.

The creator of new technologies does not always understand their uses and consequences

When Philo Farnsworth invented television in the late 1920s, he did so with the goal of educating those who couldn't attend classes in person, not broadcasting soap operas. And when Mark Zuckerberg created Facebook, his purpose was to share stories and photos with family and friends, not to send hints to one's ex or copy memes.

There are other similar examples (the Internet was created as a communications system in the event of a nuclear attack), but these are enough to come to this conclusion (which is not mine, as will become clear in a few paragraphs): the inventor of a technology is not always the best judge of the usefulness or uselessness of that technology.

That debate reached a new level of urgency because of a recent report released by four universities (Tennessee, Colorado, Michigan, and Florida International) on how the emergence of new technologies, from steam engines to artificial intelligence, has an unanticipated impact on people's lives and jobs.

More specifically, artificial intelligence systems such as ChatGPT (which can create entire essays in a matter of seconds from a question) or DALL-E (which generates images from a short description) appear to have a negative impact on human creativity and on the way we communicate.

“These new AI tools also have downsides. First, they could accelerate the loss of important human skills that will remain important in the coming years, especially writing skills,” said study co-author Dr. Lynne Parker of the University of Tennessee.

Parker's mention of the possible loss in the next few years of the ability to write immediately made me think of a similar debate, although focused on the negative aspects of writing, that Plato puts into the mouth of Socrates at the end of his dialogue Phaedrus (274d-275b).

For our purposes in this column, it suffices to say that Socrates narrates an Egyptian myth in which Theuth (the Egyptian god of writing and wisdom) offers King Thamus one of his inventions, writing, which he describes as a kind of "drug" that makes human beings wiser and improves their memory.

But Thamus rejects that offer, stating that writing is a "drug" that will lead people to "neglect memory" and believe that, because they have read things without learning them, they have become "omniscient", when in fact "they are totally ignorant" and "only appear to be wise".

In fact, Thamus reproaches Theuth for having decided by himself (regardless of Theuth being a god) the benefits of his creation, because "the inventor of a technology (technique, art) is not always the best judge of the usefulness (benefit) or uselessness (harm) of that technology.”

Two and a half millennia after that myth and that debate, we clearly have not advanced an inch from the truth expressed by Plato, because now, perhaps more than ever, arrogant ignorance and apparent wisdom predominate. Did I mention that the “most important goal for artificial intelligence is understanding what it means to have a mind”? (Article by neuroscientist Michael S.A. Graziano in the WSJ).
