
Project Vision 21

Transforming lives, renewing minds, cocreating the future


15 years of Archives

WEEKLY COMMENTARY

DISCLAIMER

The commentaries we share here are merely our thoughts and reflections at the time of their writing. They are never our final word on any topic, nor do they necessarily guide our professional work.

 

Is there any room left for us in the future now that our digital twins have arrived?

Many years ago, back in the late 1980s, I first read an article that mentioned the “Computerizer,” described as a non-human character from the future with the ability to create realistic replicas or copies of any human being, whether contemporary or from the past. That science fiction is now reality.

The recent virtual concert by ABBA and the announcement a few days ago by the rock band KISS that, from now on, the band will be replaced by its digital twins confirm that our digital copies are here not only to stay, but to replace us. And that situation creates countless philosophical and existential questions.
 

As Ortega y Gasset would say, “I am I and my circumstance,” emphasizing both the inevitable duplication of the “I” and the inevitable contextualization of that “I” in a sociopolitical, historical-cultural, and even geographical framework.
 

From that point of view, even if an avatar or digital twin looks like me, talks like me, and even reacts and responds like me, it can hardly be said that that avatar is “embodied” or “incorporated” in a certain historical or cultural circumstance. In other words, my digital twin is not me nor is it an extension or expansion of me (at least, for now).
 

Furthermore, it is doubtful that digital twins, even if they hypothetically had self-consciousness, could have “my” consciousness. Replicating my thinking process does not mean duplicating my consciousness. Pretending to have my emotions, or even having similar emotions, does not mean having my emotions. Simply put, my mind is still my own.
 

And what about the ethical aspects of avatars? What happens if my digital twin, without my knowledge and even with supposedly good intentions, commits a crime or creates a serious problem? Will I be responsible for what my digital twin does just because he or she looks like me or is a copy of who I am (or think I am)?
 

At the same time, what if the reason I want to have a digital twin is to commit crimes and then accuse my copy of having committed them? Can digital twins prevent bad actors from facing the consequences of their actions?
 

Another great challenge is the issue of human relationships, which, by definition, will no longer be solely human. For example, there may be an event or meeting that I prefer not to attend. Can I send my avatar and let it speak as I would and decide as I would, but without being there myself? Will I later accept its decisions?
 

If digital twins can predict or simulate human decisions, a debate could arise about human freedom versus determinism. 
 

Finally, as is clear from current circumstances, many human beings will prefer to interact only with digital avatars, but not with real people, with the consequent impact on social relations and community cohesion.
 

Digital twins and other technological advances force us to reconsider and reevaluate all our fundamental philosophical ideas about the nature of humanity, life, identity, and all of reality.

 

We have learned to doubt everything so well that, believing in nothing, we believe in everything

About 2,300 years ago, the Greek philosopher Aristotle taught that to determine if something was real, three elements were needed: that our senses were functioning properly, that there were no external disturbances or obstacles restricting our senses, and that other people had the same perceptions in the same place and context.
 

Then, in the 16th century, Nicholas Copernicus proposed that the Earth (and the other planets known at that time) revolved around the sun or, more specifically, around a point close to the sun. But for many people it was difficult to accept heliocentrism due to a great obstacle: Aristotle.
 

After all, if our senses are working properly and it is a clear day, we can see the sun rise in the east and set in the west. Furthermore, every other person in similar circumstances can see the same thing. Therefore, following Aristotle, the only possible conclusion was that Copernicus was wrong.
 

But what the Copernican revolution taught us was not to stop trusting Aristotle, but to stop trusting our senses. In other words, what previously seemed like a guarantee of reality (“seeing is believing”) was no longer so. And there was something else: if Aristotle, the great Greek philosopher highly esteemed during the Middle Ages, was wrong, who else was?
 

Back in 1517, the German monk Martin Luther offered his own answer to that question: everyone. The so-called Protestant Reformation, with Luther as its highest expression, meant not only changing one doctrine for another, but also leaving aside 15 centuries of religious tradition. In other words, tradition was no longer seen as a repository of truth and wisdom.
 

Now, if we can no longer trust our senses or tradition, then what can we trust? Oversimplifying, it could be said that, following Luther, each of us can only trust in ourselves (“salvation” is individual) and in our capacity for reasoning.
But back in the 17th century, Descartes, with his methodical doubt, cast doubt on all our knowledge. And in the 19th century and part of the 20th century, Marx, Nietzsche, and Freud, each in their own way, taught us to be suspicious of rationality and of the (supposed) truth of modernity, as Paul Ricoeur explained in 1965.

 

As if all of this were not already enough for our entire understanding of reality and ourselves to collapse, Darwin set in motion the process of dethroning the human being from the pinnacle of creation, and 20th century astronomy removed us from the center of the universe by accepting that the Milky Way is just another galaxy, not the entire universe, as it was thought a century ago.
 

And now AI is rapidly becoming yet another threat to us. 
 

So, is there any solid foundation left? No. But that does not mean falling into a trivial nihilism; rather, the current situation should be seen as an invitation to develop a new (quantum) narrative, understood as a shared history of transmissible and meaningful experiences. Why? Because narrative elevates us above disorientation and superficiality.

 

The great narratives of the past are over: now we only have fragmented narratives

I recently entered the gym and, as I do every day, I looked on my phone for the app that contains the barcode to mark my presence at the gym. But that day the app did not open. Seeing my predicament, the young receptionist, before I said anything, told me: “If you don't know how to use your email, I'll teach you. It's not difficult.”

I preferred at that moment to ignore her clearly prejudiced attitude and moved to a nearby table to wait for the app to finally open. The young receptionist approached me and told me that it is normal to forget how to access email or not know how to use it. And she repeated that she could help me.
 

At that moment, the barcode finally appeared on my phone and, using the corresponding scanning device, I entered the gym without problems. I didn't say anything to the receptionist because, although I'm always interested in reducing the level of prejudice in the world, this was neither the time nor the place to do it. But obviously I'm still thinking about it.
 

Reflecting on the incident, several key elements became clear to me. And they are the same elements or ways of thinking (or rather, of not thinking) that we encounter again and again in our lives, often without recognizing them. For example, this young receptionist (although her youth does not excuse her attitude) assumed that I was the problem.
 

In the context of narrative therapy, the phrase “The person is not the problem. The problem is the problem” is frequently used as a reminder that a person's humanity should be respected and that no contraption or gimmick should be used to disrespect a person's capacity or dignity, no matter what the problem is. But that approach no longer exists in our society.
 

Therefore, the receptionist in question, without any concrete basis for doing so, assumed that I did not know how to use the phone instead of understanding (without assuming anything) that the app required more time than usual to open. That leads me to another worrying element: believing that technology, be it a smartphone or artificial intelligence, is never wrong.
 

This almost blind trust in technology and this blatant disregard for human dignity arise, I believe, from the disorientation we feel living in this constantly changing world, understanding “disorientation” as the loss of meaning and direction in our lives.
 

In this context, every narrative is fragmented and dialogue becomes impossible, thus giving rise to the unpleasant situation in which humans are “pigeonholed” and “catalogued” into certain categories beforehand (that is, pre-judged), and all our trust is placed in “intelligent” technology with which we have learned to create useless “works of art” with little or no creativity.
 

But what is a fragmented narrative? It is a narrative that no longer reflects reality, nor can it be self-correcting. It only reflects our fragmented, disoriented inner world, thus leaving no room for wisdom or for the future.

 

The future calls us: understanding the signs on our path

Recently, a tragic accident happened on a highway in my town. An elderly man in his nineties entered the highway in the wrong direction and collided head-on with another vehicle. Tragically, both drivers lost their lives. This sad event may have an important message for us.

Local journalists spoke with several drivers who narrowly missed the wrong-way vehicle. They all said they didn't understand the warnings from other drivers and the police, so they ignored them. One driver told the media he thought he was being warned about a speed trap and dismissed it as a mistake or joke. Another thought he was being waved at and waved back. A third didn't understand the meaning of rapidly flashing headlights.
 

If we're honest with ourselves, a rare thing in today's world, we must admit that we often miss signs of things coming our way, not all of them bad. The term “presage” (omen) used to be common for a sign that predicts an event; presage literally means “sniffing the future.” Signs of what's coming are often felt rather than seen or understood.
 

We're not taught to read signs of the future and often disregard lessons from the past, leaving us stuck in the present. We tend to ignore these signs until it's too late and we're confronted with a new reality.
 

So, what messages is the future sending us now? How many are we overlooking because we think we know better, or dismiss them as bad jokes or nonsense?
 

Think about climate change, the Earth's overpopulation, ecological destruction, endless wars, and the militarization of space. And consider the potential of artificial general intelligence by 2030, which could be beyond human control and understanding.
 

What more signs do we need? None, clearly. But by not being open to new possibilities, by denying the future's existence or its knowability, we keep missing the warnings. We think life will always be smooth... until we're suddenly faced with a new, unavoidable reality.

 

We did it! The war on earth has already reached space.

It turns out that in one of the many regrettable wars currently being fought in this world (it doesn't matter which war it is, because they are all wars), a missile from one country shot down a missile from another country in space. According to several journalistic reports, it is the first confirmed time that a war on Earth has reached space.
 

We should all be very proud of having achieved what the opening scene of 2001: A Space Odyssey already anticipated: yesterday we were cavemen throwing bones at each other, and today we are still cavemen, but throwing missiles at each other in space. It is clear that, if this trend continues, in a short time we will be fighting on (and destroying) other planets.
 

All sarcasm aside (it is, in fact, more a lament), it seems that it is not enough for us to ruin the planet and desacralize the few sacred places that remained (if any); now we must also start fighting in and through the space surrounding the Earth. And then we will surely fight over the asteroids and the planets.
 

Perhaps this incomprehensible impulse to destroy ourselves and everything we touch, as well as this senseless vocation to see everything and everyone around us as raw material with commercial value (the asteroid 16 Psyche is valued at thousands of trillions of dollars), are the reasons why intelligent beings from other planets do not visit us.
 

Perhaps Milan Kundera was right when in The Unbearable Lightness of Being he stated that everyone on this planet is a beginner, indicating that if we are here (at any time in history), we are here because we have not yet learned the lessons we should learn to no longer be here.
 

Meanwhile, we continue madly repeating the same cycle of self-destruction over and over again, implementing programs and developing actions that do not benefit any living being on this planet (including the planet itself), or that perhaps benefit only a few who care little about the consequences of their actions.
 

This is not (not even remotely) a conspiracy theory, but a reality seen time and time again throughout human history. For example, the itinerant preacher known as Paul confessed two millennia ago about his inability to avoid doing the evil he did not want and his inability to do the good he did want.
 

And in 1636, Calderón de la Barca reminded us in Life is a Dream how deep our self-deception is, stating that “The king dreams that he is king, and lives / with this deception commanding, / arranging and governing…” and that life is "An illusion, a shadow, a fiction…"
 

And in 1784, in What Is Enlightenment?, Kant lashed out at his contemporaries for living in a “self-caused immaturity,” understanding “immaturity” as “the incapacity to use one's intelligence without the guidance of another.”
 

We did it! We took war to space, but whether we like to admit it or not, we are still cavemen.

 

Not every change is progress nor is every new thing an improvement

I recently spoke with the manager of a small business who told me that he had to update the computer program he uses to manage his company, from the database with customer information to every sale made and every payment made. And the result was disastrous.

After several months of back and forth with the software provider, as well as dozens of visits by that company's technicians, the new program was finally installed. And the best the installers could do was get the program to work at “80% efficiency,” the manager told me.
 

Among other problems, the customer database was not transferred in its entirety. Printers couldn't connect to computers, and when they did, they printed the same material multiple times. And tasks that once required a single step now require multiple steps, some of them somewhat complicated.
 

Even worse, since the installation of the new program had already been completed, it was no longer possible to go back to the previous version. And there was no way to know when the new program would work one hundred percent, or even if it ever would. Meanwhile, the manager said, his business “suffered” because he could no longer serve his customers as quickly as he used to and, because of that, he was losing customers.
 

I think each of us has gone through similar experiences when, for example, an update (unsolicited, of course) is made to the operating system of the computer or smartphone. And it turns out that the computer or phone worked better before the update than now.
 

At another level, that well-known saying that says “We were better before when we were worse” is reflected, for example, in government decisions to launch a new health or education plan, or a new program to combat this problem or that. But usually, the results are disastrous, and things are worse than before the “solution” arrived.
 

And at a global level, we can say that all of humanity is committed, intentionally or blindly, to achieving precisely the opposite results to those sought, creating greater problems than those that previously existed. To quote another saying, the cure is worse than the disease. In fact, in many cases, the cure is much worse than the disease.
 

The problem is obviously not new. Two millennia ago, Saul of Tarsus (Paul) lamented that he did evil he did not want to do, but he did not do the good he did want to do. It seems that we have made little progress since that time and, perhaps, the only difference at this time is that contrary to what Paul expressed, more and more people do want to do evil.
 

It should be clear that I am not suggesting either going back to the past (it is impossible) or rejecting change (it is also impossible). But not every change is beneficial nor does every “improvement” or “update” really mean progress. Let's therefore stop deceiving ourselves by thinking that more technology is good for us. 

 

What knowledge already exists today that we will only understand centuries or millennia from now?

A recent report, by Egyptologist Victoria Almansa-Villatoro and published in Smithsonian Magazine, confirms that the Hittites (in today’s Turkey) gave rise to the Iron Age 3,300 years ago by inventing the procedure necessary to separate that metal from other minerals.

Despite the revolutionary and beneficial nature of this advance and the new knowledge obtained (for example, knowing how to create and control temperatures above 2300ºF), this practice took about 700 years to expand to other regions until it finally reached Egypt and other ancient cultures.

This information, provided by Almansa-Villatoro, led me to ask two questions: what knowledge developed 700 years ago (that is, in the 13th and 14th centuries of our era) is only now beginning to be known in our society? And at the same time, what current knowledge will only be known and shared in 700 years, in the 28th century?

But as I continued reading the article, I came across another example of knowledge lost in time not for centuries, but for millennia.

It turns out that recent research in the pyramid of Pharaoh Unas (or Unis), who reigned in the 24th century BC, led to an astonishing discovery: the Egyptians knew about 4,400 years ago that meteorites fell from the sky or, to put it another way, that meteorites were (are) literally extraterrestrial objects.

And the Egyptians reached that correct conclusion some 4,200 years before European and American scientists accepted, at the beginning of the 19th century (specifically, in 1833), that meteorites reach the Earth from space and not, as was long believed, that they leap from one place on the Earth and fall onto another.

How do we know that the Egyptians were thousands of years ahead of modern scientists? Because they say it explicitly in an inscription on the ceiling of the pyramid of Unas at Saqqara, the first pyramid with texts inside. The inscription reads: “[The king] Unis seizes the sky and splits its iron.”

As Almansa-Villatoro explains, “this knowledge died with the ancient world, along with the associated myths, languages, writing systems and rituals.”
 

In that context, one may wonder whether there is some other ancient knowledge, trivial or profound, written down somewhere, which we have not yet discovered or understood, and which perhaps would be of benefit to us.

And at the same time, what knowledge and wisdom are we developing at this moment so that, if it were rediscovered in more than 4,000 years, it would be celebrated for having been cutting-edge wisdom?

Be that as it may, this whole situation of lost and recovered knowledge generates all kinds of possibilities. For example, the rediscovery of ancient Greek and Roman works of art and manuscripts about half a millennium ago gave rise to the Renaissance, the consequences of which we are still experiencing.

If our current wisdom were lost and rediscovered 1,500 years later, would it spark a renaissance of civilization? I doubt it. I don't believe we are building anything at that high level of eternity.

Will there be a rebirth of humanity that will allow us to avoid the end of humanity?

For millennia, and perhaps since its very origins, humanity has walked on the edge of the abyss. And in times like ours, we even look into the abyss and feel vertigo because, as Nietzsche already explained, “If you stare into the abyss, the abyss stares back at you” (Beyond Good and Evil, §146).

But sometimes, even after understanding that the greatest abyss is the one within us, sparks arise, mere moments of hope that invite us to think that a rebirth, a new launch of humanity is possible. And perhaps the Herculaneum manuscripts, burned almost two millennia ago by the eruption of Vesuvius, are the beginning.
 

In 1752, some 800 manuscripts were found in Herculaneum, near its better-known sister city, Pompeii. But only now, using new technology and artificial intelligence, have experts begun to read those manuscripts. In fact, only a single word has been read so far: “purple,” which may be a reference to the color, to clothing of that color, or to an official who wore it.
 

Reading that single word required 20 years of work by experts at the University of Kentucky, who stated that from now on the process of reading that and other manuscripts will accelerate.
 

Since the manuscripts were part of the personal library of the philosopher Philodemus (a follower of Epicurus), it is believed that among them could be found numerous now-lost works by ancient Greek and Latin authors, such as the tragedies of Sophocles, the books of Livy on the history of Rome, or the writings of Lucretius or Catullus, among others.
 

In that context, Dr. Robert Fowler, a papyrus expert at the University of Bristol in England, recently indicated in an interview with the New York Times that reading Philodemus' library “would transform our knowledge of the ancient world in ways we can hardly imagine.”
 

In fact, he said, the only possible comparison is the rediscovery of ancient manuscripts that led to the European Renaissance of the 15th and 16th centuries (although it began two centuries earlier), giving birth to modernity, an era that is now ending.
 

It may be an exaggeration to say that reading a single word written in Greek in a manuscript burned 2,000 years ago can lead to a rebirth of humanity, that is, to a new way of understanding ourselves, others, the planet, and the universe. But perhaps this small step is the proverbial mustard seed that will later grow to a large size.
 

Perhaps, if we all look seriously at the past, rediscovering all that wisdom that did not reach us (whether intentionally or due to the whims of history) will help us rethink the consequences of our actions and our way of thinking and, therefore, change them.
 

Obviously, one could argue that if all existing wisdom has not yet managed to bring about that change, neither will the books of Philodemus. However, a spark was lit, a step was taken, and hope was reborn.

 

The appearance of knowledge leads us into dangerous self-deception

Recently, an acquaintance told me that, in his childhood, he was forced by a matter of family tradition to learn to read aloud the language of his ancestors. After several years, he finally managed to do it. And although today, now in his sixties, he can continue to faithfully repeat many of those readings, he never learned their meaning.
 

“I am sure that, when I was speaking that language, I said many important things and surely many nice things. But to this day I don't know what I said. They taught me the language, but not the meaning,” this acquaintance explained to me.
 

The situation, although generated in another time and another context, caught my attention because it adequately depicts our current reality: we can pretend that we are reading something, we can pretend that we are saying something, and we can even pretend that we are communicating, but we neither know nor understand anything we are saying.
 

And if you think I'm exaggerating, let me recall the well-known phenomenon of “instant experts,” that is, those who, after watching a video on social networks or receiving a response from some generative artificial intelligence, already present themselves as “experts” on a topic about which they know or understand nothing, but which they repeat as if they knew it.
 

In the case of the acquaintance with whom I spoke, he was at least aware that he did not know the meaning of what he was reading or saying. But in the case of these “instant experts,” dedicated to selling mostly useless knowledge, their self-deception reaches such a level that they not only believe they know, but they believe they can impart their supposed “wisdom” to others.
 

It has been said (I don't remember by whom) that there is something worse than ignorance: the illusion of knowledge. And that illusion of knowledge prevails in our time and is expressed in various ways.
 

For example, people say “I saw it on TV”, or “They posted on social media”, or “I watched the movie”. This is just an illusion of knowing, that is, not recognizing our ignorance and clinging to unfounded knowledge. And it leads to locking oneself inside an “echo chamber” where we only accept what coincides with what we believe and reject anything else. 
 

But neither the world nor history really care about what we believe or how many videos we watch every day to feel informed and wise. Things (all things) constantly change, and it seems clear to me that the only thing that does not change is our fierce determination to deceive ourselves by repeating words and phrases of which we do not know the meaning.
 

Maybe it's time to do what the acquaintance I spoke to did: be honest with ourselves and realize that we really don't know anything and never did. Perhaps then we can begin to have those creative and generative dialogues that Socrates loved so much and that we so urgently need today.

 

 

Are you ready to certify yourself as a human? In a short time, it will be a requirement

Generative artificial intelligence has advanced so rapidly over the past few months that it appears that, in the near future, humans will need to be certified as humans if they want to claim copyright for their creations, for which they will also need to demonstrate that those creations are original and made without contributions from generative AI.

In other words, generative AI has in a short time made current copyright laws obsolete and, at the same time and for that reason, has forced experts not only to rethink those laws but also to rethink what human creativity consists of and, as a consequence, how humans are defined in order to certify that they are (we are) human.

Although the so-called Generative Era began about 10 years ago, according to Shelly Palmer (a renowned expert on the subject), it has only now reached the point where it is not only necessary to rewrite copyright laws, but also to distinguish between different levels of creativity, depending on the level of generative AI involvement in the creative process.

That means, for example, that only a work created entirely by a human, without generative AI intervention, would be eligible for copyright, as long as the human is fully human, that is, lacks cybernetic elements in his or her body (such as augmented vision) that could have helped create the work.

Numerous questions then arise. For example, how will one verify that a creation is fully human? Does there have to be a witness present at every step of the creative process? Does that witness have to be human or could it be artificial intelligence?

Furthermore, if an entirely human creation qualifies for copyright protection, will a creation made entirely by generative AI then qualify for those same rights? And what will happen to mixed creations? Will they lack rights?

There are still many more questions. For example, for a product to be presented as “Made in the United States” (or any other country), that product must have a certain percentage of elements from that country or be assembled in the country. But it does not need to be 100 percent created in the country. Will the same happen with mixed creations? Will there be established percentages?

Will there be different rights depending on whether the products are “fully human”, or “fully synthetic” or “derivative”? And how will humans prove that they are truly and fully human? Birth certificate? Medical exams? Perhaps a supervised creativity test?

Furthermore, if we accept that generative AI's creations are truly creations, do we then stop seeing generative AI, and artificial intelligence in general, as mere tools at our disposal and instead see them as our equals (or rivals) in creativity? And another thing: what can humans create now without the participation of some kind of technology?

Perhaps the focus should not be on adapting laws from the past to the present, but on creating a new future in which, regardless of technology, human creative capacity is cherished and celebrated.

 
