
Project Vision 21

Transforming lives, renewing minds, cocreating the future


The Moose, the Seal, and Artificial Intelligence: Recognizing New Dangers Before It’s Too Late

Some years ago, I read the sad story of a moose that, somewhere in Canada, was struck and killed by a train. According to the report, the moose was running along the tracks in the same direction as the train and, despite the engineer’s desperate attempts to avoid the collision, the animal never stepped aside. The story ended with this question: Did the moose not see the train?

The simplest answer is no, the moose never saw the train. Experts explained that it was a young animal with no prior encounters with trains, and therefore it could not react as it should have to save its life. In the Canadian forests, moose know how to avoid natural dangers, but not trains.

That inability to recognize danger without prior experience can also be found in young seals in the Arctic. When they see a polar bear for the first time, instead of fleeing, they often stay put, with tragic consequences. A seal that survives such an encounter, or that witnesses another seal's fate, will know to flee the next time.

These examples from the animal kingdom led me to reflect on the possibility that we humans might also find ourselves in situations where something dangerous—whether living or mechanical—approaches us without our awareness, leaving us unable to take precautions to avoid it.

This thought, in turn, reminded me of a recent interview with the Spanish philosopher Francisco Javier Castro Toledo (who is also a criminologist and ethics expert). He warns that humanity has not yet fully grasped or assessed the dangers and challenges that artificial intelligence poses to the present and future of humankind.

One of the reasons is that these dangers and challenges, as far as we know, are appearing now for the very first time in human history. There are no exact or even close historical precedents to serve as reference points—much like the moose facing a train for the first time or the seal in its first encounter with a polar bear.

According to Castro Toledo, despite its benefits, AI could “violate human dignity and restrict individual freedoms,” as well as undermine “people’s ability to make their own decisions,” with “very harmful” consequences—including loss of data privacy and an “educational divide between those who have access to the latest technology and those who do not.”

Castro Toledo enumerated three “ethical dangers” of AI: “We can organize them, non-exhaustively, into three broad areas. First, the ethics of algorithms. Second, the social impacts. Finally, the impact on personal autonomy.” He suggested establishing “comprehensive regulatory frameworks capable of harmonizing the interests of all those affected.”

Whether or not we agree with Castro Toledo’s assessment of AI (and he is a recognized expert in the field), perhaps it would be wise to step back from the danger—something neither the moose nor the seal managed to do in the examples above—before it is too late and we discover that this Spanish philosopher was right all along.
