A few days ago, I read an article (I'm intentionally not going to give other details) in which an expert on the subject of AI stated that, due to algorithms, less and less information can be shared on social networks since almost any message that one shares can be automatically removed. I wanted to share that article and my message was automatically removed.
I don't know whether that unexpected and instantaneous end to my minuscule attempt to say something about AI and social networks was due to the content of the article (which, I must add, was professional and in no way controversial), and I will never know, because it turns out that the decision to remove it is not only unexplained (that is, no explanation is offered) but also unappealable.
So, one is left not knowing what happened, what element of the message was not accepted, why it was not accepted, and who made the decision not to accept it. I only know that it is some kind of power or force, invisible to mere mortals, that does what it does (censor) for the “benefit of the community.” Obviously, neither the benefits nor what is meant by “community” are explained.
This idea of a hidden yet quasi-omnipresent, omniscient, and omnipotent AI that decides what can be said and thought, and what cannot, is one I find not only worrying but decidedly horrifying, because it amounts to reducing thought through coercion, eliminating all "undesirable" ideas, something we already know from the history of humanity.
On another level, and allowing for all the obvious distances and differences, training or programming AI so that, based on existing data, it perpetuates social prejudices about what is and is not acceptable, thereby promoting a single way of seeing reality (a kind of "algorithmic orthodoxy"), looks too much like an Inquisition.
Obviously, I am not saying that algorithms are a new incarnation of the Inquisition, but it bothers and alarms me that the methods used to gather information share too many elements: anonymous complaints, secret surveillance, and immediate decisions that cannot be appealed, all of them practices opaque to the general public, yet with the potential to irreversibly ruin lives and futures.
It should be clear that we are not equating social media with the detestable oppression and religious brutality of other times, but it is clear that the algorithms are biased, and it is also clear that the ethical principles and governance models of AI are not entirely clear, perhaps because they are all designed and supervised by only a handful of companies.
Perhaps it is time to revisit the past to learn what kind of safeguards to carry with us, so as to avoid falling into either unhealthy paranoia or repressive self-censorship in this new context, in which the capabilities of new technologies seem aimed at reflecting and enforcing the prejudices of those who "govern" them.