AI machines do not have “hallucinations”. They have algorithmic crap

By | May 9, 2023

Amid the many debates surrounding the rapid rollout of so-called artificial intelligence, there is a relatively obscure skirmish centering on the choice of the word “hallucinate.”

With this sentence Naomi Klein opens a very interesting opinion piece in The Guardian, and she continues:

This is the term that AI architects and boosters have settled on to describe responses served up by chatbots that are entirely fabricated or flat-out wrong. Like, for example, when you ask a bot for a definition of something that doesn’t exist and it, quite convincingly, gives you one, complete with made-up footnotes.

Photo: Julien Tromeur / Unsplash

A ready-made mythology

“No one in the field has yet solved the hallucination problems,” Sundar Pichai, CEO of Google and Alphabet, said recently in an interview.

This is true, but why call these errors “hallucinations”? Why not algorithmic crap? Or glitches? Well, “hallucination” refers to the mysterious capacity of the human brain to perceive phenomena that do not exist, at least not in conventional materialist terms. By appropriating a word commonly used in psychology, psychedelia, and various forms of mysticism, AI’s advocates, while acknowledging the fallibility of their machines, are simultaneously feeding the field’s most cherished mythology: that by building these large language models and training them on everything we humans have written, spoken, and visually represented, they are in the process of birthing a living intelligence that is on the cusp of triggering an evolutionary leap for our species.

Warped hallucinations are indeed afoot in the world of AI, however; but it is not the bots that are having them. It is the tech CEOs who unleashed them, along with a phalanx of their followers, who find themselves in the grip of wild hallucinations, both individually and collectively. Here I define hallucination not in the mystical or psychedelic sense, mind states that can indeed help us access deep, previously unperceived truths. No. These people are simply tripping: they see, or at least claim to see, evidence that is not there at all, and even conjure up entire worlds in which their products will be put to use for our universal education and betterment.

The solution for everything

Generative AI will end poverty, we are told. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to the mechanization of late capitalism. It will end loneliness. It will make our governments rational and responsive. These, I’m afraid, are the real AI hallucinations, and we have all been hearing them on repeat ever since ChatGPT launched late last year.

So writes Naomi Klein, and she continues:

There is a world in which artificial intelligence, as a powerful tool for predictive research and for performing tedious tasks, could be used to benefit humanity, other species, and our common home. But for that to happen, these technologies would need to be developed within an economic and social order very different from our own, one that aims to meet human needs and protect the planetary systems that support all life.

But as we well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit, from both humans and the natural world, a reality that has brought us to what we might call capitalism’s “techno-necro” stage. In this reality of hyper-concentrated power and wealth, artificial intelligence, far from living up to all those utopian hallucinations, is far more likely to become a fearsome tool of further plunder and dispossession.

Because what we are seeing is the richest companies in history (Microsoft, Apple, Google, Meta, Amazon…) unilaterally hoarding all the human knowledge that exists in digital form and locking it into proprietary products, many of which will take direct aim at the very people whose work “trained” the machines, without their permission or consent.

This shouldn’t be legal.

Photo: Gerard Siderius/Unsplash

Theft of our work

Painter and illustrator Molly Crabapple is helping to lead a movement of artists condemning this theft. “AI image generators are trained on enormous data sets, containing millions of copyrighted images collected without their creators’ knowledge, let alone compensation or consent. It is effectively the greatest art heist in history. It is being perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It’s daylight robbery,” she says in a new open letter she co-wrote.

The trick, of course, is that Silicon Valley routinely calls theft “disruption,” and very often gets away with it. We know this move: charge ahead into lawless territory, claim the old rules don’t apply to your new technology, scream that regulation will only help China, all while establishing your facts firmly on the ground.

By the time we all get over the novelty of these new toys and begin to take stock of the social, political, and economic wreckage, the technology is already so pervasive that the courts and legislators throw up their hands.

We saw it with Google’s scanning of books and art. With Elon Musk’s colonization of space. With Uber’s assault on the taxi industry. With Airbnb’s attack on the rental market. With Facebook’s carelessness with our data. Don’t ask for permission, the disruptors like to say; ask for forgiveness.

Have we grown too comfortable?

In The Age of Surveillance Capitalism, Shoshana Zuboff details how Google’s Street View maps circumvented privacy rules by sending out camera-equipped cars to photograph public streets and the exteriors of our homes. By the time the privacy lawsuits began, Street View was already so ubiquitous on our devices (and so attractive and convenient) that few courts outside Germany were willing to intervene.

By now, most of us have heard of the survey that asked AI researchers and developers to estimate the likelihood that their advanced systems would cause “human extinction or a similarly severe and permanent disempowerment of the human species.” Chillingly, the median answer was that there was a 10% chance.

How can one rationalize going to work to promote tools that carry such existential risks? Often the reason given is that these systems also carry enormous potential benefits; it’s just that those benefits are, for the most part, hallucinatory.

*With information from Naomi Klein’s op-ed, published in The Guardian

*Naomi Klein is a columnist and writer for Guardian US. She is the best-selling author of No Logo and The Shock Doctrine, and professor of climate justice and co-director of the Centre for Climate Justice at the University of British Columbia.

