Suppose you ask the so-called artificial intelligence (“AI”) why you like your favourite band. It will look at other bands you like, find the one closest to your favourite by predefined category parameters, and spit out that band name as an answer. It won’t be entirely wrong – the two bands have some connection, and you like them both – but it’s also far from correct.
One band isn’t the source of the feelings another generates in us; besides, that wasn’t what we asked.
The output of “AI”, or a large language model, is always slightly wrong in a mindless way.
Regurgitated intelligence #
Chatbot-like interfaces were a shitty way to get unverifiable information in the nineties, and they still are. ChatGPT, or any similar program, depends entirely on humans feeding it information en masse, regardless of source, quality, or how that information is treated and regurgitated. Interacting with it is like reading a snippet from a poorly written, unedited article.
If I agreed with the “AI” text output regarding my field of work, I would be very disappointed. At the same time, disagreeing with it is just as disappointing, because it doesn’t provide any food for thought. There are no new opinions, arguments or takes – it’s a skewed echo of what I’ve heard hundreds of times.
No job for “AI” #
“AI” answers the way humans do when we are surprised by a difficult question: we start by reshaping the question before going off on a minute-long ramble to cover all our bases.
Myriads of “however” and “while” badly conceal the dissonance in the program. These words are becoming a sure sign that someone hasn’t bothered writing the text, and so we shouldn’t bother reading it. If this were a job interview, I would tell the candidate to take a deep breath and assure them that a different opinion or a wrong answer isn’t a reason not to hire them. I’m interested in a demonstration of independent thought. And “AI” is incapable of that because it isn’t intelligent, doesn’t have a brain and cannot take in new information.
Dull machine rehash #
Like “AI” images (they’re not art), the text output is entirely devoid of originality. No matter how we pose a question – with different words or from different angles – we get the same replies in the same format. It’s unable to reason, unable to figure out what I’m after, but also unable to give up. The program constantly rehashes the same answers where a human would adjust or ask me to be more upfront.
It quickly becomes incredibly dull. It returns the most common, bland and uninteresting takes on topics you find interesting; the absence of creativity and the inability to reason doom it.
And we’re not talking about the nail polish emoji type of creativity; we’re talking about the kind that requires conscious thought based on experience and an understanding of a problem. The machines cannot make new forms, styles and thoughts because they aren’t aware of humans or the world around us. They cannot experience, only be fed.
Sourceless mirror #
The program comes without any way to validate the information it provides. It claims to have no direct access to its sources while assuring us that they include books, websites and articles.
When confronted with this, the program produces excuses! It was created this way. Of course it was. It’s almost as if it claims to be some kind of truth in and of itself while pointing to its creator for further evidence. Like other publications making the same claim, “AI” doesn’t spring into existence; it’s a selective mirror created by humans.
Without references, it’s useless as a source of information; it’s just the newest thing that spits out selected, outdated conventions on record. It’s not difficult to imagine how incredibly wrong, prejudiced and misleading it will become as it is fed its own crap down the line.
Creepy Bob again #
An interface is not a person and a chatbot is not a colleague. The human imitation is as creepy as it is annoying; I refuse to address it as “you”.
But it’s ten minutes of sporadic fun – except we already had those ten minutes in 1995, with a failure called Microsoft Bob. When asked about the difference between itself and Microsoft Bob, the program answers:
“I am designed to provide accurate and reliable information […] while Microsoft Bob was intended to be a user-friendly interface for managing computer tasks and accessing information.”
Quite the difference.
Text and image generators can categorise the superficial properties of whatever mulch their creators feed them. But they cannot yet record or understand our feelings, which would have to be put into some human-made context before they could be misused anyway.