Halvor William Sanden

The dissonant, unverifiable, dull output of ChatGPT

Suppose you ask artificial intelligence (AI) why you like your favourite band. It will look at other bands you like, find the one closest to your favourite by predefined category parameters, and spit out that band name as an answer. It won’t be entirely wrong – the two have some connection, and you like them both – but it’s also far from correct.
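The matching described above can be sketched as a toy nearest-neighbour lookup. The band names and the feature axes here are invented for illustration; real systems use far richer learned representations, but the principle of "closest by predefined parameters" is the same.

```python
import math

# Hypothetical, hand-picked feature vectors: (distortion, tempo, decade).
# These names and numbers are made up for the sketch.
bands = {
    "Band A": (0.9, 0.7, 0.8),
    "Band B": (0.8, 0.6, 0.7),
    "Band C": (0.1, 0.3, 0.2),
}

def closest(favourite: str, catalogue: dict) -> str:
    """Return the band whose feature vector lies nearest the favourite's."""
    target = catalogue[favourite]
    others = (name for name in catalogue if name != favourite)
    # Euclidean distance in feature space stands in for "similarity".
    return min(others, key=lambda name: math.dist(catalogue[name], target))

print(closest("Band A", bands))  # the nearest neighbour, not an explanation
```

Note what the function returns: another band, never a reason. The question "why do you like it?" is simply not representable in this scheme.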

One band isn’t the source of the feelings another one generates in us; besides, it wasn’t what we asked.

The output of AI is always slightly wrong in a mindless way.

Regurgitated intelligence

Chatbot-like interfaces were a shitty way to get unverifiable information in the nineties, and they still are. ChatGPT, or any similar program, depends entirely on humans feeding it information en masse, regardless of sources, quality or how it is treated and regurgitated. Interaction is like reading a snippet from a poorly written, unedited article.

If I aligned and agreed with AI text output regarding my field of work, I would be very disappointed. At the same time, disagreeing with it is just as disappointing because it doesn’t provide any food for thought. There are no new opinions, arguments or takes – it’s a skewed echo of what I’ve heard hundreds of times.

No job for AI

The AI answers the way humans do when we are surprised by difficult questions. We start by reshaping the question before going off on a minute-long ramble that makes a case for all sides and options – just in case one of them is not the one the questioner is after.

Myriads of “however” and “while” badly conceal the dissonance in the program. If it were a job interview, I would tell the person to take a deep breath and assure them that different opinions or a wrong answer wouldn’t be a reason for not hiring them. I’m interested in the demonstration of independent thought. And AI is not capable of that because it can’t take in new information.

Dull machine rehash

Like AI images (there’s no such thing as AI art), AI text is entirely devoid of originality. No matter how we pose a question, using different words or angles, we get the same replies in the same format. It’s unable to reason, unable to figure out what I’m after, but also unable to give up. The program constantly rehashes the same answers where a human would adjust themselves or ask me to be more upfront.

It quickly becomes incredibly dull. It returns the most common, bland and uninteresting takes on a topic you find interesting; the absence of creativity dooms it.

And we’re not talking about the coloured pencil type of creativity; we’re talking about the kind that requires conscious thought based on experience and understanding of a problem. The AI cannot make new forms, styles and thoughts because it isn’t aware of humans or the world around us. It can’t experience, only be fed.

Sourceless mirror

The program comes without any way to validate the information it provides. It claims not to have direct access to its sources while assuring us that they include books, websites and articles.

When confronted with this, the program excuses itself; it was created this way. It’s almost as if it claims to be some kind of truth in and of itself while pointing to its creators for further evidence. Like other publications making the same claim, AI doesn’t spring into existence; it is a selective mirror created by humans.

Without references, it’s useless as a source of information; it’s just the newest thing that spits out selected, outdated conventions on record. It’s not difficult to imagine how incredibly wrong, prejudiced and misguiding it will become.

Creepy Bob again

An interface is not a person, and a chatbot is not a colleague. The human imitation is as creepy as it is annoying; I refuse to address it as “you”.

But it’s ten minutes of sporadic fun, except that those ten minutes were in 1995 and called Bob. When asked about the difference between itself and Microsoft Bob, the program answers:

I am designed to provide accurate and reliable information […] while Microsoft Bob was intended to be a user-friendly interface for managing computer tasks and accessing information.

Quite the difference.

AI text and image generators can categorise the superficial properties of whatever mulch their creators feed them. They cannot yet record or understand our feelings, and even that would have to be put into some human-made context before it could be misused.