Halvor William Sanden

Saying goodbye to chatbots

The typical chatbot experience is like a psychic cold-read session with C-3PO. The inaccuracy of the information it gives is surpassed only by the frustration and uncertainty about what isn’t there. Is there more I should know? Why can’t I look at the complete documentation? Why do I have to prompt for things I barely know?

We make things worse by renouncing our accountability, talking about chatbots as if they were responsible for themselves. Forcing people to interact with machines the way they would speak with a human is degrading; it expects people to suspend their disbelief and role-play that the machine is like us.

We signal to users how little they are valued and that we cannot be bothered to make good, relevant, available documentation.

The good news is we don’t need to tell a chatbot that we are about to turn it off.

Three bloody problems

Chatbots require users to describe their needs through an inferior interface to an inferior technological solution. A chatbot cannot understand where a user is coming from or going, what they want to achieve or why.

1. People don’t know what they need

People’s needs typically revolve around cases, not singular questions. They need help, not just answers. Even if they think they have a quick question, there can be more to it, things they don’t know that they need to know. And the chatbot doesn’t know either; it has to be told everything. Topical relationships come from statistical probability in the original dataset or from many people asking the same thing. We can make explicit rules and connections, but we could just as well build good documentation around the cases and connect them in proper interfaces, where even users with atypical cases or questions can get sufficient help.

2. Inefficient interface

Chatbots mix several interface wrongdoings, things left behind every decade since the 1960s because they were inefficient or off-putting, but resurrected by people with failing memory and something to sell. We’re mainly talking about anthropomorphic traits: a human name, gendering, off-putting on-brand greetings, a corporate-style avatar, shotgun suggestions and apologies for failing.

A program shouldn’t apologise; it should log its error so we can fix it.
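
A minimal sketch of what that could look like, assuming TypeScript; every name here is illustrative, not taken from any real framework or library:

```typescript
// Sketch: record what failed, with context, instead of sending the user
// an apology string. All identifiers below are hypothetical.

interface HelpFailure {
  query: string; // what the user was trying to find
  step: string;  // where the lookup went wrong
  cause: string; // the underlying error message
  at: string;    // ISO timestamp
}

// Log for the people responsible, so the gap in the documentation
// or the search can actually be fixed.
function logHelpFailure(failure: HelpFailure): void {
  console.error("help-failure", JSON.stringify(failure));
}

// Stand-in for whatever retrieval the help interface does;
// assume it throws when nothing matches.
function findDocumentation(query: string): string {
  throw new Error(`no documentation matched "${query}"`);
}

try {
  findDocumentation("change billing address");
} catch (err) {
  logHelpFailure({
    query: "change billing address",
    step: "documentation lookup",
    cause: err instanceof Error ? err.message : String(err),
    at: new Date().toISOString(),
  });
  // The interface can now fall back to browsable documentation
  // instead of returning "Sorry, I didn't understand that."
}
```

The point isn’t the logging mechanism but where the responsibility lands: with the makers, not the user.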

Most companies don’t try to pass off bots as humans; they try synthesising conversations. We seem to know that having conversations is the best way to help people, but synthesising them has failed for sixty years because it isn’t possible. Without a brain and intelligence, there is no thought behind the words that come out. The conversation only exists in the mind of the willing user; they have to believe in it for it to work. In my experience, users aren’t known for their abundance of willingness.

By attempting to mix interfaces and conversations, we end up with neither.

3. Inferior technological solution

Like many IT products, a chatbot is a technological solution to a non-technological problem. Providing help and documentation requires people who think, possess domain knowledge, recognise potential cases and can combine them from the user’s perspective, something computer programs cannot do.

The impression that a chatbot can be a viable solution is wishful thinking along the lines of wanting the computer to be able to think. It doesn’t understand anything. At best, it calculates a statistical probability based on its limited dataset and approximates an output. Useful for repeatable data and uniform cases but useless for the complexity that is communication.

Understanding required

Conversations

Conversation requires understanding, and for what we’re talking about, only humans are capable of it; support animals do not work in support, as far as I know. Talking to someone experienced is effective because the user gets someone who can state facts, ask follow-up questions and provide insight the user didn’t know they needed; someone who knows the case and guides people through it.

A chat interface can even suggest topics in the documentation if no one is currently available, but packaging it as a bot and defaulting to restricted help is pointless.

Living documentation based on real life

People usually have a case and need insight, such as answers and causes. A case has a story and multiple questions, known and potentially unknown. Having full access to well-made documentation while being guided into relevant areas is more valuable than a handful of restricted suggestions from a bot. Documentation doesn’t even have to be a vast separate site; information can be made available in the relevant parts of a service. It might be about finding the sweet spot between not being in the way and not having to be requested.

We need to continuously analyse what people need and adjust accordingly. Computers can help with this but are unable to do it on their own because they cannot deduce people’s needs from questions or statements. They cannot observe, experience and think; they can record and sort.

Temporary gatekeeper

Some of the chatbots I’ve come across provide an option to speak with a person. In others, the option is only available to people who know how to ask. Some don’t provide that option at all, never drawing a conclusion from what their repeated answers mean.

They keep those who want to talk to someone at bay, and they do the same to those who need to. The same trick that makes some believe they are not using an inefficient search interface prevents others from getting through to people who can help.

Chatbots are better than nothing. Which isn’t a level to which we should aspire. They lead to fewer calls and emails, but don’t tell us much about users’ dismay. The need for help remains.

It’s not so much about getting rid of chatbots as it is about making better solutions. The best thing to do is to keep an eye on chatbot usage; when it declines, outperformed by documentation and better interfaces, retire the bots. Otherwise, they will linger on as expensive gatekeepers of something better.