Chatbot Frustration: Why Don't Humans Make Sense?

A chatbot should imitate a human conversation as closely as possible, and understand things that were not said to it directly

For a chatbot to function properly, it must understand the user's intention – not in the literal sense of "intent", the common technical term used within platforms such as API.AI or WIT.AI.

This article is an attempt to understand the user's true intention: the meaning behind their words.

To do that, I will try to create an improvised integration between pragmatics and logic. This will be conducted at a rookie level, yet it's easier said than done. Why? Because we do not express ourselves logically on a daily basis, we do not always put into words what we think, and on top of that, we change our decisions quite often. However, if we make peace with the fact that we are not trying to develop a chatbot with abilities similar to the Terminator (Arnold Schwarzenegger), but one that is efficient for customer service and product support, then we can actually achieve something. The current technology is capable of doing this, provided it is used correctly.

Our realistic chatbot in this post will operate in the tourism industry; with its help, users can book hotel rooms. For those who are not accustomed to drawing logical conclusions, let's try a few mind exercises – drawing logical conclusions from a list of arguments:

Example – Coke 

There are two arguments:

  1. Dani likes all the sweet drinks.
  2. Coke is a sweet drink.

Conclusion – Dani likes Coke.

Is it possible that Dani does not like Coke? No! If both arguments are correct, the possibility that Dani doesn't like Coke does not exist.
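The deductive step above can be sketched in a few lines of Python (the data structures and names are purely illustrative): if both premises hold, the conclusion follows necessarily.

```python
# Premise 1: Dani likes all sweet drinks.
people_who_like_all_sweet_drinks = {"Dani"}
# Premise 2: Coke is a sweet drink.
sweet_drinks = {"Coke", "Fanta"}

def likes(person: str, drink: str) -> bool:
    # Deduction: if the person likes every sweet drink, and the drink is
    # sweet, then the person necessarily likes the drink.
    return person in people_who_like_all_sweet_drinks and drink in sweet_drinks

print(likes("Dani", "Coke"))  # True – the conclusion cannot be otherwise
```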

Example – Sea view 

There are two arguments:

  1. Dani likes to stay in rooms with a sea view.
  2. In the Hilton Tel Aviv, all the rooms have a sea view.

Conclusion – Dani likes staying at the Hilton Tel Aviv.

Are we sure about that? According to logic, yes – but as humans, we know it's not certain. In real life, drawing conclusions is not so easy: we must consider several possibilities at the same time, and usually the arguments are not definite. In addition, we are surrounded by contradictory arguments.

Example – Hotel manager

  1. Joshua, the Hilton manager, is always attentive towards his employees.
  2. Oren works at the Hilton.

Conclusion – Joshua is always attentive towards Oren.

Is that so? Well, not always… What if Joshua is busy right now? Clearly, he will not be attentive towards Oren while he's busy with something else. Let's add an additional argument.

  1. Joshua, the Hilton manager, is always attentive towards his employees, provided he is not busy.
  2. Oren works at the Hilton.
  3. Joshua is busy today, all day long.

Conclusion – Joshua is not attentive towards Oren today.

Is that so? Well, it is not certain in this case either… At times the arguments are not sufficient, and they may contradict one another. Perhaps what Oren wants to discuss with Joshua can't wait – something that would be of major help to Joshua? Or perhaps Joshua is extremely irritable today, and Oren has positive and encouraging words for him? Is it possible that precisely because Joshua is irritable, he might agree to take a short break to listen to Oren's positive words?
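This kind of default-with-exception reasoning can be sketched as a hypothetical rule: "attentive" is the default, "busy" is an exception, and an urgent matter can override the exception in turn. All names and conditions here are illustrative.

```python
def attentive_today(busy: bool, urgent_matter: bool = False) -> bool:
    """Default: the manager is attentive towards his employees.
    Exception: not while he is busy.
    Exception to the exception: a matter that can't wait."""
    if busy and not urgent_matter:
        return False
    return True

print(attentive_today(busy=True))                      # False – busy all day
print(attentive_today(busy=True, urgent_matter=True))  # True – exception overridden
```

The point of the sketch is that each new argument can flip the conclusion, which is exactly why a bot's rules must be decomposed one condition at a time.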

When we design a bot, we must be aware of logical contradictions, vague arguments, and extremely complex situations – and actual discourse scenarios with bots do include complex situations and logical contradictions. The way to deal with this is to dismantle each complex scenario into many simple scenarios, one step at a time, and set rules for each one. All the bots I've encountered fail to grasp that in real life the right conclusions are not drawn easily, and drawing them calls for creative thinking.

Example – booking a hotel

Let's suppose we are working on a chatbot through which it is possible to book hotels. Here is our argument:

  1. People who book a room in a hotel are interested in a room with breakfast included.

Now I will add an argument that contradicts the one above.

  1. There are some people who are not interested in breakfast and prefer to prepare it themselves.

The contradiction exists because the first argument does not even recognise the possibility that someone isn't interested in breakfast. Let's fix that:

  1. Most people who book a hotel room are also interested in breakfast.
  2. Those who are not interested in breakfast prefer to prepare it on their own.

These arguments aren't necessarily exact, as reality is much more complex; anyone who works in hotel reservations will be able to provide accurate statistics on the topic. But to build up from the basics, let's assume that these arguments are accurate and logical.

One immediate conclusion is possible: those who are not interested in breakfast might be interested in a self-catering apartment with a fully equipped kitchen. Considering that, a rule can be determined:

In cases where the customer requests a hotel with no breakfast, the bot should also offer self-catering rooms/apartments with a fully equipped kitchen.
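A minimal Python sketch of this rule, under the simplifying assumption that a booking request carries a single breakfast flag (the function and parameter names are hypothetical):

```python
def offers_for(wants_breakfast: bool) -> list[str]:
    # Every customer is shown regular hotel rooms.
    offers = ["hotel rooms"]
    if not wants_breakfast:
        # Rule: no breakfast requested -> also offer self-catering options.
        offers.append("self-catering rooms/apartments with a fully equipped kitchen")
    return offers

print(offers_for(wants_breakfast=False))
```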

Recently I had the pleasure of working with a travel agency, and the research I conducted with its sales representatives taught me some new things.

Those who book a hotel for only one or two nights are not interested in preparing their own breakfast, even if they book a room without breakfast.

So, we need to add a condition to our rule:

In instances where the customer books a room with no breakfast, but the booking is for three nights or more, the bot should also offer self-catering rooms or apartments with a fully equipped kitchen.
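The refined rule adds the length of stay as a second condition. A sketch, again with hypothetical names:

```python
def offers_for(wants_breakfast: bool, nights: int) -> list[str]:
    offers = ["hotel rooms"]
    # Refined rule: only guests staying three nights or more are likely
    # to prepare breakfast themselves, so only they see self-catering options.
    if not wants_breakfast and nights >= 3:
        offers.append("self-catering rooms or apartments with a fully equipped kitchen")
    return offers

print(offers_for(wants_breakfast=False, nights=2))  # short stay: no self-catering offer
print(offers_for(wants_breakfast=False, nights=4))
```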

Another piece of information, from a representative's personal experience: quite often, customers book rooms without breakfast simply to avoid the extra expense. In that case they do not prepare a meal themselves, but simply eat at a cheaper restaurant.

We can link the fact that the customer books a room with no breakfast to the conclusion that he or she wishes to save money. Therefore, we should offer that customer cheaper options that suit his or her needs better.

Let's summarise the arguments, which of course are not absolute, but highly likely:

  1. Most people who book a hotel room are interested in breakfast.
  2. Those who are not interested in breakfast do so to save money.
  3. Those who are not interested in breakfast, and book a room for three nights or longer, prefer to prepare breakfast themselves.
  4. In a self-catering room/apartment with a fully equipped kitchen, it is possible to make breakfast independently.

Arguments 2 and 3 do not contradict one another, but constitute two possible scenarios for the same situation, with a distribution that can be analysed and characterised. Here is an idea for revised rules:

  1. For people who book a hotel for two nights or less, and are not interested in breakfast – the bot will offer a variety of cheap hotels.
  2. For people who book a hotel for three nights or more, and are not interested in breakfast – the bot will offer rooms with an equipped kitchen.
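The two revised rules can be sketched as a single decision function. The names are hypothetical, and the boundary at exactly two nights is resolved in favour of the cheaper-hotels rule:

```python
def recommendation(wants_breakfast: bool, nights: int) -> str:
    if wants_breakfast:
        # The common case: most guests want breakfast included.
        return "hotel rooms with breakfast"
    if nights <= 2:
        # Short stay, no breakfast -> probably saving money, not cooking.
        return "a variety of cheap hotels"
    # Longer stay, no breakfast -> probably wants to cook.
    return "rooms with an equipped kitchen"

print(recommendation(wants_breakfast=False, nights=1))  # a variety of cheap hotels
print(recommendation(wants_breakfast=False, nights=5))  # rooms with an equipped kitchen
```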

Is this conclusion always correct? Of course not. Is it true to a certain extent? According to the travel rep who deals with customers daily, it is.

Calculating the array of hotels that the bot should offer a customer requires detailed research with the travel reps and sales agents. By drawing on the personnel's expertise, along with a correct – and mainly creative – analysis of the information, it is possible to formulate many arguments and produce effective conclusions.

The more possible scenarios the bot covers, the more accurate it becomes: it will be less prone to glitches and more likely to please the customer. Note that I haven't discussed the factor of verbal understanding; that requires a separate article.

We have reached reasonable operative conclusions in this article even without using complex rules of logic. The reader might ask: hold on – according to these conclusions, the bot will offer a limited number of choices, which may exclude all the other options the agency has to offer.

My answer is: the realistic chatbot, in its essence, plays a different game – the "personal agent" game. The personal agent should pinpoint a few possibilities for the customers, in a way that suits their needs. If someone prefers browsing many options, they can use dedicated booking websites; the bot is not supposed to, nor would it be able to, compete with that kind of interface.

To summarise, I will use the same logic to describe an excellent chatbot:

  1. An excellent bot is a realistic chatbot that can imitate a human conversation as closely as possible.
  2. A human is capable of understanding things that were not directly said.

Conclusion – an excellent chatbot will also be able to understand things that were not said to it directly.
