The reality of automating customer service chat with AI today – VentureBeat

Of all the fields in the chatbot-crazed world, customer service is one of the prime targets for automation. Virtual Customer Agents (customer-service-focused bots, or VCAs for short) are intelligent systems that understand what users ask via chat and provide adequate answers to solve users' issues. In the context of this article, when we talk about VCAs we mean systems that understand natural language and texting, not systems that merely operate in a rule-based, multiple-choice environment. In short, these VCAs compete directly with humans to resolve customer service issues.

The current reality of chatbots nicely counterbalances all the hype that AI is getting and also offers guidance as to where things need to develop. Having deployed VCAs that autonomously answer questions and having attended major customer service automation and chatbot summits, we have distilled the key learnings that form the basis of any VCA development today:

Ideally, you can train the VCA with thousands of questions (complete with misspellings, grammatical errors, and pidgin dialects) from actual users of the product or service. The reality is that most companies do not have existing chat history data readily available for training. In that case, the options are either to artificially generate thousands of different questions or to accept the shortage of input data and hope to gather more once the VCA goes live. Neither solution is ideal, and even when companies do have a chat log history, it is unlabeled: the questions in the chat logs are not paired with intents. Fully manual pairing of thousands of questions to intents is time-consuming. A solution we have developed is a set of semi-autonomous question-intent pairing tools that considerably decrease the human effort needed to label data. Such an approach makes working with customer data more efficient and reduces the labeling bottleneck.
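The article does not describe the internals of these pairing tools, but one common way to cut labeling effort is to cluster similar questions so a human assigns an intent per cluster rather than per question. The sketch below illustrates that idea with TF-IDF vectors and k-means; the sample questions and cluster count are invented for illustration.

```python
# Sketch of semi-automatic intent labeling: group similar questions so a
# human labels clusters, not individual questions. Illustrative only --
# not AlphaBlues' actual tooling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = [
    "i forgot my pasword",          # misspellings are typical of real logs
    "password reset pls",
    "how do i reset my password",
    "what is my account balance",
    "show balance on my account",
    "balance check",
]

vectors = TfidfVectorizer().fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

clusters = {}
for question, label in zip(questions, labels):
    clusters.setdefault(label, []).append(question)

# A human now names one intent per cluster instead of per question.
for label, members in sorted(clusters.items()):
    print(label, members)
```

In practice the human-in-the-loop step also merges and splits clusters, but even a rough grouping like this shrinks the labeling workload by an order of magnitude.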

With all the advances in machine and deep learning, most algorithms largely remain pattern-based approaches that extract intent from a large corpus of previously seen chat history. Users' questions to banks differ from questions asked of telecom companies, and there is no off-the-shelf algorithm that fits both cases. An optimal solution we've found is to use a host of different algorithms (SVMs (support vector machines), Naive Bayes, LSTMs (long short-term memory networks), and feedforward neural networks) to match user questions to specific intents. The ensemble of predictors yields a confidence score for each intent, and we simply take the best match. Such an approach provides more accurate answers to users.
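The ensemble idea can be sketched as follows. This is a minimal illustration, not the production setup: the training data is a toy set, and a small feedforward network (MLP) stands in for the neural models the article lists; averaging per-intent probabilities and taking the argmax is one simple way to combine confidence scores.

```python
# Minimal ensemble sketch: average per-intent probabilities across several
# classifiers and take the best-scoring intent.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Toy labeled data; a real VCA would train on thousands of questions.
train_questions = [
    "reset my password", "forgot my password", "password not working",
    "check my balance", "how much money do i have", "what is my balance",
    "block my card", "my card was stolen", "cancel my card",
]
train_intents = ["password"] * 3 + ["balance"] * 3 + ["card"] * 3

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_questions)

models = [
    MultinomialNB(),
    SVC(probability=True, random_state=0),   # probability=True enables predict_proba
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
]
for model in models:
    model.fit(X, train_intents)

def predict_intent(question):
    """Average per-intent probabilities over the ensemble, return best match."""
    x = vectorizer.transform([question])
    mean_probs = np.mean([m.predict_proba(x)[0] for m in models], axis=0)
    best = int(np.argmax(mean_probs))
    # classes_ is ordered identically for every scikit-learn classifier
    return models[0].classes_[best], float(mean_probs[best])

intent, confidence = predict_intent("i think my card got stolen")
print(intent, round(confidence, 2))
```

The averaged confidence score is also useful at runtime: below some threshold, the VCA can hand the conversation off to a human agent instead of guessing.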

Extraction of meaning or more specifically, semantic relations between words in free text is a complex task. The complexity is mostly due to the rich web of relations between the conceptual entities the words represent.

For example, a sentence as simple as "my older brother rides the bike" contains a lot of semantic richness, as the hidden baggage is not evident from the tokenized surface representation (e.g. my brother is a human, the bike is not a living entity, my brother and I have the same mother/father, I am younger than my brother, and the bike cannot ride my brother).

Shared collectively, this knowledge makes it possible to communicate with others. Without it, there is no consistent interpretation and no mutual understanding. When reading a piece of text, you're not just looking at the symbols but actually mapping them to your own conceptual representation of the world. It is this mapping that makes the text meaningful. A sentence will be considered nonsensical if mismatches are found during the mapping.

Since the computers manufactured today do not include a model of the world as part of the operating system, they are largely clueless when fed unstructured data such as free text. The way a computer sees it, a sentence is just a sequence of symbols with no apparent relations other than their order in the sentence. As the problems related to financial services can be rather specific, you have to augment the typical pipeline of NLP and machine learning with semantic enrichment of the inputs. You must devise semantic ontologies that help identify users' problems in the financial and telecom sectors. The underlying idea of semantic ontologies is to encode commonalities between concepts (e.g. cats and dogs are both pets) as additional information, yielding a denser representation of tokens. Another step forward is an architecture capable of semantically tagging both known and unknown tokens based on context.
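One simple way to picture semantic enrichment: map tokens to the concepts they belong to, so that distinct surface forms activate shared features. The tiny ontology below is invented for illustration; real ontologies for the financial or telecom domain would be far larger and hand-curated.

```python
# Illustrative sketch (the ontology entries are invented): enrich tokens
# with ontology concepts so "visa" and "mastercard" both activate a shared
# "card" concept, giving the downstream classifier denser features.
ONTOLOGY = {
    "visa": ["card", "payment"],
    "mastercard": ["card", "payment"],
    "iban": ["account", "payment"],
    "overdraft": ["account", "credit"],
}

def enrich(tokens):
    """Return the original tokens plus any ontology concepts they map to."""
    enriched = list(tokens)
    for token in tokens:
        enriched.extend(ONTOLOGY.get(token, []))
    return enriched

print(enrich(["my", "visa", "is", "blocked"]))
# -> ['my', 'visa', 'is', 'blocked', 'card', 'payment']
```

With this enrichment, a classifier that has only ever seen "my visa is blocked" can still recognize "my mastercard is blocked," because both queries share the "card" and "payment" concept features.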

VCAs must handle the bulk of cases where users ask a question in natural language. The VCA should understand the problem and actually help the user resolve it without involving human support. For narrow, purely rule-based VCAs, the resolve rates can be higher, but in our experience people are impatient when dealing with customer service. Instead of reading instant articles and suggested topics, they wish to express their problems as specific questions and expect a relevant answer. Understanding free text is a tough problem, and current autonomous resolve rates that hover around 10-20 percent reflect that. Even so, considering that larger companies need hundreds of people to solve highly repetitive issues for their customers, automating that percentage can save a lot of working hours and allow humans to focus on the more creative and demanding aspects of their work.

Indrek Vainu is the CEO and co-founder of AlphaBlues, a company automating enterprise customer service chat with artificial intelligence.


