Getting Started with Chatbots and NLP: Part One


Editor’s note: This is the first post in a series that explores the challenges and opportunities involved with chatbots and natural language processing to solve customer service needs.

Chatbots vs. Humans: It Doesn’t Have To Be Binary

According to VentureBeat, more than 30,000 branded chatbots were released in 2016. Many are predicting this trend will only accelerate in 2017. Most of the Fortune 1000 clients we speak to are interested in chatbots but recognize the challenges inherent in the first generation of these technologies. One of the biggest inhibitors to chatbot adoption—particularly to move beyond proofs-of-concept to real production usage—is how to make chatbots intelligent enough to be valuable to customers rather than an innovation novelty. There are two contributing factors to this concern: the difficulty inherent in making chatbots actually intelligent, and the practical reality that some requests must be handled by humans.

Challenges with Chatbot Intelligence

Making a chatbot that responds to perfectly crafted questions that require only one conversational turn—that is, a dialog consisting of a single question resolved with a single answer—is relatively easy. Things become much more complicated when the substance of the questions that can be asked is not predictable, or when the way questions can be asked has significant variability. Unfortunately, this is the reality facing most companies contemplating chatbot interfaces; few domains can be easily condensed to a short list of predictable questions and answers.

Additionally, the way that most customers expect to interact is not limited to a single conversational turn. In reality, most customers ask questions and reveal context over the course of multiple interactions in a single dialog. For example, which of the following interactions seems more natural?

Customer: What are the current rates for mortgages?

Bot/Agent: Sure, I can help you with that. What’s the property value and how much are you thinking of financing?

Customer: I’m thinking of a house around $240,000 with $60,000 down.

Bot/Agent: Okay, and were you interested in a fixed rate mortgage or adjustable rates?

Customer: What are my adjustable rate options?

Bot/Agent: There is a 5/1 ARM we have right now that is currently at 3.6% with a 30 year term.

Customer: That’s perfect. What would the payments and fees be?

Bot/Agent: There is a $2,265 origination fee and your estimated payment would be $1,095/month.

Customer: Okay, thanks for your help!


Customer: Can you give me a quote for a 30 year 5/1 ARM financing $180,000 with a 75% LTV?

Bot/Agent: Sure, it would be a $2,265 origination fee with an estimated payment of $1,095/month.

Customer: Okay, thanks for your help!

Most customers expect an interaction like the first example, but most bots are designed to handle only interactions like the second. The other problem with many chatbot proofs-of-concept is that they have no way to bring a human agent into the process, whether to handle an escalation or to override the bot when it misunderstands the interaction. Fortunately, these problems can be addressed with the right combination of machine learning tools and a design that permits a chatbot/human hybrid.
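To make the difference between the two interactions concrete, here is a minimal sketch of multi-turn slot filling: the bot accumulates context (property value, down payment, rate type) across turns and only produces a quote once every slot is known. The slot names, prompts, and quote format are illustrative assumptions, not part of any real product.

```javascript
// Minimal multi-turn slot-filling sketch: context accumulates across
// turns instead of arriving in one perfectly phrased question.
// Slot names and prompt wording are illustrative assumptions.
function createDialog() {
  const context = {}; // persists between conversational turns
  const slots = ['propertyValue', 'downPayment', 'rateType'];

  return function turn(update) {
    Object.assign(context, update); // fold the new turn into the context
    const missing = slots.find((s) => context[s] === undefined);
    if (missing) return `Can you tell me your ${missing}?`;
    return `Quoting a ${context.rateType} loan on $${context.propertyValue} ` +
           `with $${context.downPayment} down.`;
  };
}

const turn = createDialog();
turn({});                                            // bot asks for propertyValue
turn({ propertyValue: 240000, downPayment: 60000 }); // bot asks for rateType
console.log(turn({ rateType: '5/1 ARM' }));
// → 'Quoting a 5/1 ARM loan on $240000 with $60000 down.'
```

The second, single-turn interaction corresponds to the degenerate case where the customer supplies every slot in one message; the harder (and more natural) first interaction is what the rest of this series works toward.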

Foundations of a Chatbot/Human Hybrid

Our hybrid solution has some very simple requirements:

  1. When a customer begins an interaction, the chatbot looks at the request from the customer to determine if it a) understands the request and b) has the necessary knowledge to respond.
  2. If the chatbot has high confidence that it understands the question and has the necessary knowledge, it responds directly to the customer.
  3. If the chatbot has low confidence in its understanding or knowledge, it transparently brings a human agent into the discussion. (By “transparently”, I mean that the customer cannot discern that a handoff has occurred.)
  4. If the chatbot has medium confidence in its understanding or knowledge, it formulates a response that a human agent approves before it is sent to the customer.
  5. The chatbot maintains context in between conversational turns and incorporates that context in the responses.
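Requirements 2 through 4 amount to routing on a confidence score. A minimal sketch of that routing logic follows; the thresholds (0.8 and 0.4) are illustrative assumptions, and in practice they would be tuned against real dialogs.

```javascript
// Route a dialog turn based on the bot's confidence score (0.0 to 1.0).
// Thresholds are illustrative assumptions, not tuned values.
function routeByConfidence(confidence) {
  if (confidence >= 0.8) return 'respond';  // bot answers the customer directly
  if (confidence >= 0.4) return 'review';   // agent approves the bot's draft first
  return 'escalate';                        // agent takes over transparently
}

// Example: a low-confidence classification goes straight to a human.
console.log(routeByConfidence(0.25)); // → 'escalate'
```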

There are a lot of concepts that must be incorporated to meet these requirements. In this blog post, we will start with a basic foundation that we can build upon. That foundation begins with something very simple: the ability to connect a chatbot to a human agent. This connection will act as a harness that allows us to incorporate machine learning into dialogs and test it without requiring complex integration.

Architecture and Tools

We decided to build this foundation on Microsoft’s new machine learning cloud ecosystem with a Node.js application and messaging backbone.

  • Microsoft Bot Framework: Provides an SDK for both Node.js and .NET as well as a REST API that enables multiple chat clients (e.g. Skype, Messenger, Slack) to be supported with a single, common interface. The SDK abstracts much of the implementation details of different chat interfaces.
  • Microsoft Bot Emulator: Used in conjunction with the Microsoft Bot Framework, this tool allows us to simulate a chat interface like Facebook Messenger without having to be logged in to Facebook. This simplifies testing and may also be easier for some enterprise environments where access to Facebook can be restricted.
  • Socket.io: Provides server-side npm modules for use in a Node.js application and client-side JavaScript libraries for use in a browser that simplify web socket interactions. We will create a web-based UI for the agent's interactions with the bot and customer. The Socket.io libraries will be the glue that allows us to join these interactions together.

Our initial architecture provides communication between the chatbot (initiated through a service supported by the Microsoft Bot Framework, such as Facebook Messenger) and a human agent in a web UI. Messages from the customer via the bot will be echoed to a web UI used by the agent, and messages from the agent via the web UI will be echoed through the bot. Our basic architecture will look like this:


The Redis Labs Cloud will be used as the pub/sub and persistence provider for messages. Any Redis instance would work for this, but using Redis Labs' cloud-hosted service simplifies our implementation.
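As a sketch of how the pieces fit together, the snippet below models the messaging backbone with a tiny in-memory stand-in that mirrors the two Redis features this foundation relies on: pub/sub (the PUBLISH/SUBSCRIBE commands) for echoing messages between bot and agent, and list persistence (RPUSH/LRANGE) for dialog history. The channel and key names are illustrative assumptions; the real implementation would use a Redis client pointed at the hosted instance.

```javascript
// In-memory stand-in mirroring the two Redis features used here:
// pub/sub (PUBLISH/SUBSCRIBE) and list persistence (RPUSH/LRANGE).
// Channel and key names are illustrative assumptions.
class MiniRedis {
  constructor() {
    this.subscribers = {}; // channel -> [callback]
    this.lists = {};       // key -> [message]
  }
  subscribe(channel, callback) {
    (this.subscribers[channel] = this.subscribers[channel] || []).push(callback);
  }
  publish(channel, message) {
    (this.subscribers[channel] || []).forEach((cb) => cb(message));
  }
  rpush(key, message) {
    (this.lists[key] = this.lists[key] || []).push(message);
  }
  lrange(key, start, stop) {
    const list = this.lists[key] || [];
    return list.slice(start, stop === -1 ? undefined : stop + 1);
  }
}

const bus = new MiniRedis();
const agentUI = [];      // what the agent's web UI displays
const customerChat = []; // what the bot delivers to the customer

// Echo customer messages to the agent UI, agent replies back through the bot,
// and persist every message so the dialog history can be replayed later.
bus.subscribe('customer-to-agent', (msg) => agentUI.push(msg));
bus.subscribe('agent-to-customer', (msg) => customerChat.push(msg));
['customer-to-agent', 'agent-to-customer'].forEach((ch) =>
  bus.subscribe(ch, (msg) => bus.rpush('dialog-history', msg))
);

bus.publish('customer-to-agent', 'What are the current rates for mortgages?');
bus.publish('agent-to-customer', 'A 5/1 ARM is currently at 3.6%.');

console.log(bus.lrange('dialog-history', 0, -1).length); // → 2
```

Persisting the history alongside the pub/sub channels is what lets an agent who joins a conversation mid-stream see everything the bot and customer have already said.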

What’s Next

In my next blog post, I’ll walk through the source code that implements the system described above and set the stage for adding natural language processing to the system. With that foundation, we’ll be prepared to begin building solutions to real-life customer service requests.

Chris Hart



Chris Hart is a co-founder and CTO of Levvel. He has more than 15 years of technology leadership experience and has led software development, infrastructure, and QA organizations at multiple Fortune 100 companies. In addition to his enterprise experience, Chris has helped start or grow multiple early-stage technology companies. In the five years before starting Levvel, Chris was focused on financial technology solutions in the consumer, commercial and wealth management space. His technical expertise and enterprise-scale, global program management background helps Levvel’s clients transform their businesses.