
Extending Your Watson Chatbot

Image via https://unsplash.com/@brett_jordan

How to extend the reach of your Watson-based chatbot

Daniel Toczala

It seems like everyone is building chatbots today. The idea of an intelligent assistant that can answer customer questions for a fraction of the cost of a fully staffed help desk or support team resonates across many different industries. A recent study by Forrester Consulting indicated that companies saw a benefit of over $5 per contained conversation handled by Watson Assistant. This study found that the majority of use cases addressed by Watson Assistant-based chatbots fell into three basic categories:

  1. The starting point for many customers seemed to be customer self-service, providing AI-powered automated assistance to customers through web, mobile and/or voice channels.
  2. A second starting point was employee self-service, essentially bringing those capabilities in-house, to support employees 24/7 by answering their HR/IT questions more quickly and minimizing time away from their priority work.
  3. The last use case was agent assist, enabling human agents to better handle customer inquiries by helping them find answers to complex questions.

The first two cases are classic deployments of Watson Assistant technology: they represent AI-infused assistants that answer frequently asked questions. These are handled quite easily and can be deployed fairly rapidly using the existing Watson Assistant capabilities. Getting to this first level is good. But what about going to the next level and getting to that agent assist use case and those more complex questions?

The third set of agent assist use cases requires answering more complex questions. This is what some people refer to as a “long-tail chatbot”, and in the past a long-tail chatbot required some sort of orchestration app that would pass these long-tail questions along to an instance of the Watson Discovery service.

While this worked, it required some software development skills and the coordination of two different Watson services (Watson Assistant and Watson Discovery). That made it difficult for some of our smaller customers, or for customers with smaller development teams. The same could be said for customers who had outsourced the development of their chatbot.
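
To give you an idea of what that older approach involved, here is a minimal sketch of the kind of orchestration code people would write, using the Watson Python SDK. The API keys, IDs, service URLs, and the 0.4 confidence threshold are all placeholders, and a real orchestration app would be more involved than this:

```python
from ibm_watson import AssistantV1, DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials, IDs, and threshold -- substitute your own.
assistant = AssistantV1(
    version='2019-02-28',
    authenticator=IAMAuthenticator('ASSISTANT_API_KEY')
)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

discovery = DiscoveryV1(
    version='2019-04-30',
    authenticator=IAMAuthenticator('DISCOVERY_API_KEY')
)
discovery.set_service_url('https://api.us-south.discovery.watson.cloud.ibm.com')

def answer(question):
    # Ask the dialog skill (workspace) first.
    wa = assistant.message(
        workspace_id='WORKSPACE_ID',
        input={'text': question}
    ).get_result()

    intents = wa.get('intents', [])
    if intents and intents[0]['confidence'] >= 0.4:
        # A short-tail (FAQ) question -- the dialog skill has an answer.
        return wa['output']['text']

    # A long-tail question -- fall back to the Discovery collection.
    disco = discovery.query(
        environment_id='ENVIRONMENT_ID',
        collection_id='COLLECTION_ID',
        natural_language_query=question,
        count=3
    ).get_result()
    return [doc.get('extracted_metadata', {}).get('title', 'Untitled')
            for doc in disco['results']]
```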

Answering these tougher questions is where some of the major portions of those economic benefits (remember that $5 per conversation?) can be realized. It can also provide much higher levels of customer satisfaction and customer engagement. Imagine being able to guide your customers to find their own answers using the content you already have in your internal knowledge base. Wouldn’t this help speed the resolution of customer issues and improve customer satisfaction?

Now with Watson Assistant Plus, you can do this integration of Watson Assistant and Watson Discovery without writing any code. It works well, and the improved results can be realized quickly. The key concept to understand is the idea of a Search Skill. A search skill is used to route complex customer inquiries to the IBM Watson Discovery service — so you can think of it as the skill required to search a collection of documents for an answer. This is in contrast to the Dialog Skill — which is the structure of a dialog that is used to determine user intent and answer short tail (FAQ) types of questions.
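
One nice side effect of this approach is that your client application does not have to change at all. If you have a front end (a custom web UI or a Slack integration, for example) that talks to the Assistant v2 API, it just keeps calling the message endpoint and checks which kind of response came back. Here is a rough sketch of what that might look like, with placeholder IDs; the exact shape of the search response can vary with your plan and SDK version, so verify the field names against the API reference:

```python
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(
    version='2020-04-01',
    authenticator=IAMAuthenticator('ASSISTANT_API_KEY')
)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

session = assistant.create_session(assistant_id='ASSISTANT_ID').get_result()

response = assistant.message(
    assistant_id='ASSISTANT_ID',
    session_id=session['session_id'],
    input={'message_type': 'text', 'text': 'How do I rotate my VPN token?'}
).get_result()

for item in response['output']['generic']:
    if item['response_type'] == 'text':
        # Short-tail answer produced by the dialog skill.
        print(item['text'])
    elif item['response_type'] == 'search':
        # Long-tail answer: documents the search skill pulled back from
        # the Discovery collection -- no Discovery code on our side.
        for result in item.get('results', []):
            print(result.get('title'), result.get('url'))
```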

Connecting to the Discovery service is only part of the solution. You still need to configure and build a collection of documents that will be used for the search skill in the Watson Discovery service. You will want to create a new collection in Watson Discovery and ingest some documents that are relevant to providing the answers that you are interested in. You need to think about what kind of information you have available to you, and where that information resides. Can you get access to it to easily ingest it? Can you keep things up-to-date, and continue to ingest new data?
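
You can do all of this through the Discovery tooling in the browser, but if you would rather script the collection setup and ingestion, a minimal sketch with the Python SDK might look like the following. The environment ID, collection name, and file details are placeholders:

```python
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

discovery = DiscoveryV1(
    version='2019-04-30',
    authenticator=IAMAuthenticator('DISCOVERY_API_KEY')
)
discovery.set_service_url('https://api.us-south.discovery.watson.cloud.ibm.com')

# Create a collection to hold the knowledge base documents.
collection = discovery.create_collection(
    environment_id='ENVIRONMENT_ID',
    name='support-knowledge-base',
    description='Articles used by the chatbot search skill'
).get_result()

# Ingest a document into the new collection.
with open('vpn-troubleshooting.pdf', 'rb') as doc:
    discovery.add_document(
        environment_id='ENVIRONMENT_ID',
        collection_id=collection['collection_id'],
        file=doc,
        filename='vpn-troubleshooting.pdf',
        file_content_type='application/pdf'
    ).get_result()
```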

So knowing about the availability of this easy integration is nice, but what exactly does it look like? If you are strictly a business user with no interest in how this works, then you can stop reading here. This is where all of the boring technical details begin…

But if you are a technical user, who wants a brief walk-through of how to configure this capability, then please read on. I started with a Watson Assistant Plus instance (but it works with Premium instances too), and had a simple chatbot that was answering a lot of different short tail questions. We had a fairly complex dialog skill, with a couple of dozen different intents, and a few dozen different entities.

When you go in and look at your Watson Assistant instance, it will look something like this:

Starting page for Watson Assistant Plus

You will have the ability to create a new Assistant by pressing the “Create Assistant” button and then naming your new assistant. You’ll want to include a short description of what your assistant does.

Create Assistant screen

At this point, you are now ready to begin extending your original chatbot. You see a new screen that allows you to choose a Dialog Skill, as well as a Search Skill. For the Dialog skill, you will choose the same dialog skill that you had been using in the past. If you click to go into the skill, you will now notice some new capabilities (like Disambiguation) that are available to you. You can find out more about these additional capabilities, and the pricing implications, by checking out the Watson Assistant pricing plans.

Adding your Dialog and Search skills

The thing that is going to differentiate our chatbot, and make it “smarter”, is the inclusion of the Watson Discovery integration to address those “long tail” questions. So now let’s go and add a Search Skill. Click on the button to add a search skill, and you will go to a dialog where you can now choose between an existing search skill (if you had already created one), or a new search skill. We’ll create a new skill, and name it appropriately.

We then hit the Continue button, and now we need to select a Discovery instance (you should have a good naming convention for your services; at times like this, the reasons become obvious), and a collection within that Discovery instance. I had already created a collection in my Discovery instance, so I could just select the right instance and collection, and go ahead and configure my chatbot. If you haven’t done this yet, you may need to pause here and create a new Discovery instance, as well as a collection to hold your knowledge base. Make sure that you create the right size Discovery collection (see my article on Watson Discovery at the Size You Want). We don’t want to have unexpected expenses.

So once you have created your Discovery instance and collection, and have selected it as your Search skill, you will be presented with the final configuration screen for your search skill. This is where your work with the Discovery service becomes important. All of the data enrichments and metadata that were collected when you ingested documents into your Discovery collection will now be available to help you configure your integration.

Search skill configuration

Let’s look at the things that you can configure here, and discuss the impact that they have on your chatbot. A big driver of what you see here is the set of enrichments that you selected when ingesting documents into your Discovery collection. These enrichments are applied as your content is ingested, and they provide additional context for the data.

In my case, I added categories, concepts, keywords, and entities as enrichments to look for in the content that I was ingesting. I also wanted to provide a URL to the original article for my chatbot users, so I ended up using a small utility (more on that in the future; it was a nice little routine written by a fellow CSM) that also set the URL of the source article when ingesting the article.
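
I can’t share that utility here, but the heart of it is simply passing a small metadata payload along with each document at ingestion time. A minimal sketch of the idea (the field name, file type, and URL are illustrative, not the actual routine, and it reuses the `discovery` client from the earlier snippet):

```python
import json

def ingest_with_source_url(path, source_url, environment_id, collection_id):
    """Ingest a document and record the URL of the original article as
    custom metadata, so the chatbot can link back to the source."""
    with open(path, 'rb') as doc:
        return discovery.add_document(
            environment_id=environment_id,
            collection_id=collection_id,
            file=doc,
            filename=path,
            file_content_type='application/pdf',
            metadata=json.dumps({'url': source_url})
        ).get_result()

ingest_with_source_url(
    'vpn-troubleshooting.pdf',
    'https://example.com/kb/vpn-troubleshooting',
    'ENVIRONMENT_ID',
    'COLLECTION_ID'
)
```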

So the first field to set up is the title. For this, I selected the title of the ingested article. That is the title that my users will see returned as a potential answer for their “long tail” query. The second field is the body of your response. Because our chatbot was a Slack bot, we opted to keep this blank, since our answers were getting too long for the typical Slack UI. We then chose the URL of the original article to return as the URL. This means that any question routed to our search skill will return the three highest-ranked article titles (with links) that relate to the user’s question.
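
Before wiring this into the search skill, you can sanity-check that mapping directly against your Discovery collection. A quick sketch, reusing the Discovery client from the earlier snippets and assuming the custom url metadata set by the ingestion utility:

```python
results = discovery.query(
    environment_id='ENVIRONMENT_ID',
    collection_id='COLLECTION_ID',
    natural_language_query='How do I reset my VPN token?',
    count=3  # mirror the three results the search skill returns
).get_result()

for doc in results['results']:
    title = doc.get('extracted_metadata', {}).get('title', 'Untitled')
    url = doc.get('metadata', {}).get('url', '')
    print(f'{title} -> {url}')
```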

The three configuration settings on the bottom are important as well. These are the responses that are returned for a variety of conditions. The Message field is the conversational text that is returned when relevant information is found. The No Results Found field is the conversational text returned when relevant answers cannot be found in your Discovery collection. Finally, the Connectivity Issue field is the conversational text that is returned when the Discovery service cannot be reached. It is important to have different messages for each of these, so you can quickly determine how the Discovery collection is responding to inputs from your chatbot.

So now you have this configured — how does it work? The summary in the online documentation provides some overview of how this works, but you can change some of the behavior by changing things in your Dialog Skill. If you look at your Dialog Skill, you will notice that the response in the “Anything Else” cell is now “Search Skill”. You have the ability to reach out and use your search skill (and even modify the query and filter used to extract information from the Discovery collection) from anywhere in your dialog.
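
As an illustration, a dialog node that hands a question off to the search skill with a modified query and filter is configured through the node’s JSON editor. The little script below just prints the kind of JSON you would paste in; treat the field names and the filter value as an example to verify against the current Watson Assistant documentation for your plan, not as a definitive template:

```python
import json

# Illustrative dialog-node response: send the user's utterance to the
# search skill as a natural-language query, narrowed by a Discovery filter.
node_response = {
    "output": {
        "generic": [
            {
                "response_type": "search_skill",
                "query": "<? input.text ?>",
                "query_type": "natural_language",
                "filter": "enriched_text.entities.text:\"VPN\""
            }
        ]
    }
}

print(json.dumps(node_response, indent=2))
```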

I strongly suggest that you use the default “Anything Else” behavior at first, and play with things a bit to see how your search skill reacts and provides answers to user questions.

Chatbots can have a huge financial impact on an organization, and one of the more typical use cases (the agent assist use case) is much more impactful and effective when the chatbot can handle “long tail” questions. Watson Assistant now provides an easy, no-code integration with the Watson Discovery service, allowing an organization to easily extend the capabilities of its existing Watson Assistant-driven chatbots.

Source: https://chatbotslife.com/entending-your-watson-chatbot-fbd3291e0dd2?source=rss—-a49517e4c30b—4
