Conversational AI is a central subfield of Natural Language Processing (NLP) that makes it possible for a human to hold a conversation with a machine. Every time the human says or asks something, the whole conversation history is sent along with the new message, so the AI has the context in memory and can produce relevant responses. Modern chatbots leverage conversational AI and can do more than simply hold a conversation: for example, they can detect customer intents, search documents, and understand the customer's tone and adapt their own (anger, joy, sarcasm...).
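To make the history mechanism concrete, here is a minimal sketch of how a chatbot might assemble the prompt it sends to the model on every turn. The function and variable names (`build_prompt`, `history`) are illustrative, not taken from any specific library:

```python
# Minimal sketch: the full conversation history is concatenated and sent
# together with every new user message, so the model keeps the context.

def build_prompt(history, new_message):
    """Concatenate the whole conversation history plus the new message
    into a single prompt for the model to complete."""
    lines = []
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Human: {new_message}")
    lines.append("AI:")  # the model generates its reply from here
    return "\n".join(lines)

history = [
    ("Human", "Hello, who are you?"),
    ("AI", "I am an AI assistant. How can I help you today?"),
]
prompt = build_prompt(history, "Can you track my order?")
```

After the model replies, the new exchange is appended to `history`, so the next call carries the full context again.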
Until recently, chatbots were very limited. But with the arrival of huge modern models like GPT-3 and GPT-J, it is now possible to easily create advanced chatbots that are both fluent and relevant.
GPT-J is the most advanced open-source NLP model as of this writing, and the best GPT-3 alternative. The model is so large that it can adapt to many situations and sound remarkably human. For advanced use cases, it is possible to fine-tune GPT-J (train it on your own data), which is a great way to get a chatbot perfectly tailored to your company, product, or industry.
More and more companies want to leverage chatbots, either to build an advanced AI-based product or to improve their internal productivity. Here are a few examples:
The most popular chatbot application is to automatically help customers without relying on a support person. It dramatically improves responsiveness and relieves the support team so they can focus on the most advanced questions. A good support chatbot is able to search documents for customers, answer contractual or technical questions, detect customer tone and intent...
Some video games now include conversational AI capabilities, so players can talk naturally with the machine. It makes modern games much more interactive, especially because modern conversational AIs can adapt their tone to the situation (anger, joy, sarcasm...).
It's sometimes hard for users to find what they're looking for, especially when there are many products or when the products are complex. In that case, building a chatbot that helps customers and points them to the right product is a very good solution.
The healthcare industry leverages chatbots in order to talk with patients and automatically suggest a diagnosis.
In order to make the most of GPT-J, it is crucial to keep in mind the so-called few-shot learning technique: by giving only a couple of examples to the AI, it is possible to dramatically improve the relevance of the results, without even training a dedicated model.
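In practice, few-shot learning for a chatbot means prepending a short description and a couple of hand-written example exchanges to the real user input, so the model imitates their style. Here is a hedged sketch; the wording of the examples is hypothetical:

```python
# Few-shot learning sketch: a couple of example exchanges are placed at the
# top of the prompt so the model picks up the desired tone and format.

FEW_SHOT_EXAMPLES = """\
This is a discussion between a customer and a friendly support agent.

Customer: Hi, I can't log into my account.
Agent: Sorry to hear that! Could you tell me the email address you signed up with?
Customer: Do you ship to Canada?
Agent: Yes, we ship to Canada. Delivery usually takes 5 to 7 business days.
"""

def few_shot_prompt(user_message):
    # The model sees the examples first, then the new message to answer.
    return f"{FEW_SHOT_EXAMPLES}Customer: {user_message}\nAgent:"

prompt = few_shot_prompt("How can I reset my password?")
```

The model then completes the text after the final `Agent:`, following the pattern set by the examples.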
Sometimes, few-shot learning is not enough (for example, if your chatbot relies on very specific content, bound to your company only). In that case, the best solution is to fine-tune (train) GPT-J on your own data.
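Fine-tuning requires a dataset of example exchanges. A common way to prepare one is JSONL (one JSON object per line); here is a hedged sketch, where the `"prompt"`/`"completion"` field names are an assumption for illustration and the exact format expected by a given platform may differ:

```python
import json

# Hedged sketch: fine-tuning data as JSONL, one prompt/completion pair per
# line. Field names and formatting conventions vary between platforms.

examples = [
    {"prompt": "Customer: What is your refund policy?\nAgent:",
     "completion": " You can request a full refund within 30 days of purchase."},
    {"prompt": "Customer: Do you offer phone support?\nAgent:",
     "completion": " Yes, phone support is available Monday to Friday, 9am to 5pm."},
]

jsonl = "\n".join(json.dumps(e) for e in examples)
# In practice you would write this to a file, e.g. open("train.jsonl", "w").
```

The more representative the pairs are of your real customer conversations, the more tailored the fine-tuned model will be.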
Building an inference API for conversational AI based on GPT-J is a necessary step as soon as you want to use a chatbot in production. But building such an API is hard: first because you need to code the API (the easy part), but also because you need to build a highly available, fast, and scalable infrastructure to serve your models under the hood (the hardest part). It is especially hard for machine learning models, as they consume a lot of resources (memory, disk space, CPU, GPU...).
NLP Cloud offers a chatbot and conversational AI API based on GPT-J that lets you perform conversational AI out of the box, with impressive results. If the base GPT-J model is not enough, you can also fine-tune/train GPT-J on NLP Cloud and automatically deploy the new model to production with a single click.