NLP Cloud is an API that makes it easy to use Natural Language Processing in production. The API is based on the best open-source pre-trained models. You can also use your own models or train models on the platform. NLPCloud.io covers most text understanding and text generation features: entity extraction (NER), sentiment analysis, text classification, text summarization, question answering, text generation, Part-of-Speech (POS) tagging, and more!
The API is free for up to 3 requests per minute, which is a good way to easily test the quality of the models. The first paid plan then costs $29 per month (for 15 requests per minute).
Let's see how to use the API in this tutorial.
Deploying AI models to production is a frequent source of project failure. Natural Language Processing models are very resource intensive, and ensuring high availability of these models in production, with good response times, is a challenge. It takes an expensive infrastructure and advanced DevOps, programming, and AI skills.
NLP Cloud's goal is to help companies quickly leverage their models in production, without any compromise on quality, and at affordable prices.
Signing up is very quick. Just visit the registration page and fill in your email and password (register here).
You are now in your dashboard, where you can see your API token. Keep this token safe: you will need it for every API call you make.
Several code snippets are provided in your dashboard in order for you to quickly get up to speed. For more details, you can then read the documentation (see the documentation here).
NLP Cloud provides you, out-of-the-box, with most of the typical Natural Language Processing features, either thanks to pre-trained spaCy or Hugging Face models, or by uploading your own spaCy models.
In order to make the API easy to use, NLP Cloud provides you with client libraries in several languages (Python, Ruby, PHP, Go, Node.js). In the rest of this tutorial, we are going to use the Python lib.
Use PIP in order to install the Python lib:
pip install nlpcloud
Entity extraction is done via spaCy. All the spaCy "large" pre-trained models are available, covering 15 languages (more details on all these models on the spaCy website). You can also upload custom in-house spaCy models that you developed yourself in order to use them in production. If that's what you want, just go to the "Custom Models" section in your dashboard:
Now let's imagine that you want to extract entities from the sentence "John Doe has been working for Microsoft in Seattle since 1999." thanks to the pre-trained spaCy model for English ("en_core_web_lg"). Here's how you should proceed:
import nlpcloud

client = nlpcloud.Client("en_core_web_lg", "<your API token>")
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
It will return the content of each extracted entity and its position in the sentence.
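As a sketch of how you might consume such a result, you could filter the extracted entities by label. Note that the response shape below is an assumption for illustration, not copied from the API documentation; check the docs for the exact format:

```python
# Hypothetical response shape: a list of entities, each with its text,
# spaCy label, and character offsets in the input sentence.
sample_response = {
    "entities": [
        {"text": "John Doe", "type": "PERSON", "start": 0, "end": 8},
        {"text": "Microsoft", "type": "ORG", "start": 30, "end": 39},
        {"text": "Seattle", "type": "GPE", "start": 43, "end": 50},
        {"text": "1999", "type": "DATE", "start": 57, "end": 61},
    ]
}

def entities_of_type(response, entity_type):
    """Return the text of every entity matching the given spaCy label."""
    return [e["text"] for e in response["entities"] if e["type"] == entity_type]

print(entities_of_type(sample_response, "ORG"))  # ['Microsoft']
```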
Sentiment analysis is achieved thanks to Hugging Face transformers and Distilbert Base Uncased Finetuned SST 2 English. Here's an example:
import nlpcloud

client = nlpcloud.Client("distilbert-base-uncased-finetuned-sst-2-english", "<your API token>")
client.sentiment("NLP Cloud proposes an amazing service!")
It will tell you whether the general sentiment in this text is rather positive or negative, and its likelihood.
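Here is a rough sketch of how you could act on that result. The response shape is an assumption modeled on typical Hugging Face sentiment pipelines, not the official format, so verify it against the documentation:

```python
# Hypothetical response shape: a list of labels, each with a score.
sample_response = {
    "scored_labels": [
        {"label": "POSITIVE", "score": 0.9996}
    ]
}

def is_positive(response, threshold=0.5):
    """True if the highest-scoring label is POSITIVE with enough confidence."""
    top = max(response["scored_labels"], key=lambda sl: sl["score"])
    return top["label"] == "POSITIVE" and top["score"] >= threshold

print(is_positive(sample_response))  # True
```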
Text classification is achieved thanks to Hugging Face transformers and Facebook's Bart Large MNLI. Here is an example:
import nlpcloud

client = nlpcloud.Client("bart-large-mnli", "<your API token>")
client.classification(
    """John Doe is a Go Developer at Google. He has been working there for 10 years and has been awarded employee of the year.""",
    ["job", "nature", "space"],
    True,
)
As you can see, we are passing a block of text we are trying to classify, along with the possible categories. The last argument is a boolean that defines whether several categories can apply at once, or only a single one.
It will return the likelihood for each category.
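To pick the winning category from such a result, you could pair each label with its score and sort. Again, the response shape here is an assumption (parallel label and score lists, as zero-shot classification pipelines commonly return), so double-check the actual format in the docs:

```python
# Hypothetical response shape: labels and scores as parallel lists.
sample_response = {
    "labels": ["job", "nature", "space"],
    "scores": [0.92, 0.05, 0.03],
}

def top_category(response):
    """Pair each label with its score and return the most likely pair."""
    ranked = sorted(
        zip(response["labels"], response["scores"]),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[0]

print(top_category(sample_response))  # ('job', 0.92)
```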
Text summarization is achieved thanks to Hugging Face transformers and Facebook's Bart Large CNN. Here's an example:
import nlpcloud

client = nlpcloud.Client("bart-large-cnn", "<your API token>")
client.summarization("""The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.""")
It will return a summary of the above. This is an "abstractive" summary, not an "extractive" one, which means that new sentences may be generated rather than merely selecting existing ones, while non-essential sentences are dropped.
Question answering is achieved thanks to Hugging Face transformers and Deepset's Roberta Base Squad 2. Here's an example:
import nlpcloud

client = nlpcloud.Client("roberta-base-squad2", "<your API token>")
client.question(
    """French president Emmanuel Macron said the country was at war with an invisible, elusive enemy, and the measures were unprecedented, but circumstances demanded them.""",
    "Who is the French president?",
)
Here the model answers a question based on the context you provide: the example above will return "Emmanuel Macron".
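A sketch of how you might guard against low-confidence answers follows. The response shape (an answer string plus a confidence score and offsets, as SQuAD-style QA pipelines typically return) is an assumption, so confirm it in the documentation:

```python
# Hypothetical response shape: the answer, its confidence score,
# and its character offsets in the context.
sample_response = {"answer": "Emmanuel Macron", "score": 0.97, "start": 17, "end": 32}

def confident_answer(response, min_score=0.5):
    """Return the answer only if the model is confident enough, else None."""
    if response["score"] >= min_score:
        return response["answer"]
    return None

print(confident_answer(sample_response))  # Emmanuel Macron
```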
Part-of-Speech tagging is achieved thanks to the same spaCy models as those used for entity extraction. For example, if you want to use the English pre-trained model, here's how to do it:
import nlpcloud

client = nlpcloud.Client("en_core_web_lg", "<your API token>")
client.dependencies("John Doe is a Go Developer at Google")
It will return the part-of-speech of each token in the sentence, and its dependency on other tokens.
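As an illustration of consuming such a result — the response shape below is an assumption modeled on spaCy token attributes, not the official format — you could collect all the proper nouns in the sentence:

```python
# Hypothetical response shape: one entry per token with its text and
# part-of-speech tag (Penn Treebank tagset, as spaCy's token.tag_ uses).
sample_response = {
    "words": [
        {"text": "John", "tag": "NNP"},
        {"text": "Doe", "tag": "NNP"},
        {"text": "is", "tag": "VBZ"},
        {"text": "a", "tag": "DT"},
        {"text": "Go", "tag": "NNP"},
        {"text": "Developer", "tag": "NN"},
        {"text": "at", "tag": "IN"},
        {"text": "Google", "tag": "NNP"},
    ]
}

def proper_nouns(response):
    """Collect tokens tagged as proper nouns (NNP in the Penn tagset)."""
    return [w["text"] for w in response["words"] if w["tag"] == "NNP"]

print(proper_nouns(sample_response))  # ['John', 'Doe', 'Go', 'Google']
```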
NLP Cloud is an API for Natural Language Processing that is easy to use and that helps you save a lot of time in production.
More models are available, such as translation, language detection, and text generation, among others.
Also note that, for critical performance needs, GPU plans are available.
I hope this article was useful to some of you! If you have any questions, please don't hesitate to let me know.
CTO at NLPCloud.io