

Enhancing Customer Service with Real-Time Sentiment Analysis: Leveraging LLMs and OpenVINO for Instant Emotional Insights

Large language models (LLMs), with their sophisticated natural language understanding capabilities, have significantly enhanced the functionality of chatbots, enabling them to conduct more meaningful and contextually relevant conversations with customers. These advanced chatbots can manage a wide range of customer service tasks, from answering FAQs to providing personalised recommendations, thus improving efficiency and scalability in customer service operations.

However, despite the advancements, a critical component of human interaction often missing in digital customer service is the nuanced understanding of the customer’s emotional state. Real-time sentiment analysis seeks to bridge this gap. By detecting subtle cues in text that indicate a customer’s mood or emotional state, businesses can provide more empathetic and tailored responses. This not only helps in resolving issues more effectively but also aids in building stronger customer relationships.

The integration of LLMs with sentiment analysis models, both optimised by OpenVINO, addresses this need. The OpenVINO toolkit accelerates model inference, allowing sentiment analysis to run instantaneously as the conversation unfolds. A company can then automate nuanced responses or escalate issues to live agents, depending on the emotional tone detected in customer communications. This streamlines the response process and ensures that customers with pressing or sensitive issues receive the human attention they require.
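
As a rough illustration of the sentiment side, here is a minimal sketch using the optimum-intel OpenVINO integration. The model choice and the [-1, 1] score mapping are my own assumptions for the example, not necessarily what the full application uses:

from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer
import torch

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the original PyTorch weights to OpenVINO IR on the fly
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)

def sentiment_score(text):
    """Return a sentiment score in [-1, 1], from very negative to very positive."""
    inputs = tokenizer(text, return_tensors="pt")
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    return float(probs[1] - probs[0])  # class 1 is positive, class 0 is negative

print(sentiment_score("Thanks, that fixed my problem!"))     # close to 1.0
print(sentiment_score("This is the third time it failed."))  # close to -1.0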

Here is an example of the chatbot during a positive interaction, that is, the user sentiment is close to 1.0:

On the other hand, the following is an example of a negative interaction, that is, the user sentiment is close to -1.0:

You can see how a company can use these opportunities to engage with their customers exactly when the relationship is being put to the test. For example, they could connect the user with a human operator the moment the sentiment turns negative, instead of simply letting the chatbot continue the interaction.
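
In code, that routing decision can be as simple as a threshold check. A minimal sketch, where the threshold value and the escalate_to_human and llm_reply helpers are hypothetical:

ESCALATION_THRESHOLD = -0.5  # illustrative value, tune for your use case

def route_message(message, history):
    score = sentiment_score(message)  # [-1, 1], as sketched above
    if score <= ESCALATION_THRESHOLD:
        # Hand the conversation over to a person (hypothetical helper)
        return escalate_to_human(message, history)
    # Otherwise let the LLM keep handling the conversation (hypothetical helper)
    return llm_reply(message, history)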

In this post I will show you how to create such an application using a large language model and real-time sentiment analysis, with both models running on OpenVINO for fast performance.

The first thing you need to do is make sure your system is up to date and that some basic libraries are installed:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python3-venv build-essential python3-dev git-all

We need to get the sample source code, set up a Python environment for this project, and install the requirements:

git clone https://github.com/samontab/llm_sentiment.git
cd llm_sentiment
python3 -m venv llm_sentiment
source llm_sentiment/bin/activate
python -m pip install --upgrade pip
pip install -r requirements.txt

We are now ready to have a look at the code. Start Jupyter Lab with the llm_sentiment notebook, which is available in the current folder. You can do that by simply executing this:

jupyter lab llm_sentiment.ipynb

Now you should be able to see the code in your browser. If you click on the fast-forward button at the top right, you can execute all the code.

After clicking that button, there will be a popup asking you to restart the kernel. Click on Restart and wait for a few minutes.

You will be able to access the chat interface at the end of the notebook, either directly there or in another tab by clicking on the link shown after this text:

Now you can test it for yourself.

To get more details about the code, make sure to check out the llm_sentiment notebook included in the repository at https://github.com/samontab/llm_sentiment

Posted in OpenVINO.



Using OpenVINO with the OpenCV DNN module

OpenCV 4.8.0 has been released recently. Also, OpenVINO 2023.0.1 was released just last week, so it's a good time to see how they can be used together to perform inference on an IR-optimised model. If you haven't installed OpenVINO yet, you can learn how to do it here. If you haven't installed OpenCV, you can follow this guide.

For this, I’m going to use a monocular depth estimation model, MiDaS. This model takes a color image as input and outputs an inverse depth estimate for every pixel: the closer an object is to the camera, the lighter the pixel, and vice versa. It looks like this:

Let’s grab the original ONNX model and convert it to the Intermediate Representation(IR) to be used with OpenVINO:

omz_downloader --name midasnet
omz_converter --name midasnet

We can now use OpenVINO from inside OpenCV’s DNN module:

// Load the IR model and run inference through OpenVINO's Inference Engine
cv::dnn::Net net = cv::dnn::readNetFromModelOptimizer(
    "../public/midasnet/FP32/midasnet.xml", "../public/midasnet/FP32/midasnet.bin");
net.setPreferableBackend(cv::dnn::Backend::DNN_BACKEND_INFERENCE_ENGINE);

Then we can proceed exactly as how we would normally do with the OpenCV DNN module:

// Build an NCHW blob resized to the network's 384x384 input
cv::Mat blob = cv::dnn::blobFromImage(originalImage, 1., cv::Size(384, 384));
net.setInput(blob);
// The output is a 384x384 inverse depth map
cv::Mat output = net.forward();

And that’s pretty much all you need to use OpenVINO from inside OpenCV’s DNN module. The workflow is almost identical to the standard one: you only need to change how the model is read, and set the backend to the Inference Engine instead of the default OpenCV DNN one.
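
The same calls are exposed through OpenCV’s Python bindings, so if you prefer Python, the whole pipeline looks roughly like this. The input image path and the final normalisation for display are my own additions for the sketch:

import cv2
import numpy as np

# Load the IR model and select OpenVINO's Inference Engine backend
net = cv2.dnn.readNetFromModelOptimizer("../public/midasnet/FP32/midasnet.xml",
                                        "../public/midasnet/FP32/midasnet.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)

image = cv2.imread("input.jpg")  # any test image you have at hand
blob = cv2.dnn.blobFromImage(image, 1.0, (384, 384))
net.setInput(blob)
output = net.forward()[0]  # 384x384 inverse depth map

# Normalise to 0-255 for display: closer objects appear lighter
depth = cv2.normalize(output, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth.png", depth)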

Posted in Computer Vision, Open Source, OpenCV, OpenVINO.
