{"id":646,"date":"2023-07-12T16:48:34","date_gmt":"2023-07-12T06:48:34","guid":{"rendered":"https:\/\/www.samontab.com\/web\/?p=646"},"modified":"2023-07-12T16:48:36","modified_gmt":"2023-07-12T06:48:36","slug":"using-openvino-with-the-opencv-dnn-module","status":"publish","type":"post","link":"https:\/\/www.samontab.com\/web\/2023\/07\/using-openvino-with-the-opencv-dnn-module\/","title":{"rendered":"Using OpenVINO with the OpenCV DNN module"},"content":{"rendered":"\n<p>OpenCV 4.8.0 has been released recently. Also, OpenVINO just released 2023.0.1 last week so it&#8217;s a good time to see how they can be used together to perform inference on a IR optimised model. If you haven&#8217;t installed OpenVINO yet, you can learn how to do it <a href=\"https:\/\/www.samontab.com\/web\/2023\/06\/how-to-build-openvino-2023-0-in-ubuntu-22-04-2-lts-and-run-an-example\/\">here<\/a>. If you haven&#8217;t installed OpenCV, you can follow <a href=\"https:\/\/www.samontab.com\/web\/2023\/02\/installing-opencv-4-7-0-in-ubuntu-22-04-lts\/\">this guide<\/a>.<\/p>\n\n\n\n<p>For this, I&#8217;m going to use a monocular depth estimation model, <a href=\"https:\/\/arxiv.org\/abs\/1907.01341\">MiDaS<\/a>. This model takes as input a color image, and it outputs an inverse depth estimation for every pixel. The closer the object is to the camera, the lighter the pixel, and vice-versa. 
It looks like this:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img data-dominant-color=\"3d3d40\" data-has-transparency=\"false\" style=\"--dominant-color: #3d3d40;\" loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"640\" src=\"https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2023\/07\/l_d-jpg.webp\" alt=\"\" class=\"wp-image-648 not-transparent\" srcset=\"https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2023\/07\/l_d-jpg.webp 1024w, https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2023\/07\/l_d-300x188.webp 300w, https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2023\/07\/l_d-768x480.webp 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Let&#8217;s grab the original ONNX model and convert it to the Intermediate Representation (IR) to be used with OpenVINO:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nomz_downloader --name midasnet\nomz_converter --name midasnet\n<\/pre><\/div>\n\n\n<p>We can now use OpenVINO from inside OpenCV&#8217;s DNN module:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\ncv::dnn::Net net = cv::dnn::readNetFromModelOptimizer(&quot;..\/public\/midasnet\/FP32\/midasnet.xml&quot;, &quot;..\/public\/midasnet\/FP32\/midasnet.bin&quot;);\nnet.setPreferableBackend(cv::dnn::Backend::DNN_BACKEND_INFERENCE_ENGINE);\n<\/pre><\/div>\n\n\n<p>Then we can proceed exactly as we normally would with the OpenCV DNN module:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\ncv::Mat blob = cv::dnn::blobFromImage(originalImage, 1., cv::Size(384, 384));\nnet.setInput(blob);\ncv::Mat output = net.forward();\n<\/pre><\/div>\n\n\n<p>And that&#8217;s pretty much all you need to use OpenVINO from inside OpenCV&#8217;s DNN module. 
It&#8217;s almost the same: you only need to change how the model is read, and set the backend to the Inference Engine instead of the default OpenCV DNN one.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenCV 4.8.0 has been released recently. Also, OpenVINO just released 2023.0.1 last week, so it&#8217;s a good time to see how they can be used together to perform inference on an IR-optimised model. If you haven&#8217;t installed OpenVINO yet, you can learn how to do it here. If you haven&#8217;t installed OpenCV, you can [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[29,21,30,24],"tags":[46,68,25,92],"class_list":["post-646","post","type-post","status-publish","format-standard","hentry","category-computer-vision","category-open-source","category-opencv","category-openvino","tag-depth","tag-open-source","tag-openvino","tag-performance"],"_links":{"self":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/posts\/646","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/comments?post=646"}],"version-history":[{"count":0,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/posts\/646\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/media?parent=646"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/categories?post=646"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/tags?post=646"}],"curies":[{"name":"wp","
href":"https:\/\/api.w.org\/{rel}","templated":true}]}}