{"id":375,"date":"2015-11-05T08:56:14","date_gmt":"2015-11-04T21:56:14","guid":{"rendered":"http:\/\/www.samontab.com\/web\/?p=375"},"modified":"2021-03-03T09:34:15","modified_gmt":"2021-03-02T22:34:15","slug":"interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur","status":"publish","type":"post","link":"https:\/\/www.samontab.com\/web\/2015\/11\/interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur\/","title":{"rendered":"Interfacing Intel RealSense 3D camera with Google Camera&#8217;s Lens Blur"},"content":{"rendered":"<p>There is an interesting mode in the <a href=\"https:\/\/play.google.com\/store\/apps\/details?id=com.google.android.GoogleCamera&amp;hl=en\" target=\"_blank\" rel=\"noopener\">Google Camera app<\/a>, called <strong>Lens Blur<\/strong>, that allows you to refocus a picture after it has been taken, essentially simulating what a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Light-field_camera\" target=\"_blank\" rel=\"noopener\">light-field camera<\/a> can do, but using just a regular smartphone.<\/p>\n<p>To do this, the app uses an array of computer vision techniques such as Structure from Motion (<a href=\"https:\/\/en.wikipedia.org\/wiki\/Structure_from_motion\" target=\"_blank\" rel=\"noopener\">SfM<\/a>) and Multi-View Stereo (<a href=\"https:\/\/en.wikipedia.org\/wiki\/3D_reconstruction_from_multiple_images\" target=\"_blank\" rel=\"noopener\">MVS<\/a>) to create a depth map of the scene. Having this entire pipeline running on a smartphone is remarkable, to say the least. You can read more about how this is done <a href=\"http:\/\/googleresearch.blogspot.com.au\/2014\/04\/lens-blur-in-new-google-camera-app.html\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<p>Once the depth map and the photo are acquired, the user can select the focus location and the desired depth of field. 
A <a href=\"https:\/\/en.wikipedia.org\/wiki\/Focal_length#Thin_lens_approximation\" target=\"_blank\" rel=\"noopener\">thin lens<\/a> model is then used to simulate a real lens matching the desired focus location and depth of field, generating a new image.<\/p>\n<p>In theory, any photograph with its corresponding depth map could be used with this technique. To test this, I decided to use an Intel RealSense F200 camera, and it worked. Here are the results:<br \/>\n<br \/>\nFocused on the front:<br \/>\n<a href=\"http:\/\/www.samontab.com\/web\/2015\/11\/interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur\/frontinfocus\/\" rel=\"attachment wp-att-376\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-376 size-medium\" src=\"http:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/frontInFocus-300x225.png\" alt=\"frontInFocus\" width=\"300\" height=\"225\" srcset=\"https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/frontInFocus-300x225.png 300w, https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/frontInFocus.png 640w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>\nFocused on the back:<br \/>\n<a href=\"http:\/\/www.samontab.com\/web\/2015\/11\/interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur\/backinfocus\/\" rel=\"attachment wp-att-377\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-377 size-medium\" src=\"http:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/backInFocus-300x225.png\" alt=\"backInFocus\" width=\"300\" height=\"225\" srcset=\"https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/backInFocus-300x225.png 300w, https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/backInFocus.png 640w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>Those two images were created on the smartphone using the Lens Blur feature of the Google Camera app. 
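<\/p>
<p>For reference, the RangeLinear depth encoding that the app expects (described in the Google depth-map metadata documentation linked later in this post) can be sketched in a few lines. This is a rough sketch under my own assumptions, not the exact code used for these images: the helper name is hypothetical, and picking the near and far planes from the extremes of the map itself is just one possible choice.<\/p>

```python
# Sketch of the RangeLinear encoding from the depth-map metadata
# documentation: each depth value is normalized to
# (d - near) / (far - near) and stored as an 8-bit value.
# encode_range_linear is a hypothetical helper name.
import numpy as np

def encode_range_linear(depth_m):
    # Treat zero readings as invalid (the camera reports 0 where it
    # has no depth) and derive near and far from the valid values.
    valid = depth_m > 0
    near = float(depth_m[valid].min())
    far = float(depth_m[valid].max())
    dn = (depth_m - near) / (far - near)  # normalize to [0, 1]
    dn = np.clip(dn, 0.0, 1.0)            # invalid pixels collapse to 0
    return np.round(dn * 255).astype(np.uint8), near, far
```

<p>The resulting 8-bit image, together with its near and far values, is what ends up embedded in the photo, as detailed below.<\/p>
<p>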
I created the input image externally, but the app was happy to process it because I used the same encoding that the app expects.<\/p>\n<p>To do that, I first captured a color image and a depth image from the RealSense camera. Then, I projected the depth image into the frame of the color camera:<br \/>\n<br \/>\nColor photograph from RealSense F200:<br \/>\n<a href=\"http:\/\/www.samontab.com\/web\/2015\/11\/interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur\/realsense_color\/\" rel=\"attachment wp-att-378\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsense_color-300x225.png\" alt=\"realsense_color\" width=\"300\" height=\"225\" class=\"alignnone size-medium wp-image-378\" srcset=\"https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsense_color-300x225.png 300w, https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsense_color.png 640w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><br \/>\n<br \/>\nProjected depth image from RealSense F200:<br \/>\n<a href=\"http:\/\/www.samontab.com\/web\/2015\/11\/interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur\/realsense_depth\/\" rel=\"attachment wp-att-379\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsense_depth-300x225.png\" alt=\"realsense_depth\" width=\"300\" height=\"225\" class=\"alignnone size-medium wp-image-379\" srcset=\"https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsense_depth-300x225.png 300w, https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsense_depth.png 640w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><br \/>\n<br \/>\nThe next step is to encode the depth image into a format that Google Camera understands, so I followed the encoding instructions from the <a href=\"https:\/\/developers.google.com\/depthmap-metadata\/encoding\" target=\"_blank\" 
rel=\"noopener\">documentation<\/a>. The RangeLinear encoding of the previous depth map looks something like this:<br \/>\n<a href=\"http:\/\/www.samontab.com\/web\/2015\/11\/interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur\/realsende_depth_linear\/\" rel=\"attachment wp-att-380\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsende_depth_linear-300x225.png\" alt=\"realsende_depth_linear\" width=\"300\" height=\"225\" class=\"alignnone size-medium wp-image-380\" srcset=\"https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsende_depth_linear-300x225.png 300w, https:\/\/www.samontab.com\/web\/wp-content\/uploads\/2015\/11\/realsende_depth_linear.png 640w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>The last step is to embed the encoded image into the metadata of the original color image and copy the final photo into the smartphone gallery. After that, you can simply open the app, select the image, and refocus it!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There is an interesting mode in the Google Camera app, called Lens Blur, that allows you to refocus a picture after it was taken, basically simulating what a light-field camera is able to do but using just a regular smartphone. 
To do this, the app uses an array of computer vision techniques such as Structure-from-Motion(SfM), [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[29,22,21,6,4],"tags":[],"class_list":["post-375","post","type-post","status-publish","format-standard","hentry","category-computer-vision","category-iot","category-open-source","category-photography","category-programming"],"_links":{"self":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/posts\/375","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/comments?post=375"}],"version-history":[{"count":0,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/posts\/375\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/media?parent=375"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/categories?post=375"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.samontab.com\/web\/wp-json\/wp\/v2\/tags?post=375"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}