{"id":210530,"date":"2017-08-08T04:12:32","date_gmt":"2017-08-08T08:12:32","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/4-d-camera-could-improve-robot-vision-virtual-reality-and-self-driving-cars-phys-org\/"},"modified":"2017-08-08T04:12:32","modified_gmt":"2017-08-08T08:12:32","slug":"4-d-camera-could-improve-robot-vision-virtual-reality-and-self-driving-cars-phys-org","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/virtual-reality\/4-d-camera-could-improve-robot-vision-virtual-reality-and-self-driving-cars-phys-org\/","title":{"rendered":"4-D camera could improve robot vision, virtual reality and self-driving cars &#8211; Phys.Org"},"content":{"rendered":"<p><p>August 7, 2017. Two 138-degree light field panoramas (top and center) and a depth estimate of the second panorama (bottom). Credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego<\/p>\n<p>Engineers at Stanford University and the University of California San Diego have developed a camera that generates four-dimensional images and can capture 138 degrees of information. The new camera, the first-ever single-lens, wide field of view light field camera, could generate information-rich images and video frames that will enable robots to better navigate the world and understand certain aspects of their environment, such as object distance and surface texture.<\/p>\n<p>The researchers also see this technology being used in autonomous vehicles and in augmented and virtual reality. They presented the work at the computer vision conference CVPR 2017 in July.<\/p>\n<p>\"We want to consider what would be the right camera for a robot that drives or delivers packages by air. 
We're great at making cameras for humans, but do robots need to see the way humans do? Probably not,\" said Donald Dansereau, a postdoctoral fellow in electrical engineering at Stanford and the first author of the paper.<\/p>\n<p>The project is a collaboration between the labs of electrical engineering professors Gordon Wetzstein at Stanford and Joseph Ford at UC San Diego.<\/p>\n<p>UC San Diego researchers designed a spherical lens that gives the camera an extremely wide field of view, encompassing nearly a third of the circle around the camera. Ford's group had previously developed the spherical lenses under the DARPA \"SCENICC\" (Soldier CENtric Imaging with Computational Cameras) program to build a compact video camera that captures 360-degree images in high resolution, with 125 megapixels in each video frame. In that project, the video camera used fiber optic bundles to couple the spherical images to conventional flat focal planes, providing high performance but at high cost.<\/p>\n<p>The new camera uses a version of the spherical lenses that eliminates the fiber bundles through a combination of lenslets and digital signal processing. Combining the optics design and system integration hardware expertise of Ford's lab with the signal processing and algorithmic expertise of Wetzstein's lab resulted in a digital solution that not only creates these extra-wide images but also enhances them.<\/p>\n<p>The new camera also relies on a technology developed at Stanford called light field photography, which is what adds a fourth dimension to this camera: it captures the two-axis direction of the light hitting the lens and combines that information with the 2-D image. 
Another noteworthy feature of light field photography is that it allows users to refocus images after they are taken, because the images include information about the position and direction of the light. Robots could use this technology to see through rain and other conditions that could obscure their vision.<\/p>\n<p>\"One of the things you realize when you work with an omnidirectional camera is that it's impossible to focus in every direction at once: something is always close to the camera, while other things are far away,\" Ford said. \"Light field imaging allows the captured video to be refocused during replay, as well as single-aperture depth mapping of the scene. These capabilities open up all kinds of applications in VR and robotics.\"<\/p>\n<p>\"It could enable various types of artificially intelligent technology to understand how far away objects are, whether they're moving and what they're made of,\" Wetzstein said. \"This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.\"<\/p>\n<p>And while this camera can work like a conventional camera at far distances, it is also designed to improve close-up images. It would be particularly useful for robots that have to navigate through small areas, landing drones and self-driving cars. As part of an augmented or virtual reality system, its depth information could enable more seamless renderings of real scenes and better integration between those scenes and virtual components.<\/p>\n<p>The camera is currently at the proof-of-concept stage, and the team is planning to create a compact prototype to test on a robot. 
<\/p>\n<p>More information: Technical paper: <a href=\"http:\/\/www.computationalimaging.org\/w\" rel=\"nofollow\">http:\/\/www.computationalimaging.org\/w<\/a> 04\/LFMonocentric.pdf<\/p>\n<p>Visit link:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/phys.org\/news\/2017-08-d-camera-robot-vision-virtual.html\" title=\"4-D camera could improve robot vision, virtual reality and self-driving cars - Phys.Org\">4-D camera could improve robot vision, virtual reality and self-driving cars - Phys.Org<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>August 7, 2017. Two 138-degree light field panoramas (top and center) and a depth estimate of the second panorama (bottom). Credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego. Engineers at Stanford University and the University of California San Diego have developed a camera that generates four-dimensional images and can capture 138 degrees of information.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/virtual-reality\/4-d-camera-could-improve-robot-vision-virtual-reality-and-self-driving-cars-phys-org\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187744],"tags":[],"class_list":["post-210530","post","type-post","status-publish","format-standard","hentry","category-virtual-reality"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/210530"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=210530"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/210530\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=210530"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=210530"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=210530"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}