{"id":1118687,"date":"2023-10-18T02:23:25","date_gmt":"2023-10-18T06:23:25","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/a-deeper-squared-dive-into-ai-harvard-gazette-harvard-gazette\/"},"modified":"2023-10-18T02:23:25","modified_gmt":"2023-10-18T06:23:25","slug":"a-deeper-squared-dive-into-ai-harvard-gazette-harvard-gazette","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/a-deeper-squared-dive-into-ai-harvard-gazette-harvard-gazette\/","title":{"rendered":"A DEEPer (squared) dive into AI &#8211; Harvard Gazette &#8211; Harvard Gazette"},"content":{"rendered":"<p>When an algorithm-driven microscopy technique developed in 2021 (and able to run on a fraction of the images earlier techniques required) isn't fast enough, what do you do?<\/p>\n<p>Dive DEEPer, and square it. At least, that was the solution used by Dushan Wadduwage, John Harvard Distinguished Science Fellow at the FAS Center for Advanced Imaging.<\/p>\n<p>Scientists have worked for decades to image the depths of a living brain. They first tried fluorescence microscopy, a century-old technique that relies on fluorescent molecules and light. However, the wavelengths weren't long enough, and the light scattered before it reached an appreciable distance.<\/p>\n<p>The invention of two-photon microscopy in 1990 let longer wavelengths of light shine onto the tissue, causing fluorescent molecules to absorb not one but two photons. The longer wavelengths used to excite the molecules scattered less and could penetrate farther.<\/p>\n<p>But two-photon microscopy can typically excite only one point on the tissue at a time, which makes for a long process requiring many measurements. A faster way to image would be to illuminate multiple points at once using a wider field of view, but this, too, had its drawbacks.
<\/p>\n<p>&#8220;If you excite multiple points at the same time, then you can't resolve them,&#8221; Wadduwage said. &#8220;When it comes out, all the light is scattered, and you don't know where it comes from.&#8221;<\/p>\n<p>To overcome this difficulty, Wadduwage's group began using a special type of microscopy, described in Science Advances in 2021. The team excited multiple points on the tissue in a wide-field mode, using different pre-encoded excitation patterns. This technique, called De-scattering with Excitation Patterning, or DEEP, works with the help of a computational algorithm.<\/p>\n<p>&#8220;The idea is that we use multiple excitation codes, or multiple patterns to excite, and we detect multiple images,&#8221; Wadduwage said. &#8220;We can then use the information about the excitation patterns and the detected images and computationally reconstruct a clean image.&#8221;<\/p>\n<p>The results are comparable in quality to images produced by point-scanning two-photon microscopy. Yet they can be produced with just hundreds of images, rather than the hundreds of thousands typically needed for point-scanning. With the new technique, Wadduwage's group was able to look as far as 300 microns deep into live mouse brains.<\/p>\n<p>Still not good enough. Wadduwage wondered: Could DEEP produce a clear image with only tens of images?<\/p>\n<p>In a recent paper published in Light: Science and Applications, he turned to machine learning to make the imaging technique even faster. He and his co-authors used AI to train a neural network-driven algorithm on multiple sets of images, eventually teaching it to reconstruct a perfectly resolved image with only 32 scattered images (rather than the 256 reported in their first paper). They named the new method DEEP-squared: Deep learning powered de-scattering with excitation patterning.
<\/p>\n<p>The team took images produced by typical two-photon point-scanning microscopy, providing what Wadduwage called the &#8220;ground truth.&#8221; The DEEP microscope then used physics to make a computational model of the image-formation process and put it to work simulating scattered input images. These simulated images trained the DEEP-squared AI model. Once the AI produced reconstructed images that resembled Wadduwage's ground-truth reference, the researchers used it to capture new images of blood vessels in a mouse brain.<\/p>\n<p>&#8220;It is like a step-by-step process,&#8221; Wadduwage said. &#8220;In the first paper we worked on the optics side and reached a good working state, and in the second paper we worked on the algorithm side and tried to push the boundary all the way and understand the limits. We now have a better understanding that this is probably the best we can do with the current data we acquire.&#8221;<\/p>\n<p>Still, Wadduwage has more ideas for boosting the capabilities of DEEP-squared, including improving instrument design to acquire data faster. He said DEEP-squared exemplifies cross-disciplinary cooperation, as will any future innovations on the technology.<\/p>\n<p>&#8220;Biologists who did the animal experiments, physicists who built the optics, and computer scientists who developed the algorithms all came together to build one solution,&#8221; he said.
<\/p>\n<p>Originally posted here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/news.harvard.edu\/gazette\/story\/2023\/10\/a-deeper-squared-dive-into-ai\/\" title=\"A DEEPer (squared) dive into AI &#8211; Harvard Gazette - Harvard Gazette\">A DEEPer (squared) dive into AI &#8211; Harvard Gazette - Harvard Gazette<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>When an algorithm-driven microscopy technique developed in 2021 (and able to run on a fraction of the images earlier techniques required) isn't fast enough, what do you do? Dive DEEPer, and square it. At least, that was the solution used by Dushan Wadduwage, John Harvard Distinguished Science Fellow at the FAS Center for Advanced Imaging. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/a-deeper-squared-dive-into-ai-harvard-gazette-harvard-gazette\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-1118687","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1118687"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1118687"}],"version-history":[{"count":0,"href":"https:\/\/ww
w.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1118687\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1118687"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1118687"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1118687"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}