{"id":223584,"date":"2017-06-26T18:30:44","date_gmt":"2017-06-26T22:30:44","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/googles-latest-ai-experiment-lets-software-autocomplete-your-doodles-the-verge.php"},"modified":"2022-06-15T20:45:49","modified_gmt":"2022-06-16T00:45:49","slug":"googles-latest-ai-experiment-lets-software-autocomplete-your-doodles-the-verge","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/googles-latest-ai-experiment-lets-software-autocomplete-your-doodles-the-verge.php","title":{"rendered":"Google&#8217;s latest AI experiment lets software autocomplete your doodles &#8211; The Verge"},"content":{"rendered":"<p><p>    Google Brain, the search giants internal artificial    intelligence division, has been making substantial progress on    computer vision techniques that let software parse the contents    of hand-drawn images and then recreate those drawings on the    fly. The latest release from the divisions AI experiments    series is a new web app that lets you collaborate    with a neural network to draw doodles of everyday objects.    Start with any shape, and the software will then auto-complete    the drawing to the best of its ability using predictions and    its past experience digesting millions of user-generated    examples.  <\/p>\n<p>    Googles AI is constantly improving thanks to human-drawn    doodles  <\/p>\n<p>    The software is called Sketch-RNN, and Google researchers        first announced it back in April. At the time, the team    behind Sketch-RNN revealed that the underlying neural net is    being continuously trained using human-made doodles sourced    from a different AI experiment     first released back in November called Quick, Draw! 
That program asked human users to draw various simple objects from a text prompt, while the software attempted to guess what it was every step of the way. Another spinoff from Quick, Draw! is a web app called AutoDraw, which identifies poorly hand-drawn doodles and suggests clean clip art replacements.<\/p>\n<p>All of these programs improve over time as more people use them and keep feeding the AI learning mechanism instructive data. The end goal, it appears, is to teach Google software to contextualize real-world objects and then recreate them using its understanding of how the human brain draws connections between lines, shapes, and other image components. From there, Google could reasonably deploy even better versions of its existing image recognition tools, or perhaps even train future AI algorithms to help robots tag and identify their surroundings.<\/p>\n<p>In the case of this new web app, users can now work alongside Sketch-RNN to see how well it takes a starting shape and transforms it into the object you're trying to draw. For instance, select 'pineapple' from the drop-down list of preselected subjects and start with just an oval. From there, Sketch-RNN attempts to make sense of the object's orientation and decides where to try and doodle in the fruit's thorny protruding leaves:<\/p>\n<p>The image list is pretty diverse, with everything from 'fire hydrant' to 'power outlet' to 'the Mona Lisa'. Sketch-RNN is also pretty hit or miss when it comes to more complicated drawings. This is the software trying its (virtual and disembodied) hand at doodling a roller coaster:<\/p>\n<p>There are a number of other Sketch-RNN demos you can check out to get a deeper understanding of how the program functions. One, called 'Multiple Predict', lets Sketch-RNN generate numerous different versions of the same subject.
For instance, when given a prompt to draw a mosquito, you just need to draw what looks like a thorax or abdomen and Sketch-RNN will take it from there while showing you how else it predicts the image could be completed:<\/p>\n<p>There are two other demos, titled 'Interpolation' and 'Variational Auto-Encoder', that will have Sketch-RNN try to move between two different types of similar drawings in real time, and also try to mimic your drawing with slight tweaks it comes up with on its own:<\/p>\n<p>The whole set of programs is a fascinating look under the hood of the modern computer vision and image and object recognition tool sets tech companies have at their disposal. If you don't mind drawing crudely with a computer mouse or trackpad and have some free time on your hands, it's worth an afternoon trying to see how much better, or demonstrably worse, Sketch-RNN can make your doodles.<\/p>\n<p>See the article here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.theverge.com\/2017\/6\/26\/15877020\/google-ai-experiment-sketch-rnn-doodles-quick-draw\" title=\"Google's latest AI experiment lets software autocomplete your doodles - The Verge\">Google's latest AI experiment lets software autocomplete your doodles - The Verge<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google Brain, the search giant's internal artificial intelligence division, has been making substantial progress on computer vision techniques that let software parse the contents of hand-drawn images and then recreate those drawings on the fly.
The latest release from the division's 'AI experiments' series is a new web app that lets you collaborate with a neural network to draw doodles of everyday objects <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/googles-latest-ai-experiment-lets-software-autocomplete-your-doodles-the-verge.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-223584","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":"Danzig","_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/223584"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=223584"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/223584\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=223584"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=223584"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-
blog\/wp-json\/wp\/v2\/tags?post=223584"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}