{"id":1121614,"date":"2024-01-30T22:25:19","date_gmt":"2024-01-31T03:25:19","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/googles-lumiere-brings-ai-video-closer-to-real-than-unreal-the-verge\/"},"modified":"2024-01-30T22:25:19","modified_gmt":"2024-01-31T03:25:19","slug":"googles-lumiere-brings-ai-video-closer-to-real-than-unreal-the-verge","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/google\/googles-lumiere-brings-ai-video-closer-to-real-than-unreal-the-verge\/","title":{"rendered":"Google&#8217;s Lumiere brings AI video closer to real than unreal &#8211; The Verge"},"content":{"rendered":"<p><p>Google's new video generation AI model Lumiere uses a new diffusion model called Space-Time-U-Net, or STUNet, that figures out where things are in a video (space) and how they simultaneously move and change (time). Ars Technica reports this method lets Lumiere create the video in one process instead of stitching smaller still frames together.<\/p>\n<p>Lumiere starts by creating a base frame from the prompt. Then, it uses the STUNet framework to approximate where objects within that frame will move, creating more frames that flow into each other for the appearance of seamless motion. Lumiere also generates 80 frames, compared to 25 frames from Stable Video Diffusion.<\/p>\n<p>Admittedly, I am more of a text reporter than a video person, but the sizzle reel Google published, along with a pre-print scientific paper, shows that AI video generation and editing tools have gone from uncanny valley to near realistic in just a few years. It also establishes Google's tech in the space already occupied by competitors like Runway, Stable Video Diffusion, or Meta's Emu.
Runway, one of the first mass-market text-to-video platforms, released Runway Gen-2 in March last year and has started to offer more realistic-looking videos. Even so, Runway videos still have a hard time portraying movement.<\/p>\n<p>Google was kind enough to put clips and prompts on the Lumiere site, which let me put the same prompts through Runway for comparison. Here are the results:<\/p>\n<p>Yes, some of the clips presented have a touch of artificiality, especially if you look closely at skin texture or if the scene is more atmospheric. But look at that turtle! It moves like a turtle actually would in water! It looks like a real turtle! I sent the Lumiere intro video to a friend who is a professional video editor. While she pointed out that you can clearly tell it's not entirely real, she found it impressive, saying that if I hadn't told her it was AI, she would have thought it was CGI. (She also said: \"It's going to take my job, isn't it?\")<\/p>\n<p>Other models stitch videos together from generated key frames where the movement already happened (think of drawings in a flip book), while STUNet lets Lumiere focus on the movement itself based on where the generated content should be at a given time in the video.<\/p>\n<p>Google has not been a big player in the text-to-video category, but it has slowly released more advanced AI models and leaned into a more multimodal focus. Its Gemini large language model will eventually bring image generation to Bard. Lumiere is not yet available for testing, but it shows Google's capability to develop an AI video platform that is comparable to, and arguably a bit better than, generally available AI video generators like Runway and Pika. And just a reminder, this was where Google was with AI video two years ago.
<\/p>\n<p>Beyond text-to-video generation, Lumiere will also allow for image-to-video generation; stylized generation, which lets users make videos in a specific style; cinemagraphs, which animate only a portion of a video; and inpainting, which masks out an area of the video to change its color or pattern.<\/p>\n<p>Google's Lumiere paper, though, noted that \"there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases to ensure a safe and fair use.\" The paper's authors didn't explain how this can be achieved.<\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read the original post: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.theverge.com\/2024\/1\/27\/24052140\/google-lumiere-ai-video-generation-runway-pika\" title=\"Google's Lumiere brings AI video closer to real than unreal - The Verge\">Google's Lumiere brings AI video closer to real than unreal - The Verge<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google's new video generation AI model Lumiere uses a new diffusion model called Space-Time-U-Net, or STUNet, that figures out where things are in a video (space) and how they simultaneously move and change (time). Ars Technica reports this method lets Lumiere create the video in one process instead of stitching smaller still frames together.
Lumiere starts with creating a base frame from the prompt <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/google\/googles-lumiere-brings-ai-video-closer-to-real-than-unreal-the-verge\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[345634],"tags":[],"class_list":["post-1121614","post","type-post","status-publish","format-standard","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1121614"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1121614"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1121614\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1121614"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1121614"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1121614"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}