{"id":185298,"date":"2017-03-29T11:23:36","date_gmt":"2017-03-29T15:23:36","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/its-a-riot-the-stressful-ai-simulation-built-to-understand-your-emotions-the-guardian-blog\/"},"modified":"2017-03-29T11:23:36","modified_gmt":"2017-03-29T15:23:36","slug":"its-a-riot-the-stressful-ai-simulation-built-to-understand-your-emotions-the-guardian-blog","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/its-a-riot-the-stressful-ai-simulation-built-to-understand-your-emotions-the-guardian-blog\/","title":{"rendered":"It&#8217;s a riot: the stressful AI simulation built to understand your emotions &#8211; The Guardian (blog)"},"content":{"rendered":"<p><p>  A protester hurls a tear gas canister fired by police in  Ferguson, Missouri, on 13 August 2014. Photograph: AP<\/p>\n<p>    An immersive film project is    attempting to understand how people react in stressful    situations by using artificial intelligence (AI), film and    gaming technologies to place participants inside a simulated    riot and then detecting their emotions in real time.  <\/p>\n<p>    Called Riot, the project is the result of a collaboration    between award winning multidisciplinary immersive filmmaker    Karen Palmer and Professor Hongying Meng from Brunel    University. The two have worked together previously on Syncself2, a dynamic interactive video    installation.  <\/p>\n<p>    Riot was inspired by global unrest, and was specifically    inspired by Palmers experience of watching live footage of the    Ferguson    protests in 2015. I felt a big sense of frustration, anger    and helplessness. I needed to create a piece of work that would    encourage dialogue around these types of social issues. Riots    all over the world now seem to be [the] last form of    [community] expression, she said.  <\/p>\n<p>    Whereas Syncself2 used an EEG headset to place the user in the    action, with Riot Palmer wanted to try and achieve a more    seamless interface. Hongying and I discussed AI and facial    recognition; the tech came from creating an experience which    simulated a riot  it needed to be as though you were there.  <\/p>\n<p>    Designed as an immersive social digital experience, the    objective is to get through a simulated riot alive. This is    achieved through interacting with a variety of characters who    can help you reach home. The video narrative is controlled by    the emotional state of the user, which is monitored through AI    software in real time.  <\/p>\n<p>    Machine learning is the key technology for emotion detection    systems. From the dataset collected from audiences, AI methods    are used to learn from the data and build the computational    model which can be integrated into the interactive film system    and detect the emotions in real-time, explained Meng.  <\/p>\n<p>    The programme in development at Brunel can read seven emotions,    but not all are appropriate for the experience created by the    Riot team. Currently,Riots pilot interface can recognise three    emotional states: fear, anger and calm.  <\/p>\n<p>    I tried it along with Dr Erinma Ochu, a lecturer in science    communication and future media at the University of Salford,    whose PhD was in applied neuroscience.  
<\/p>\n<p>Riot is played out on a large screen, with 3D audio surrounding us as a camera watches our facial expressions and computes in real time how we are reacting. Based on this feedback, the algorithm determines how the story unfolds.<\/p>\n<p>We see looters, anarchists and police playing their parts and interacting directly with us. What happens next is up to us: our reactions and responses determine the story, and as the screen is not enclosed in a headset but open for others to see, it also creates a public narrative.<\/p>\n<p>Ochu reacted with jumps and gasps to what was happening around her and ultimately didn't make it home. \"It's interesting to try something you wouldn't do in real life, so you can explore a part of your character that you might suppress if you were going to get arrested,\" she said.<\/p>\n<p>As a scientist and storyteller, she felt Riot was ahead of the curve: \"This has leapfrogged virtual reality,\" she said.<\/p>\n<p>According to the Riot team, virtual reality (VR) developers have struggled to create satisfying stories in an environment in which, unlike film, you can't control where the user looks or what route they take through the narrative.<\/p>\n<p>In order to overcome these issues and create a coherent, convincing storyline, the team from Brunel retrained their facial recognition software to work for Riot. \"[This] provides a perfect platform to show our research and development. Art makes our work easier to understand. We have been doing research in emotion detection from facial expression, voice, body gesture, EEG, etc for many years,\" said Meng. He hopes the project's success will make people see the benefits of AI, leading to the development of smart homes, buildings and cities.<\/p>\n<p>For now, the emotion detection tool being worked on at Brunel can be used in clinical settings to measure pain and emotional states such as depression in patients. Similar tech has already been used in a therapeutic setting; a study last year at the University of Oxford used VR to help those with persecutory delusions. Those who trialled real-life scenarios combined with cognitive therapy saw significant improvement in their symptoms.<\/p>\n<p>But can Riot's current AI facial recognition tech work for everyone? People with Parkinson's, sight or hearing issues might need an EEG headset and other physical monitors to gain the same immersive experience, unless tech development rapidly catches up with Palmer's ultimate vision of a 360-degree screen, which would also allow a group of participants to play together.<\/p>\n<p>Perhaps Riot and its tech could herald a new empathetic, responsible and responsive future for storytelling and gaming, in which the viewer or player is encouraged to bring about change both in the narrative and in themselves. After all, if you could truly see a story from another person's point of view, what might you learn about them and yourself? How might you carry those insights into the real world to make a difference?<\/p>\n<p>The V&A will be exhibiting Riot as part of the Digital Design Weekend in September 2017. The project is currently shortlisted for the Sundance New Frontier Storytelling Lab.
<\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read the rest here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.theguardian.com\/science\/blog\/2017\/mar\/29\/its-a-riot-the-stressful-ai-simulation-built-to-understand-your-emotions\" title=\"It's a riot: the stressful AI simulation built to understand your emotions - The Guardian (blog)\">It's a riot: the stressful AI simulation built to understand your emotions - The Guardian (blog)<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> A protester hurls a tear gas canister fired by police in Ferguson, Missouri, on 13 August 2014. Photograph: AP An immersive film project is attempting to understand how people react in stressful situations by using artificial intelligence (AI), film and gaming technologies to place participants inside a simulated riot and then detecting their emotions in real time <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/its-a-riot-the-stressful-ai-simulation-built-to-understand-your-emotions-the-guardian-blog\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-185298","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/185298"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=185298"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/185298\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=185298"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=185298"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=185298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}