{"id":188486,"date":"2017-04-19T10:07:13","date_gmt":"2017-04-19T14:07:13","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence-comes-to-hollywood-studio-daily\/"},"modified":"2017-04-19T10:07:13","modified_gmt":"2017-04-19T14:07:13","slug":"artificial-intelligence-comes-to-hollywood-studio-daily","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/artificial-intelligence-comes-to-hollywood-studio-daily\/","title":{"rendered":"Artificial Intelligence Comes to Hollywood &#8211; Studio Daily"},"content":{"rendered":"<p><p>    Is Your Job Safe?  <\/p>\n<p>    Last September, when the 20th Century Fox sci-fi thriller    Morgan premiered, artificial intelligence (AI) took    center stage for the first time not as a plot point but a tool.    The film studio revealed that it had used IBMs Watson a    supercomputer endowed with AI capabilities to make the    movies trailer. IBM research scientists taught Watson about    horror movie trailers by feeding it 100 such trailers, cut into    scenes. Watson then analyzed the data, from the point of view    of visuals, audio and emotions, to learn what makes a horror    trailer scary. Then the scientists fed in the entire 90-minute    Morgan.     According to Engadget, Watson instantly zeroed in    on 10 scenes totaling six minutes of footage.  <\/p>\n<p>    The media buzz that followed both overstated and understated    what had actually happened. In fact, an actual human being    edited the trailer, using the scenes Watson chose. So AI didnt    actually edit the trailer. But it was also a benchmark,    tantalizing the Hollywood creatives (and studio executives)    interested in how artificial intelligence might change    entertainment.  
<\/p>\n<p>Philip Hodgetts<\/p>\n<p>The discussion about AI is still a bit premature; when today's products are described, \"machine learning\" is the more accurate term. The first person to posit that machines could actually learn was computer gaming pioneer Arthur Samuel, in 1959. Based on pattern recognition and dependent on enough data to train the computer, machine learning is used for any repetitive task. Philip Hodgetts, who founded two companies integrating machine learning, Intelligent Assistance and Lumberjack System, notes that there's \"a big leap from doing a task really well to a generalized intelligence that can do multiple self-directed tasks.\" Most experts agree that autonomous cars are the closest we have today to a real-world artificial intelligence.<\/p>\n<p>Machine learning plays an important role in a growing number of applications aimed at the media and entertainment business, nearly all of them invisible to the end user. Perhaps the most obvious are the applications aimed at distribution of digital media. Iris.TV, which partners with numerous media companies from Time Warner's Telepictures Productions to Hearst Digital Media, uses machine learning to create what it dubs \"personalized video programming.\" The company takes in the target company's digital assets and creates a taxonomy and structure, with the metadata forming the basis of recommendations. The APIs, which integrate with most video players, learn what the user watches, then create a playlist based on those preferences. The results are impressive: The Hollywood Reporter, for example, more than doubled its video views, from 80 million in October 2016 to 210 million in February 2017.<\/p>\n<p>Machine learning also plays an increasingly significant role in video post-production, much more so than production, which is still a hands-on, very human job. 
\"The production process is dependent on bipedal mobility,\" notes Hodgetts wryly. \"We've motorized cranes and so on, but it'll be harder to replace a runner on set.\" Even so, the process of creating digital imagery will feel the impact of machine learning in the not-so-distant future. Adobe, for example, is working with the Beckman Institute for Advanced Science and Technology to use a kind of machine learning to teach a software algorithm how to distinguish and eliminate backgrounds. With the goal of automating compositing, the software has been trained on a dataset of 49,300 images.<\/p>\n<p>Today's machine learning-enhanced tools fall under the umbrella of cognitive services, a term that covers any off-the-shelf programs that have already been trained at a task, whether it's facial recognition or motion detection. At NAB 2017, Finnish company Valossa will debut its Alexa-integrated real-time video recognition platform, Val.ai.<\/p>\n<p>Val.ai is intended to solve the problem of discoverability. \"Companies that have lots of media assets and want to monetize them better fall into this category,\" says Valossa founder and chief executive Mika Rautiainen. \"Or they can also re-use archived material for new content. Increasingly, we've found other scenarios emerging in the years we've been creating the service related to content analytics. Deep content understanding correlated with user behavior lets media companies serve contextual advertising and other end-user experiences around media.\" The Valossa video intelligence engine is in beta at 120 companies, the majority of which are in the U.S. and the U.K.<\/p>\n<p>Rautiainen states that content analytics can also be used to promote and sell items in a video, a capability that Valossa is not developing. 
\"But I was surprised how many companies are working on reinventing retail or the purchasing process,\" he says. Valossa also has a technology demo for facial-expression recognition, which Rautiainen calls \"a next-level intelligence,\" and Valossa Movie Finder, with a database of metadata from 140,000 movies.<\/p>\n<p>Yvonne Thomas<\/p>\n<p>Arvato Systems will debut its next-generation MAM system, Media Portal, at NAB 2017. Yvonne Thomas, the company's product manager for the broadcast solutions division, says Media Portal integrates analytics and machine learning via an API and indexes\/updates the respective media. It will also support visualization for the user in the form of facets that can handle a wide range of data.<\/p>\n<p>At Piksel, chief technology officer Mark Christie points out that machine learning capabilities have accelerated dramatically in recent years and, through natural-language processing techniques, can now enable a deeper understanding of content. In 2016, Piksel acquired Lingospot, with its patented and patent-pending natural language processing, semantic search, image analysis and machine-learning technologies, and integrated it into Piksel's Palette to collect proprietary metadata on a scene-by-scene basis. Its Fuse product, built on Piksel Palette, enriches metadata with cast and crew lists or other documentation from third-party sources and serves it across broadcast and OTT workflows.<\/p>\n<p>Although the advent of tools enhanced by machine learning is interesting, most people in the entertainment industry want to know how worried they should be about their jobs. Hodgetts has a simple answer: \"If you can teach someone your job in three days, it will be automated [via machine learning],\" he says. 
<\/p>\n<p>At USC School of Cinematic Arts, professor and editor Norman Hollyn has been thinking about the implications of collecting metadata for a long time. In principle, automation of what used to be a tedious, labor-intensive job could wreak major changes on the job of the assistant editor. Hollyn has a more positive spin on the integration of these new tools.<\/p>\n<p>\"About three years ago, I started realizing the value of machine learning and artificial intelligence,\" he says. \"With my background, I knew just how difficult it was for humans to collect data, and I started thinking about how much easier my work would be if database fields could be automatically filled.\"<\/p>\n<p>He agrees that machine learning will change the job of the assistant editor. \"Historically, even back in the 35mm days, the assistant editor was really an incredibly specialized librarian,\" he says. \"It's not a huge difference today. But once machine learning takes over, the librarian work will easily be automated.\"<\/p>\n<p>But the results, he thinks, won't be all bad. On some productions, he believes, there will be no assistants. On others, assistants may be involved in such tasks as world-building for cross-platform media or cutting trailers. \"When I think about what my students may be doing in five years, it's bad news if they think they want to be assistant editors on a TV job,\" he says. \"But they can play a role in building the world out of which come movies, TV series, games, VR and comic books. Different people have to organize that world-building, and that's not a machine-learning capability yet.\"<\/p>\n<p>The post-production environment always feels downward budgetary pressure and probably offers less flexibility for facility owners trying to keep afloat. \"AI will be good and bad for people in our industry,\" says AlphaDogs chief executive Terence Curren. 
\"The level of AI we currently have can already automate many tasks that used to employ people. Automated syncing and grouping of clips is just one example. As AI gets smarter, more jobs will be replaced, but the removal of the human element will also eliminate many mistakes that currently cost time down the pipeline. The bottom line is, if you do something that is repetitive all day, your job will be one of the first to get replaced. If you do something creative that requires constantly changing approaches, your job will be safe for a long time.\"<\/p>\n<p>For those worried about the ethical considerations of bringing machine learning and artificial intelligence into the workplace (as well as into potentially hundreds of consumer-facing products and services), those concerns are being addressed both by giant technology companies and by the IEEE. In September 2016, Google, Facebook, Amazon, IBM and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society to advance public understanding of the technologies and come up with standards. The Partnership says it plans to conduct research, recommend best practices and publish research under an open license in areas such as ethics, fairness and inclusivity; transparency, privacy and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology. Apple just joined the group.<\/p>\n<p>Meanwhile, the IEEE and its Standards Association created a new standards project, IEEE P7000, a working group that intends to define a process model by which engineers and technologists can address ethical considerations throughout the various stages of system initiation, analysis and design for big data, machine learning and artificial intelligence.<\/p>\n<p>Machine learning is here, and AI is coming, not just to the entertainment industry but to many others. 
There will be winners and losers, but the very human talent of creativity, a specialty in the entertainment industry, is safe for the foreseeable future.<\/p>\n<p>Go here to see the original:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.studiodaily.com\/2017\/04\/artificial-intelligence-comes-hollywood\/\" title=\"Artificial Intelligence Comes to Hollywood - Studio Daily\">Artificial Intelligence Comes to Hollywood - Studio Daily<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Is Your Job Safe?  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/artificial-intelligence-comes-to-hollywood-studio-daily\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-188486","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/188486"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=188486"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/188486\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.
com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=188486"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=188486"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=188486"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}