{"id":220499,"date":"2017-06-17T21:44:22","date_gmt":"2017-06-18T01:44:22","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/a-discussion-about-ais-conflicts-and-challenges-techcrunch.php"},"modified":"2017-06-17T21:44:22","modified_gmt":"2017-06-18T01:44:22","slug":"a-discussion-about-ais-conflicts-and-challenges-techcrunch","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/a-discussion-about-ais-conflicts-and-challenges-techcrunch.php","title":{"rendered":"A discussion about AI&#8217;s conflicts and challenges &#8211; TechCrunch"},"content":{"rendered":"<p><p>Thirty-five years ago, having a PhD in computer vision was considered the height of unfashion, as artificial intelligence languished at the bottom of the trough of disillusionment.<\/p>\n<p>Back then it could take a day for a computer vision algorithm to process a single image. How times change.<\/p>\n<p>“The competition for talent at the moment is absolutely ferocious,” agrees Professor Andrew Blake, whose computer vision PhD was obtained in 1983, but who is now, among other things, a scientific advisor to UK-based autonomous vehicle software startup FiveAI, which is aiming to trial driverless cars on London’s roads in 2019.<\/p>\n<p>Blake founded Microsoft’s computer vision group, and was managing director of Microsoft Research, Cambridge, where he was involved in the development of the Kinect sensor, which was something of an augury of computer vision’s rising star (even if Kinect itself did not achieve the kind of consumer success Microsoft might have hoped for).<\/p>\n<p>He’s now research director at the Alan Turing Institute in the UK, which aims to support data science research, which of course means machine learning and AI, and includes probing the ethics and societal implications of AI and big data.
<\/p>\n<p>So how can a startup like FiveAI hope to compete with tech giants like Uber and Google, which are of course also working on autonomous vehicle projects, in this fierce fight for AI expertise?<\/p>\n<p>And, thinking of society as a whole, is it a risk or an opportunity that such powerful tech giants are throwing everything they’ve got at trying to make AI breakthroughs? Might the AI agenda not be hijacked, and progress in the field monopolized, by a set of very specific commercial agendas?<\/p>\n<p>“I feel the ecosystem is actually quite vibrant,” argues Blake, though his opinion is of course tempered by the fact that he was himself a pioneering researcher working under the umbrella of a tech giant for many years. “You’ve got a lot of talented people in universities and working in an open kind of a way, because academics are quite a principled, if not even a cussed bunch.”<\/p>\n<p>Blake says he considered doing a startup himself, back in 1999, but decided that working for Microsoft, where he could focus on invention and not have to worry about the business side of things, was a better fit. Prior to joining Microsoft his research work included building robots with vision systems that could react in real time, a novelty in the mid-1990s.<\/p>\n<p>“People want to do it all sorts of different ways. Some people want to go to a big company. Some people want to do a startup. Some people want to stay in the university because they love the productivity of having a group of students and postdocs,” he says. “It’s very exciting. And the freedom of working in universities is still a very big draw for people. So I don’t think that part of the ecosystem is going away.”
<\/p>\n<p>Yet he concedes the competition for AI talent is now at fever pitch, pointing, for example, to startup Geometric Intelligence, founded by a group of academics and acquired by Uber at the end of 2016 after operating for only about a year.<\/p>\n<p>“I think it was quite a big undisclosed sum,” he says of the acquisition price for the startup. “It just goes to show how hot this area of invention is.”<\/p>\n<p>“People get together, they have some great ideas. In that case, instead of writing a research paper about it, they decided to turn it into intellectual property. I guess they must have filed patents and so on. And then Uber looks at that and thinks, oh yes, we really need a bit of that, and Geometric Intelligence has now become the AI department of Uber.”<\/p>\n<\/p>\n<p>Blake will not volunteer a view on whether he thinks it’s a good thing for society that AI academic excellence is being so rapidly tractor-beamed into vast, commercial motherships. But he does have an anecdote that illustrates how conflicted the field has become as a result of a handful of tech giants competing so fiercely to dominate developments.<\/p>\n<p>“I was recently trying to find someone to come and consult for a big company. The big company wants to know about AI, and it wants to find a consultant,” he tells TechCrunch. “They wanted somebody quite senior and I wanted to find somebody who didn’t have too much of a competing company allegiance. And, you know what, there really wasn’t anybody. I just could not find anybody who didn’t have some involvement.”<\/p>\n<p>“They might still be a professor in a university but they’re consulting for this company or they’re part time at that company. Everybody is involved. It is very exciting but the competition is ferocious.”
<\/p>\n<p>“The government at the moment is talking a lot about AI in the context of the industrial strategy, and understanding that it’s a key technology for the productivity of the nation. So a very important part of that is education and training. How are we going to create more excellence?” he adds.<\/p>\n<p>The idea for the Turing Institute, which was set up in 2015 by five UK universities, is to play a role here, says Blake, by training PhD students, and via its clutch of research fellows who, the hope is, will help form the next generation of academics powering new AI breakthroughs.<\/p>\n<p>“The big breakthrough over the last ten years has been deep learning, but I think we’ve done that now,” he argues. “People are of course writing more papers than ever about it. But it’s entering a more mature phase, at least in terms of using deep learning: we can absolutely do it. But in terms of understanding deep learning, the fundamental mathematics of it, that’s another matter.”<\/p>\n<p>“But the hunger, the appetite of companies and universities for trained talent is absolutely prodigious at the moment, and I am sure we are going to need to do more,” he adds, on education and expertise.<\/p>\n<p>Returning to the question of tech giants dominating AI research, he points out that many of these companies are making public toolkits available, as Google, Amazon and Microsoft have done, to help drive activity across a wider AI ecosystem.<\/p>\n<p>Meanwhile academic open source efforts are also making important contributions to the ecosystem, such as Berkeley’s deep learning framework, Caffe. Blake’s view, therefore, is that a few talented individuals can still make waves, despite not wielding the vast resources of a Google, an Uber or a Facebook.
<\/p>\n<p>“Often it’s just one or two people. When you get just a couple of people doing the right thing it’s very agile,” he says. “Some of the biggest advances in computer science have come that way. Not necessarily the work of a group of a hundred people. But just a couple of people doing the right thing. We’ve seen plenty of that.”<\/p>\n<p>Running a big team is complex, he adds. “Sometimes, when you really want to cut through and make a breakthrough, it comes from a smaller group of people.”<\/p>\n<p>That said, he agrees that access to data, or, more specifically, “the data that relates to your problem,” as he qualifies it, is vital for building AI algorithms. “It’s certainly true that the big advance over the last ten years has depended on the availability of data, often at Internet scale,” he says. “So we’ve learnt, or we’ve understood, how to build algorithms that learn with big data.”<\/p>\n<p>And tech giants are naturally positioned to feed off of their own user-generated data engines, giving them a built-in reservoir for training and honing AI models, arguably locking in an advantage over smaller players that don’t have, for example in Facebook’s case, billions of users generating data-sets on a daily basis.<\/p>\n<p>Although even Google, via its AI division DeepMind, has felt the need to acquire certain high-value data-sets by forging partnerships with third-party institutions, such as the UK’s National Health Service, where DeepMind Health has, since late 2015, been accessing millions of people’s medical data, which the publicly funded NHS is custodian of, in an attempt to build AIs that have diagnostic healthcare benefits.<\/p>\n<p>Even then, though, the vast resources and high public profile of Google appear to have given the company a leg up.
A smaller entity approaching the NHS with a request for access to valuable (and highly sensitive) public-sector healthcare data might well have been rebuffed. And it would certainly have been less likely to have been actively invited in, as DeepMind says it was. So when it’s Google-DeepMind offering free help to co-design a healthcare app, or their processing resources and expertise, in exchange for access to data, well, it’s demonstrably a different story.<\/p>\n<p>Blake declines to answer when asked whether he thinks DeepMind should have released the names of the people on its AI ethics board. (Next question!) Nor will he confirm (nor deny) whether he is one of the people sitting on this anonymous board. (For more of his thoughts on AI and ethics, see the additional portions from the interview at the end of this post.)<\/p>\n<p>But he does not immediately subscribe to the view that AI innovations must necessarily come at the cost of individual privacy, as some have suggested by, for example, arguing that Apple is fatally disadvantaged in the AI race because it will not data-mine and profile its users in the no-holds-barred fashion that a Google or a Facebook does. (Apple has instead opted to perform local data processing and apply obfuscation techniques, such as differential privacy, to offer its users AI smarts that don’t require they hand over all their information.)<\/p>\n<p>Nor does Blake believe AI’s black boxes are fundamentally unauditable, a key point given that algorithmic accountability will surely be necessary to ensure this very powerful technology’s societal impacts can be properly understood and regulated, where necessary, to avoid bias being baked in. Rather, he says research in the area of AI ethics is still in a relatively early phase.
<\/p>\n<p>“There’s been an absolute surge of algorithms, experimental algorithms, and papers about algorithms, just in the last year or two, about understanding how you build ethical principles like transparency and fairness and respect for privacy into machine learning algorithms, and the jury is not yet out. I think people have been thinking about it for a relatively short period of time because it’s arisen in the general consciousness that this is going to be a key thing. And so the work is ongoing. But there’s a great sense of urgency about it because people realize that it’s absolutely critical. So we’ll have to see how that evolves.”<\/p>\n<p>On the Apple point specifically he responds with a “no, I don’t think so” to the idea that AI innovation and privacy might be mutually exclusive.<\/p>\n<p>“There will be good technological solutions,” he continues. “We’ve just got to work hard on it and think hard about it. And I’m confident that the discipline of AI, looked at broadly, so that’s machine learning plus other areas of computer science like differential privacy, you can see it’s hot and people are really working hard on this. We don’t have all the answers yet but I’m pretty confident we’re going to get good answers.”<\/p>\n<p>Of course not all data inputs are equal in another way when it comes to AI. And Blake says his academic interest is especially piqued by the notion of building machine learning systems that don’t need lots of help during the learning process in order to be able to extract useful understandings from data, but rather learn unsupervised.<\/p>\n<p>“One of the things that fascinates me is that humans learn without big data. At least the story’s not so simple,” he says, pointing out that toddlers learn what’s going on in the world around them without constantly being supplied with the names of the things they are seeing.
<\/p>\n<p>A child might be told a cup is a cup a few times, but not that every cup they ever encounter is a cup, he notes. And if machines could learn from raw data in a similarly lean way it would clearly be transformative for the field of AI. Blake sees cracking unsupervised learning as the next big challenge for AI researchers to grapple with.<\/p>\n<p>“We now have to distinguish between two kinds of data: there’s raw data and labelled data. [Labelled] data comes at a high price. Whereas the unlabelled data is just your experience streaming in through your eyes as you run through the world, and somehow you still benefit from that. So there’s this very interesting kind of partnership between the labelled data, which is not in great supply, and it’s very expensive to get, and the unlabelled data, which is copious and streaming in all the time.”<\/p>\n<p>“And so this is something which I think is going to be the big challenge for AI and machine learning in the next decade: how do we make the best use of a very limited supply of expensively labelled data?”<\/p>\n<p>“I think what is going to be one of the major sources of excitement over the next five to ten years is: what are the most powerful methods for accessing unlabelled data and benefiting from that, and understanding that labelled data is in very short supply, and privileging the labelled data. How are we going to do that? How are we going to get the algorithms that flourish in that environment?”<\/p>\n<p>Autonomous cars would be one promising AI-powered technology that obviously stands to benefit from a breakthrough on this front, given that human-driven cars are already being equipped with cameras, and the resulting data streams from cars being driven could be used to train vehicles to self-drive, if only the machines could learn from the unlabelled data.
<\/p>\n<p>FiveAI’s website suggests this goal is also on its mind, with the startup saying it’s using “stronger AI” to solve the challenge of autonomous vehicles safely navigating complex urban environments without needing highly accurate, dense 3D prior maps and localization. A challenge billed as the top level in autonomy: 5.<\/p>\n<\/p>\n<p>“I’m personally fascinated with how different the way humans learn is from the way, at the moment, our machines are learning,” adds Blake. “Humans are not learning all the time from big data. They’re able to learn from amazingly small amounts of data.”<\/p>\n<p>He cites research by MIT’s Josh Tenenbaum showing how humans are able to learn new objects after just one or two exposures. “What are we doing?” he wonders. “This is a fascinating challenge. And we really, at the moment, don’t know the answer. I think there’s going to be a big race on, from various research groups around the world, to see and to understand how this is being done.”<\/p>\n<p>He speculates that the answer to pushing forward might lie in looking back into the history of AI, at methods such as reasoning with probabilities or logic, previously applied unsuccessfully, given they did not result in the breakthrough represented by deep learning, but which are perhaps worth revisiting to try to write the next chapter.<\/p>\n<p>“The earlier pioneers tried to do AI using logic and it absolutely didn’t work for a whole lot of reasons,” he says. “But one property that logic seems to have, and perhaps we can somehow learn from this, is this idea of being incredibly efficient, incredibly respectful if you like, of how costly the data is to acquire. And so making the very most of even one piece of data.”
<\/p>\n<p>“One of the properties of learning with logic is that the learning can happen very, very quickly, in the sense of only needing one or two examples.”<\/p>\n<p>It’s a nice idea that the hyper-fashionable research field of AI, as it now is, where so many futuristic bets are being placed, might need to look backwards, to earlier apparent dead-ends, to achieve its next big breakthrough.<\/p>\n<p>Though, given Blake describes the success of deep networks as “a surprise to pretty much the whole field” (i.e. that the technology has worked as well as it has), it’s clear that making predictions about the forward march of AI is a tricky, possibly counterintuitive business.<\/p>\n<p>As our interview winds up I hazard one final thought, asking whether, after more than three decades of research in artificial intelligence, Blake has come up with his own definition of human intelligence.<\/p>\n<p>“Oh! That’s much too hard a question for the final question of the interview,” he says, punctuating this abrupt conclusion with a laugh.<\/p>\n<p>On why deep learning is such a black box: I suppose it’s sort of like an empirical finding. If you think about physics, the way experimental physics goes and theoretical physics: very often, some discovery will be made in experimental physics and that sort of sets off the theoretical physics for years trying to understand what was actually happening. But the way you first got there was with this experimental observation. Or maybe something surprising. And I think of deep networks as something like that. It’s a surprise to pretty much the whole field that it has worked as well as it has. So that’s the experimental finding. And the actual object itself, if you like, is quite complex.
Because you’ve got all of these layers [processing the input] and that happens maybe ten times… And by the time you’ve put the data through all of those transformations it’s quite hard to say what the composite effect is. And getting a mathematical handle on all of that sequence of operations. A bit like cooking, I suppose.<\/p>\n<p>On designing dedicated hardware for processing AI: Intel builds the whole processor and also the equipment you need for an entire data center, so that’s the individual processors and the electronic boards that they sit on and all the wiring that connects these processors up inside the data center. The wiring actually is more than just a bit of wire; they call it an interconnect, and it’s a bit of smart electronics itself. So Intel has got its hands on the whole system… At the Turing Institute we have a collaboration with Intel, and with them we are asking exactly that question: if you really have got freedom to design the entire contents of the data center, how can you build the data center which is best for data science? That really means, to a large extent, best for machine learning… The supporting hardware for machine learning is definitely going to be a key thing.<\/p>\n<p>On the challenges ahead for autonomous vehicles: One of the big challenges in autonomous vehicles is that it’s built on machine learning technologies which are, shall we say, quite reliable. If you read machine learning papers, an individual technology will often be right 99% of the time… That’s pretty spectacular for most machine learning technologies… But 99% reliability is not going to be nearly enough for a safety-critical technology like autonomous cars.
So I think one of the very interesting things is how you combine technologies to get something which, in the aggregate, at the level of a system rather than the level of an individual algorithm, is delivering the kind of very high reliability that of course we’re going to demand from our autonomous transport. Safety of course is a key consideration. All of the engineering we do and the research we do is going to be built around the principle of safety: rather than safety as an afterthought or a bolt-on, it’s got to be in there right at the beginning.<\/p>\n<p>On the need to bake ethics into AI engineering: This is something the whole field has become very well tuned to in the last couple of years, and there are numerous studies going on… In the Turing Institute we’ve got a substantial ethics program where on the one hand we’ve got people from disciplines like philosophy and the law, thinking about how ethics of algorithms would work in practice, then we’ve also got scientists who are reading those messages and asking themselves how do we have to design the algorithms differently if we want them to embody ethical principles. So I think for autonomous driving one of the key ethical principles is likely to be transparency, so when something goes wrong you want to know why it went wrong. And that’s not only for accountability purposes. Even for practical engineering purposes, if you’re designing an engineering system and it doesn’t perform up to scratch, you need to understand which of the many components is not pulling its weight, where we need to focus the attention. So it’s good from the engineering point of view, and it’s good from the public accountability and understanding point of view. And of course we want the public to feel, as far as possible, comfortable with these technologies. Public trust is going to be a key element.
We’ve had examples in the past of technologies that scientists have thought about that didn’t get public acceptability immediately. GM crops was one; the communication with the public wasn’t sufficient in the early days to get their confidence, and so we want to learn from those kinds of things. I think a lot of people are paying attention to ethics. It’s going to be important.<\/p>\n<p>Original post: <\/p>\n<p><a target=\"_blank\" href=\"https:\/\/techcrunch.com\/2017\/06\/17\/a-discussion-about-ais-conflicts-and-challenges\/\" title=\"A discussion about AI's conflicts and challenges - TechCrunch\">A discussion about AI's conflicts and challenges - TechCrunch<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Thirty-five years ago, having a PhD in computer vision was considered the height of unfashion, as artificial intelligence languished at the bottom of the trough of disillusionment. Back then it could take a day for a computer vision algorithm to process a single image. How times change <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/a-discussion-about-ais-conflicts-and-challenges-techcrunch.php\">Continue reading <span
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-220499","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/220499"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=220499"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/220499\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=220499"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=220499"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=220499"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}