Daily Archives: July 21, 2017

Keeper review: A strong focus on security – PCWorld

Posted: July 21, 2017 at 12:17 pm

This password manager serves up peace of mind.

By Michael Ansaldo

Freelance contributor, PCWorld | Jul 21, 2017 3:00 AM PT

Keeper is a no-nonsense password manager that puts the security of your login credentials above all else. However, its lack of automated features may limit its appeal for some.

When you sign up for Keeper, you're prompted to create a master password and select a security question. The latter will be used, along with a verification code and, if enabled, two-factor authentication, to access your data if you forget your master password.

Next, Keeper walks you through a four-step quick-start checklist: creating your first record, installing the browser extension, uploading your first file, and enabling two-factor authentication. As you complete each step, the checkmark next to the relevant item turns green.

Keeper's interface isn't fancy but it's easy to get around.

Keeper doesn't automatically capture your login credentials when you sign into a website for the first time. Rather, it places gold lock icons in the username and password fields; you have to click one of these to create a new record. Keeper will prefill the username field with your email address and the password field with a generated 12-character password, as if you're creating a new account rather than just a new Keeper record. You'll have to delete these and enter the correct credentials. When you enter your password, Keeper will rate it with a bar that colors red, yellow, or green depending on how strong it is.

When you revisit a site, you again have to click the lock icons to access your credentials. When the record for that site opens, you must click an arrow icon next to your username and one next to your password to fill each field separately. If you're used to password managers that autofill these fields and log you in automatically, these extra steps can feel laborious, even if they are there for enhanced security.

Keeper's password generator surfaces in the password field as a dice icon any time you're creating a new record, which you can do in the KeeperFill browser plugin or right in your vault. You can generate anywhere from eight- to 51-character passwords using a combination of upper- and lower-case letters, numerals, and symbols.

Keeper's password generator can create complex passwords of up to 51 characters.
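For readers curious what those options amount to in practice, here is a minimal Python sketch of the same idea, drawing a password of a chosen length from upper- and lower-case letters, numerals, and symbols. It is an illustration only, not Keeper's code; the function name is made up, and the 8-51 clamp simply mirrors the limits described above.

```python
import secrets
import string

def generate_password(length=12):
    """Illustrative generator: random password from letters, digits, and symbols.

    The length is clamped to the 8-51 range the review describes; this is a
    sketch of the concept, not Keeper's implementation.
    """
    length = max(8, min(length, 51))
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from a cryptographically secure source, unlike random.choice.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password(16))
```

Using the standard-library `secrets` module rather than `random` keeps the character choices cryptographically secure, which is the property a password generator actually needs.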

Both Keeper's web-based vault and the desktop app display your passwords in a list. Unlike LastPass and some other managers, Keeper doesn't let you assign logins to folders as it captures them, but you can do it here by editing the record and assigning it to a folder. You can also audit your passwords: Keeper gives you a strength percentage rating and lets you know if the password has been used for more than one account. Credit cards and personal data can also be stored in your vault and autofilled into web forms when making payments.

Keeper supports password sharing, but, as an added security layer, only with other Keeper users. If you share with a non-Keeper user, they'll get an email with a link to set up an account. It also recently added emergency access, which allows you to grant access to up to five contacts, who can log in in the event you can't for whatever reason.

Keeper is free to use on a single device. To sync across multiple devices, you'll need an Individual plan at $30 a year. Family plans cover up to five users for $60 a year.

Despite its bare-bones interface, Keeper offers robust password protection. However, it lacks the automation prized in most password managers, so it's unlikely to displace top tools like LastPass and Dashlane. But if you're merely looking for strong security and don't mind being more hands-on with your password manager, Keeper won't disappoint.


Michael Ansaldo is a veteran consumer and small-business technology journalist. He contributes regularly to TechHive and writes the Max Productivity column for PCWorld.

Read more here:

Keeper review: A strong focus on security - PCWorld

Posted in Mind Uploading | Comments Off on Keeper review: A strong focus on security – PCWorld

Recent Product Launches Expand Stryker’s Orthopedics Business – Market Realist

Posted: at 12:17 pm

Stryker's Recent Developments Strengthen Its Market Position (Part 4 of 7)

In 1Q17, Stryker (SYK) reported YoY (year-over-year) growth of ~18.5%. One of the contributing factors to its growth is the company's innovative product portfolio. Stryker has been expanding its product portfolio through strategic acquisitions and internal research and development. Stryker invests ~6.5% of its total revenues in research and development.

Orthopedics is the second-largest contributor to the company's sales; the MedSurg segment is the largest. The foot and ankle business is one of the strongest growth areas in Stryker's Orthopedics segment, registering double-digit sales growth for the last few years. Stryker has positioned itself as the leader in the foot and ankle device business in the US. Zimmer Biomet Holdings (ZBH), Smith & Nephew (SNN), and Integra LifeSciences (IART) are some of the other leading players in the foot and ankle business.

Investors seeking exposure to Stryker can invest in the Vanguard Dividend Appreciation ETF (VIG), which holds ~1.5% of its total holdings in Stryker.

On June 28, 2017, Stryker announced the launch of its Hoffmann LRF Hexapod application and hardware. The product features an advanced measurement tool that provides solutions for correcting deformities and reconstructing limbs by uploading the patient's X-rays into the software.

The product is seen as the first of its kind. It offers top actuating struts instead of side struts, which enables easier reach and management of hardware.

According to Tom Popeck, vice president and general manager of Stryker's foot and ankle business, "Our team is excited to showcase the benefits of the Hoffmann LRF platform and its intuitive software at AOFAS. We believe this modern deformity correction platform helps streamline the surgical planning process and demonstrates our dedication to moving technology forward with our surgeons' and patients' best interests in mind."

In the next part of this series, we'll take a look at the lawsuit that the company recently won against Zimmer Biomet Holdings.

Read the rest here:

Recent Product Launches Expand Stryker's Orthopedics Business - Market Realist

Posted in Mind Uploading | Comments Off on Recent Product Launches Expand Stryker’s Orthopedics Business – Market Realist

8 Industries Being Disrupted by Virtual Reality – Entrepreneur

Posted: at 12:16 pm


For the past several years we've been told that the age of virtual reality is upon us. Tech companies have introduced new hardware and updated systems to much fanfare, but so far have not been able to turn widespread interest into practice.

Virtual reality, and now augmented reality, are often seen as novelties: cool to play with in a store or at that one tech-obsessed friend's house, but most of us are not putting on clunky headsets or Google's Cardboard system and walking out the door.

However, it's finally looking like the VR and AR industries are on the cusp of going mainstream, as industries start to figure out how to implement the transformative technology in the user experience. These eight industries are pioneering ways to integrate VR and AR tech and offer customers more opportunities to explore products and services.

Looking for a new home or apartment can feel like taking on a second job. Between endlessly checking listing updates and taking time to visit every open house on the market, buying (or renting) a new place can be a daunting and tiresome task.

But what if you could experience all that a house has to offer without leaving your home? Real estate companies are toying with VR solutions that offer prospective buyers the chance to walk through a property and survey every room, hallway, nook and cranny without actually leaving their own homes.

Related: Real Estate, Movies, Retail: VR Is Exploding. The Opportunities for Entrepreneurs Are Huge.

Going to zoos gives people the opportunity to experience wildlife up close, albeit behind a sturdy partition. However, zoo trips often spark more questions than they answer. Most zoo experiences consist of visitors wandering from exhibit to exhibit and reading about the species on small placards and in outdated pamphlets.

Guru is an app that is seeking to redefine the zoo experience by bringing the animals and their habitats to (virtual) life. The app allows users to choose customized audio experiences that share facts about specific animals, as well as behind-the-scenes videos and augmented-reality portals into the actual habitats and lifestyles of animals in the wild.

Related: 12 Amazing Uses of Virtual Reality

Every millennial woman remembers the first time she saw Cher Horowitz's closet in Clueless; it was a magical moment. The idea of being able to test clothes and match outfits without actually having to try them on resonated with an entire generation.

Now, over 20 years since Clueless sparked an obsession, Cher's closet, or at least the idea behind it, has become reality. Gap recently unveiled a VR solution that enables customers to digitally try on pieces within its collection. Other retailers are bringing VR headsets into stores to allow visitors to feel as though they're sitting in the front row at the designer's latest fashion-week presentation.

Related: Virtual Reality Is About to Change Your Business

The internet has made the world a smaller place. Thanks to programs like Google Earth, people can walk pathways in Santorini one minute and find themselves at a busy Sydney intersection the next. More travel organizations are tapping into consumers love for virtual exploration.

Expedia recently announced a new VR-based initiative that will allow travelers to step inside hotel-room listings before making their destination decisions.

Related:Why This Restaurant Chain Has Started Using VR to Train Employees

The world of medicine is exploring several avenues and uses for VR to help doctors and patients. Some doctors are now wearing VR headsets in the operating room to give medical students a more in-depth look at the surgical procedures.

Additionally, hospitals are experimenting with VR as a means of making patients feel more comfortable. For example, VisitU, an emerging Dutch company, has created virtual glasses to give children at hospitals the chance to experience life at home or in the classroom, even though they are bedridden.

Related:VRcade: Be the First to Open One in Your Town

Since Hollywood's inception, film studios and production companies have been searching for new ways to make their projects more engaging and lifelike. Now, with virtual technology, film studios have the opportunity to transform the viewing experience from passive to participatory.

Companies like Within are gaining the attention and support of major studios because their technology creates fully immersive viewer experiences that, until recently, Hollywood could only dream of.

Related:Google: 180-Degree Video Is the Future of VR

Many people have a hard time self-motivating when it comes to fitness. It can also be difficult to carve out the time to travel to a gym or fitness studio to take a class. Thanks to emerging VR programs, those wanting to get in shape no longer have to sacrifice their time.

Startups like Icaros are creating fitness solutions that take the boredom out of getting fit. These systems allow users to feel as though they're actually climbing a rock wall or boxing an opponent, when in fact they haven't left their living rooms.

Historically, the automotive industry has needed a physical shopping experience to stay afloat. Before people are willing to make huge investments in new vehicles, they usually want to test the car out for themselves. For this reason, the automotive industry has struggled to find ways to connect with younger generations. Not only are millennials and Gen Zers supporters of the ride-sharing economy; they're also digitally driven shoppers.

Now, automobile makers like Ford are introducing VR experiences intended to give shoppers a real sense of a car's interior and create a strong enough virtual experience to encourage them to visit a dealership and test drive the real thing.

Deep Patel is the author of A Paperboy's Fable: The 11 Principles of Success. The book was dubbed the #1 best business book in 2016 by Success Magazine and named the best book for entrepreneurs in 2016 by Entrepreneur Magazine.

See the original post:

8 Industries Being Disrupted by Virtual Reality - Entrepreneur

Posted in Virtual Reality | Comments Off on 8 Industries Being Disrupted by Virtual Reality – Entrepreneur

Grindr, virtual reality and vlogging: new ways to talk about sexual health – The Guardian

Posted: at 12:16 pm

Grindr is being used in New York to encourage people to access sexual health prevention services. Photograph: Leon Neal/Getty Images

Almost half the world's population is online and billions of young people use social media. So why doesn't more sex education happen across these channels? The first Global Advisory Board for Sexual Health and Wellbeing brings together a group of individuals who are using innovative ways to reach more people with information about sex and relationships. Here are some of the projects they've been working on:

In 2015, Antón Castellanos Usigli, a male nurse working in New York, started working in an HIV/sexually transmitted infections (STIs) prevention clinic at a hospital in Brooklyn. The goal was to increase the number of at-risk patients who came into the clinic for sexual health prevention services. Initially, the clinic tried outreach in clubs and bars in Brooklyn, but not a single client came in through this approach.

Usigli thought about using Grindr, a dating app for gay men, to raise awareness of HIV. He set up a profile as a male nurse to tell at-risk patients about the services offered at the clinic. He then developed a script for healthcare professionals to use.

The success rate has been astonishingly high. In the first month of using the app in this way, more than 20 new at-risk patients came to the clinic for a variety of preventative services, such as sexual health counselling, HIV/STI testing and pre-exposure prophylaxis (PrEP). In little over a year, more than 100 new at-risk patients came into the clinic. Some of those tested positive for HIV and Usigli was able to link them to medical care. Others tested positive for STIs and Usigli was able to treat them.

In India, there are high levels of domestic violence, mostly against women. Both women and men refuse to report such crimes to the police. There is also reluctance in society to acknowledge it as a problem.

In June 2017, Love Matters, a website providing information on relationships, sex and love, produced India's first virtual reality immersive experience on physical, sexual or psychological harm by a current or former partner or spouse. The film, Kya Yahi Pyar Hai? (Is this love?), uses VR to narrate a powerful story and connect with young people.

The film was shown in pop-up VR booths in pubs, restaurants and metro stations in Delhi for 10 days. The results have been overwhelming. In Delhi central station alone, more than 500 people per day went out of their way to sit in the booths and watch the video. Now, people from across the world are looking to screen the film. It will be shown across different locations in India through partnerships with colleges, universities, restaurants and film clubs.

After graduating from Tbilisi State Medical University with a medical degree, Gvantsa Khizanishvili started working in Georgia with Planned Parenthood, a not-for-profit organisation that provides sexual healthcare in the US and globally.

Through her work, she found that there were no state-supported sex education programmes in many eastern European and central Asian countries, including Georgia. There was also no information targeted at young people, and health service providers were not equipped with the skills to meet young people's needs for information, counselling and confidentiality of services.

To address this, Khizanishvili has developed IntiMate, the first comprehensive youth sexual and reproductive health and rights app in Georgia. The aim is to provide comprehensive sexual health education, raise awareness about the different methods of contraception and sexual health and wellbeing among young people. The app launched in July 2017 and will use social and digital media to provide sex education to young people in Georgia.

Two thousand women aged 15-24 are infected with HIV every week in South Africa; however, most HIV prevention campaigns are aimed at men.

During her senior years at medical school, spent in rural clinics, Dr Tlaleng Mofokeng, a GP with an interest in sexual health and relationships, realised that young people did not have access to comprehensive information on sexuality.

She uses her significant social media following to deliver sex education. She also developed a 12-part series called Sex State of the Nation on SoundCloud. The series launched in 2016 and reached a wide audience: the vlog on vaginal health has been viewed more than 5,000 times and the one on safe oral sex more than 4,500 times. Her weekly column in the Sunday Times ZA continues to be in the top five most read articles online with a reach of more than 300,000 people.

Sofia Gruskin is the chairperson of the global advisory board for sexual health and wellbeing. She is a professor at the University of Southern California.


Continued here:

Grindr, virtual reality and vlogging: new ways to talk about sexual health - The Guardian

Posted in Virtual Reality | Comments Off on Grindr, virtual reality and vlogging: new ways to talk about sexual health – The Guardian

Christopher Nolan’s ‘Dunkirk’: ‘Virtual Reality Without the Headset’ – Wall Street Journal (blog) (subscription)

Posted: at 12:16 pm


Watch the film trailer for "Dunkirk," starring Tom Hardy, Cillian Murphy, and Mark Rylance. Photo: Warner Bros. Pictures


Originally posted here:

Christopher Nolan's 'Dunkirk': 'Virtual Reality Without the Headset' - Wall Street Journal (blog) (subscription)

Posted in Virtual Reality | Comments Off on Christopher Nolan’s ‘Dunkirk’: ‘Virtual Reality Without the Headset’ – Wall Street Journal (blog) (subscription)

Quad-City Times to present Bix 7 in virtual reality – Quad City Times

Posted: at 12:16 pm

For the record, this also is new to us.

That's not a setup for lower expectations. It's just that plenty of us in news don't have technical brains, but we're getting our geek on for future's sake.

Let me explain: Early this year, our executive editor, Autumn Phillips, got wind of a partnership between Eastern Iowa Community Colleges and a California-based company called EON Reality. Dubbed the Innovation Academy, the digital-tech experts at EON came to Davenport to teach college students how to develop content and tools for the up-and-coming world of virtual reality and augmented reality.

Phillips contacted EICC Chancellor Don Doucette to see if there was room in the partnership for us.

"I've been interested in VR (virtual reality) for a long time," she said. "I wondered how to use this local partnership as a learning experience for the newsroom.

"Don enthusiastically made things happen. Lee Enterprises put up the R&D money we needed for the partnership."

Phillips then asked for newsroom volunteers: people who wanted to learn something about virtual reality. Why not?

I had one experience with virtual reality, and I loved it. About a year ago, the Baseball Hall of Fame ("We Are Baseball") trailers showed up in the parking lot at Modern Woodmen Park, and I went down and plopped into a swivel chair and strapped the virtual-reality gear to my head. It was a cool experience, even for someone who can take or leave baseball.

The virtual technology put you right there in the dugout, on the field, behind the plate.

The only downer was that the turning in my seat, combined with the subtly unstable camera shots, made me feel woozy. I have since learned that 360-degree viewing makes many people nauseous.

But technology has come a long way.

"If you look at the iPhone 5 and the iPhone 7S, you can see there is much better stability," said Aubrey Jimenez, training coordinator at the Innovation Academy. "Smartphone makers are infinitely aware of what's coming for this technology."

So, what's in it for Quad-City Times readers?

In March, at the first meeting of our little volunteer group -- publisher Deb Anselm, photographer Andy Abeyta, assistant city editor Liz Boardman, Phillips and myself -- we came up with a plan. We asked ourselves: What story could we tell that would best benefit from this 360-degree technology that virtually brings you all along with us?

My mind instantly went to the starting line of the Bix. In the moments leading up to the firing of the starter pistol, the air on Brady Street feels like the air during a lightning storm. As thousands of voices turn into a white-noise hum, goosebumps pop onto your skin in fleshy anticipation.

We decided the Quad-City Times Bix 7 would be the perfect launching pad for our first virtual-reality project. But we wanted to do more than shoot immersive images; we wanted to tell a story. So, we agreed we would find a runner and tell the runner's story.

A standout sprinter at Sherrard High School, Nolte went to Western Illinois University on a track scholarship. Now 31, Nolte is married and the mother of two young girls, working full-time. She started training months ago to run the entirety of the Bix for the first time.

She regards running as a treat, a way to do something for herself. For those of us who regard running as something to do in an emergency, Nolte's drive is impressive, especially since distance isn't her thing.

The Innovation Academy students followed Nolte on her last Bix at 6 training run.

"We're going to be violating your personal space," Jimenez warned as several students aimed their cell phones and a video camera at Nolte. "We need some close shots."

On Bix 7 race day, we'll have VR professionals from EON Reality Sports filming the event. Nolte will remain in the spotlight from the starting line to the finish line of the Bix, and the thousands of runners and Bix spectators will serve as extras in the 360-degree virtual story that follows.

Our efforts will culminate in a virtual-reality app, called QCT VR. Once the VR experience is ready, we'll provide a link in the weeks following Bix, so everyone can download the free app and follow Nolte's story while reliving the 2017 race. All you need is a smartphone and a set of Google Cardboard glasses. If you want to make sure you don't miss it, sign up for our Bix 7 e-newsletter at qctimes.com/email/ and we'll send a link to your email.

If you want to learn more about virtual reality or this project, stop by the Quad-City Times booth at the Bix 7 packet pickup on Thursday evening from 5-9 p.m. or Friday from 9 a.m. to 9 p.m. at RiverCenter South Hall, 136 East Third Street, Davenport. Or visit the Quad-City Times tent in the newspaper parking lot during the race after-party on Saturday.

See the article here:

Quad-City Times to present Bix 7 in virtual reality - Quad City Times

Posted in Virtual Reality | Comments Off on Quad-City Times to present Bix 7 in virtual reality – Quad City Times

How AI Is Already Changing Business – Harvard Business Review

Posted: at 12:16 pm

Erik Brynjolfsson, MIT Sloan School professor, explains how rapid advances in machine learning are presenting new opportunities for businesses. He breaks down how the technology works and what it can and can't do (yet). He also discusses the potential impact of AI on the economy and how workforces will interact with it in the future, and suggests managers start experimenting now. Brynjolfsson is the co-author, with Andrew McAfee, of the HBR Big Idea article "The Business of Artificial Intelligence." They're also the co-authors of the new book Machine, Platform, Crowd: Harnessing Our Digital Future.

Download this podcast

SARAH GREEN CARMICHAEL: Welcome to the HBR IdeaCast from Harvard Business Review. I'm Sarah Green Carmichael.

It's a pretty sad photo when you look at it. A robot, just over a meter tall and shaped kind of like a pudgy rocket ship, lying on its side in a shallow pool in the courtyard of a Washington, D.C. office building. Workers, human ones, stand around, trying to figure out how to rescue it.

The security robot had just been on the job for a few days when the mishap occurred. One entrepreneur who works in the office complex wrote: "We were promised flying cars. Instead we got suicidal robots."

For many people online, the snapshot symbolized something about the autonomous future that awaits. Robots are coming, and computers can do all kinds of new work for us. Cars can drive themselves. For some people this is exciting, but there is also clearly fear out there about dystopia. Tesla CEO Elon Musk calls artificial intelligence an existential threat.

But our guest on the show today is cautiously optimistic. He's been watching how businesses are using artificial intelligence and how advances in machine learning will change how we work. Erik Brynjolfsson teaches at MIT Sloan School and runs the MIT Initiative on the Digital Economy. And he's the co-author, with Andrew McAfee, of the new HBR article "The Business of Artificial Intelligence."

Erik, thanks for talking with the HBR IdeaCast.

ERIK BRYNJOLFSSON: It's a pleasure.

SARAH GREEN CARMICHAEL: Why are you cautiously optimistic about the future of AI?

ERIK BRYNJOLFSSON: Well, actually that story you told about the robot that had trouble was a great lead-in, because in many ways it epitomizes some of the strengths and weaknesses of robots today. Machines are quite powerful and in many ways they're superhuman. You know, just as a calculator can do arithmetic a lot better than me, we're having artificial intelligence that's able to do all sorts of functions, in terms of recognizing different kinds of cancer images, or now getting superhuman even in speech recognition in some applications. But they're also quite narrow. They don't have general intelligence the way people do. And that's why partnerships of humans and machines are often going to be the most successful in business.

SARAH GREEN CARMICHAEL: You know it's funny, because when you talk about image recognition I think about a fantastic image in your article that is called Puppy or Muffin. I was amazed at how much puppies and muffins look alike, and sort of even more amazed that robots can tell them apart.

ERIK BRYNJOLFSSON: Yeah, it's a funny image. It always gets a laugh, and I encourage people to go take a look at it. There are lots of things that humans are pretty good at in distinguishing different kinds of images, and for a long time machines were nowhere near as good. As recently as seven or eight years ago, machines made about a 30 percent error rate on ImageNet, this big database that Fei-Fei Li created of over 10 million images. Now machines are down to less than 5%, 3-4% depending on how it's set up. Humans still have about a 5% error rate. Sometimes they get those puppies and muffins wrong. Be careful what you reach for next time you're at that breakfast bar. But that's a good example.

The reason it's improved so much in the past few years is because of this new approach using deep neural nets that's gotten much more powerful for image recognition and really all sorts of different applications. I think that's a big reason why there's so much excitement these days.

SARAH GREEN CARMICHAEL: Yeah, it's one of those things where we all kind of like to make fun of machines that get it wrong, but also it's sort of terrifying when they get it right.

ERIK BRYNJOLFSSON: Yeah. Machines are not going to be perfect drivers, they're not going to be perfect at making credit decisions, they're not going to be perfect at distinguishing, you know, muffins and puppies. And so, we have to make sure we build systems that are robust to those imperfections. But the point we make in the article, Andy and I point out, is that humans aren't perfect at any of those tasks either. And so, the benchmark for most entrepreneurs and managers is: who's going to be better at solving this particular task, or better yet, can we create a system that combines the strengths of both humans and machines and does something better than either of them would do individually.

SARAH GREEN CARMICHAEL: With photo recognition and facial recognition, I know that Facebook facial recognition software can't tell the difference between me wearing makeup and me not wearing makeup, which is also sort of funny and horrifying, right? But at the same time, you know, I think a lot of us struggle to recognize people out of context; we see someone at the grocery store and we think, you know, I know that person from somewhere. So, it's something that humans don't always get right either.

ERIK BRYNJOLFSSON: Oh yeah. I'm the world's worst. You know, at conferences I would love it if there was a little machine whispering in my ear who this person is and how I met them before. So there, you know, there are those kinds of tradeoffs. But it can lead to some risks. For instance, you know, if machines are making bad decisions on important things, like who should get parole or who gets credit or not, that could be really problematic. Worse yet, sometimes they have biases that are built in from the data sets they use. If the people you hired in the past all had a certain kind of ethnic or gender tilt to them, then if you use that as a training set and teach the machine how to hire people, it will learn the same biases that the humans had previously. And, of course, that can be perpetuated and scaled up in ways that we wouldn't like to see.

SARAH GREEN CARMICHAEL: There is a lot of hype right now around AI, or artificial intelligence. Some people say machine learning; other people come along and say: hold on, hold on, hold on, a lot of this is just software and we've been using it for a long time. So how do you kind of think through the different terms and what they really mean?

ERIK BRYNJOLFSSON: Well, there's a really important difference between the way the machines are working now versus previously. You know, Andy McAfee and I wrote this book The Second Machine Age, where we talked about having machines do more and more cognitive tasks. And for most of the past 30 or 40 years that's been done by us painstakingly programming, writing code for exactly what we want the machine to do. You know, if it's doing tax preparation: add up this number and multiply it by that number. And of course we had to understand exactly what the task was in order to specify it.

But now the new machine learning approaches literally have the machines learn on their own things that we don't know how to explain. Face recognition is a perfect example. It would be really hard for me to describe, you know, my mother's face, you know, how far apart are her eyes or what does her ear look like.

ERIK BRYNJOLFSSON: I can recognize it, but I couldn't really write code to do it. And the way the machines are working now is, instead of having us write the code, we give them lots and lots of examples. You know, here are pictures of my mom from different perspectives, or here are pictures of cats and dogs, or here's a piece of speech, you know, with the word yes and the word no. And if you give them enough examples the machine learning algorithms figure out the rules on their own.

That's a real breakthrough. It overcomes what we call Polanyi's paradox. Michael Polanyi, the polymath and philosopher from the 1960s, famously said, "We all know more than we can tell," but with machine learning we don't have to be able to tell or explain what to do. We just have to show examples. That change is what's opening up so many new applications for machines and allowing them to do a whole set of things that previously only humans could do.
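Brynjolfsson's point, that you show the machine labeled examples rather than writing explicit rules, can be made concrete with a small supervised-learning sketch. The snippet below is not from the interview; it assumes scikit-learn is installed, and the two made-up features and the tiny training set exist only to illustrate the idea.

```python
from sklearn.neighbors import KNeighborsClassifier

# Labeled examples: each feature vector is [roundness, ear_pointiness] on a 0-1
# scale, and each label is what a human tagged the image as. Real systems derive
# features from raw pixels and use millions of labeled examples.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.3, 0.8]]
y_train = ["muffin", "muffin", "puppy", "puppy"]

# No rules are written; the classifier generalizes from the labeled examples.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

print(model.predict([[0.85, 0.15], [0.25, 0.85]]))  # -> ['muffin' 'puppy']
```

The classifier is never told what distinguishes a muffin from a puppy; the labeled examples carry that information, which is the shift from explicit programming to training that Brynjolfsson describes.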

SARAH GREEN CARMICHAEL: So, it's interesting to think about kind of the human work that has to go into training the machines, like someone who would sit there literally looking at pictures of blueberry muffins and tagging them muffin, muffin, muffin so the machine, you know, learns that's not a Chihuahua, that's a blueberry muffin. Is that the kind of thing where in the future you could see that kind of rote machine-training work being kind of a low-paid dead-end job, whereas maybe that person once would have had a more interesting job but now the machine has the more interesting job?

ERIK BRYNJOLFSSON: I don't think that's going to be a big source of employment, but it is true there are places like Amazon's Mechanical Turk where thousands of people do exactly what you said, they tag images and label them. That's how ImageNet, the database of millions of images, got labeled. And so, there are people being hired to do that. Companies sometimes find that training machines by having humans tag the data is one way to proceed.

But often they can find ways of getting data that's already tagged in some way, that's generated from their enterprise resource planning system or from their call center. And if they're clever, that will lead to the creation of this tagged data. And I should back up a bit and say that one of machines' big weaknesses is that they really do need tagged data. That's the most powerful kind of algorithm, sometimes called supervised learning, where humans have in advance tagged and explained what the data means.

And then the machine learns from those examples and eventually can extrapolate to other kinds of examples. But unlike humans, they often need thousands or even millions of examples to do a good job, whereas, you know, a two-year-old probably would learn after one or two times what a cat was versus a dog; you wouldn't have to show them, you know, 10,000 pictures of a cat before they got it.

SARAH GREEN CARMICHAEL: Right. Given where we are with AI and machine learning right now, on balance, do you feel like this is something that is overhyped and people talk about it in too science-fiction terms, or is it something that's not quite hyped enough and actually people are underestimating what it could do in the relatively near future?

ERIK BRYNJOLFSSON: Well, it's actually both at the same time, if you can believe it. I think that people have unrealistic expectations about machines having all these general capabilities, kind of from watching science fiction like the Terminator. If a machine can understand Chinese characters, you might think it also could understand Chinese speech, and it could recommend a good Chinese restaurant, know a little bit about the Xing dynasty, and none of that would be true. A machine that can play expert chess can't even play checkers or Go or other games. So, in a way they're very narrow and fragile.

But on the other hand, I think the set of applications for those narrow capabilities is quite large. Using those supervised learning algorithms, I think there are a lot more specific tasks that could be done that we've only scratched the surface of, and because they've improved so much in the past five or 10 years, most of those opportunities have not yet really been explored or even discovered yet. There are a few places where the big giants like Google and Microsoft and Facebook have made rapid progress, but I think that there are literally tens of thousands of more narrow applications that small and medium businesses could start using machine learning for in their own areas.

SARAH GREEN CARMICHAEL: What are some examples of ways that companies are using this technology right now?

ERIK BRYNJOLFSSON: Well, one of my favorite ones I learned from my friend Sebastian Thrun. He's the founder of Udacity, the online learning company, which by the way is a good way to learn more about these technologies. But he found that when people were coming to his site and asking questions in the chat room, some of the salespeople were doing a really good job of getting them to the right course and closing the sale and others, well, not so much. This created a set of training data.

He and his grad student realized that if they took the transcripts they would see that certain sets of words in certain dialogues led to success and sales and others didn't. He fed that information into a machine learning algorithm and it started identifying which patterns of phrases and answers were the most successful.

But what happened next was, I think, especially interesting: instead of just trying to build a bot that would answer all the questions, they built a bot that would advise the human salespeople. So now when people go to the site the bot kind of looks over the shoulder of the human, and when it sees some of those key words it whispers into his or her ear: hey, you know, you might want to try this phrase, or you might want to point them to this particular course.

And that works well for the most common kinds of queries, but for the more obscure ones that the bot has never seen before, the human is much better. And this kind of partnership is a great example of an effective use of AI, and also of how you can use existing data to build a tagged data set that the supervised learning system benefits from.

SARAH GREEN CARMICHAEL: So how did these people feel about being coached by a bot?

ERIK BRYNJOLFSSON: Well, it's helped them close their sales, so it's made them more productive. Sebastian says it's about 50% more successful when they're using the bot. So I think it's been beneficial in helping them learn more rapidly than they would have if they just kind of stumbled along.

Going forward, I think this is an example of how the bots are often good at the more routine, repetitive kinds of tasks; the machines can do the ones that they have lots of data for. And the humans tend to excel at the more unusual tasks. For most of us, I think that's kind of a good trade-off. Most of us would prefer having kind of more interesting and varied work lives rather than doing the same thing over and over.

SARAH GREEN CARMICHAEL: So, sales is a form of knowledge work, right, and you sort of gave an example there. One of the big challenges in that kind of work is that it's really hard to scale up one person's productivity: if you are a law firm, for example, and you want to serve more clients, you have to hire more lawyers. It sounds like AI could be one way to finally get around that conundrum.

ERIK BRYNJOLFSSON: Yeah, AI certainly can be a big force multiplier. It's a great way of taking some of your best, you know, lawyers or doctors and having them explain how they go about doing things and give examples of successes, and the machine can learn from those and replicate it, or be combined with people who are already doing the jobs and, in a way, coach them or handle some of the cases that are most common.

SARAH GREEN CARMICHAEL: So, is it just about being more productive or did you see other examples of human machine collaboration that tackled different types of business challenges?

ERIK BRYNJOLFSSON: Well, in some cases it's a matter of being more productive; in many cases, a matter of doing the job better than you could before. So there are systems now that can help read medical images and diagnose cancer quite well. The best ones often are still combined with humans, because the machines make different kinds of mistakes than the humans do. The machine often will create what are called false positives, where it thinks there's cancer but there really isn't, and the humans are better at ruling those out. You know, maybe there was an eyelash on the image or something that was getting in the way.

And so, by having the machine first filter through all the images and say, hey, here are the ones that look really troubling, and then having a human look at those ones and focus more closely on the ones that are problematic, you end up getting much better outcomes than if that person had to look at all the images herself or himself and maybe overlook some potentially troubling cases.
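The triage pattern he describes, where the model clears the easy negatives and routes anything suspicious to a person, can be sketched in a few lines. Everything in this snippet is hypothetical: the scoring function, the threshold, and the scan names are stand-ins, not details from the interview.

```python
def triage(images, model_score, review_threshold=0.3):
    """Split images into auto-cleared and flagged-for-human-review piles.

    `model_score` is assumed to return a probability of cancer in [0, 1].
    A low threshold trades extra human work for fewer missed cases, which is
    the point of the human-plus-machine pairing described above.
    """
    auto_cleared, needs_review = [], []
    for image in images:
        score = model_score(image)
        if score >= review_threshold:
            needs_review.append((image, score))  # human rules out false positives
        else:
            auto_cleared.append(image)
    # Reviewers see only the flagged subset, worst-looking scans first.
    return auto_cleared, sorted(needs_review, key=lambda pair: -pair[1])

# Toy usage with a stand-in scoring function; a real model would be trained.
fake_scores = {"scan_a": 0.05, "scan_b": 0.72, "scan_c": 0.31}
cleared, flagged = triage(fake_scores, fake_scores.get)
print(cleared)   # ['scan_a']
print(flagged)   # [('scan_b', 0.72), ('scan_c', 0.31)]
```

Lowering the threshold sends more images to the human reviewer and catches more borderline cases; that trade-off is exactly what the pairing is meant to manage.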

SARAH GREEN CARMICHAEL: Why now? Because people predicted for a long time that AI was just around the corner, and it sounds like it's finally starting to happen and really make its way into businesses. Why are we seeing this finally start to happen right now?

ERIK BRYNJOLFSSON: Yes, that's a great question. It's really the combination of three forces that have come together. The first one is simply that we have much better computer power than we did before. So, Moore's Law, the doubling of computer power, is part of it. There are also specialized chips called GPUs and TPUs that are another tenfold or even a hundredfold faster than ordinary chips. As a result, training a system that might have taken a century or more if you had done it with 1990s computers can be done in a few days today.

And so obviously that opens up a whole new set of possibilities that just wouldn't have been practical before. The second big force is the explosion of digital data. Data is the lifeblood of these systems; you need it to train them. And now we have so many more digital images, digital transcripts, digital data from factory gauges keeping track of information, and that all can be fed into these systems to train them.

And as I said earlier, they need lots and lots of examples. Now we have digital examples in a way we didn't previously, and with the Internet of Things you can imagine there's going to be a lot more digital data going forward. And last but not least, there have been some significant improvements in the algorithms: the men and women working in these fields have improved on the basic algorithms. Some of them were first developed literally 30 years ago, but they've now been tweaked and improved, and by having faster computers and more data you can learn more rapidly what works and what doesn't work. When you put these three things together, computer power, more data, and better algorithms, you get sometimes as much as a millionfold improvement on some applications, for instance recognizing pedestrians as they cross the street, which of course is really important for applications like self-driving cars.

SARAH GREEN CARMICHAEL: If those are sort of the factors that are pushing us forward, what are some of the factors that might be inhibiting progress?

ERIK BRYNJOLFSSON: What's not holding us back is the technology; what is holding us back is the imagination of business executives to use these new tools in their businesses. You know, with every general-purpose technology, whether it's electricity or the internal combustion engine, the real power comes from thinking of new ways of organizing your factory, new ways of connecting to your customers, new business models. That's where the real value comes from. And one of the reasons we were so happy to write for Harvard Business Review was to reach out to people and help them be more creative about using these tools to change the way they do business. That's where the real value is.

SARAH GREEN CARMICHAEL: I feel like so much of the broader conversation about AI is, will this create jobs or destroy jobs? And I'm just wondering, is that a question that you get asked a lot, and are you sick of answering it?

ERIK BRYNJOLFSSON: Well, of course it gets asked a lot. And I'm not sick of answering it because it's really important. I think the biggest challenge for our society over the next 10 years is going to be how we handle the economic implications of these new technologies. And you introduced me in the beginning as a cautious optimist, I think you said, and I think that's about right. I think that if we handle this well, this can and should be the best thing that ever happened to humanity.

But I don't think it's automatic. I'm cautious about that. It's entirely possible for us not to invest in the kind of education and retraining of people, not to adopt the kinds of new policies that encourage business formation and new business models even. Income distribution has to be rethought, and tax policy: things like the earned income tax credit in the United States and similar wage subsidies in other countries.

ERIK BRYNJOLFSSON: We need to make a bunch of changes across the board at the policy level. Businesses need to rethink how they work. Individuals need to take personal responsibility for learning the new skills that are going to be needed going forward. If we do all those things I'm pretty optimistic.

But I wouldn't want people to become complacent, because already over the past 10 years a lot of people have been left behind by the digital revolution that we've had so far. And looking forward, I'd say we ain't seen nothing yet. We have incredibly powerful technologies, especially in artificial intelligence, that are opening up new possibilities. But I want us to think about how we can use technology to create shared prosperity for the many, not just the few.

SARAH GREEN CARMICHAEL: Are there tasks or jobs that machine learning, in your opinion, cant do or wont do?

ERIK BRYNJOLFSSON: Oh, there are so many. Just to be totally clear, most things machine learning can't do. It's able to do a few narrow areas really, really well, just like a calculator can do a few things really, really well, but humans are much more general, with a much broader set of skills, and the set of skills that humans can do is being encroached on.

Machines are taking over more and more tasks, and teaming up with humans on more and more tasks, but in particular, machines are not very good at very broad-scale creativity, you know. Being an entrepreneur or writing a novel or developing a new scientific theory or approach, those kinds of creativity are beyond what machines can do today, by and large.

Secondly, and perhaps with an even broader impact, is interpersonal skills, connecting with humans. You know, we're wired to trust and care for and be interested in other humans in a way that we aren't with machines.

So, whether it's coaching or sales or negotiation or caring for people, persuading people, those are all areas where humans have an edge. And I think there will be an explosion of new jobs, whether it's for personal coaches or trainers or team-oriented activities. I would love to see more people learning those kinds of softer skills that machines are not good at. That's where there will be a lot of jobs in the future.

SARAH GREEN CARMICHAEL: I was surprised to see in the article though, that some of these AI programs are actually surprisingly good at recognizing human emotions. I was really startled by that.

ERIK BRYNJOLFSSON: I have to be careful. One of the main things I learned working with Andy and going to visit all these places is never say never; any particular thing that one of us said, oh, this will never happen, you know, we'd find out that someone is working on it in a lab.

So my advice is that there are relative strengths and relative weaknesses, and emotional intelligence, I still think, is a relative strength of humans, but there are particular narrow applications where machines are improving quite rapidly. Affectiva, a company here in Boston, has gotten very good at reading emotions. That is part of what you need to do to be a good coach, to be a caring person; it's not the whole picture, but it is one piece of the interpersonal skills that machines are helping with.

SARAH GREEN CARMICHAEL: What do you see as the biggest risks with AI?

ERIK BRYNJOLFSSON: I think there are a few. One of the big risks is that these machine learning algorithms can have implicit biases, and they can be very hard to detect or correct. If the training data is biased, has some kind of racial or ethnic or other bias in it, then that can be perpetuated. And so, we need to be very careful about how we train the systems and what data we give them.

And it's especially important because they don't have the kind of explicit rules that earlier waves of technology had. So, it's hard to even know. It's unlikely to have a rule that says, you know, don't give loans to black people or whatever, but it may implicitly have its thumb on the scale in one way or the other if the training data were biased.

SARAH GREEN CARMICHAEL: Right. Because it might notice, for instance, that, statistically speaking, black people get turned down more for loans, that kind of thing.

ERIK BRYNJOLFSSON: Yeah, if the people who had made those decisions before were biased, and you use that for the training data, that could end up creating a biased training set. And you know, maybe nobody explicitly says that they were biased, but it sort of shows up in other subtle ways, based on, you know, the zip code that someone's coming from or their last name or their first name or whatever. So those would be subtle things that you need to be careful of.
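The mechanism he sketches here, historical bias leaking into a model through a proxy feature even though no explicit rule mentions the group, can be demonstrated on synthetic data. The example below is purely illustrative and assumes scikit-learn; the variable names, the 80% proxy correlation, and the biased approval rule are all invented for the demonstration.

```python
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Synthetic history: `group` is the protected attribute (never shown to the model),
# `zip_flag` is a proxy correlated with it, `income` is a legitimate signal, and
# `approved` reflects past decisions that penalized group 1.
rows = []
for _ in range(2000):
    group = random.randint(0, 1)
    zip_flag = group if random.random() < 0.8 else 1 - group  # 80% correlated proxy
    income = random.gauss(50, 10)
    approved = 1 if income > 45 and not (group == 1 and random.random() < 0.5) else 0
    rows.append((zip_flag, income, approved))

X = [[z, inc] for z, inc, _ in rows]  # `group` is deliberately excluded
y = [a for _, _, a in rows]

model = LogisticRegression(max_iter=1000).fit(X, y)
# A clearly negative zip_flag coefficient means the model absorbed the old bias
# through the proxy, even though no rule about the group was ever written down.
print("zip_flag coefficient:", round(model.coef_[0][0], 2))
print("income coefficient:  ", round(model.coef_[0][1], 2))
```

Even though `group` never appears in the training features, the zip-code proxy typically picks up a strongly negative coefficient: the implicit thumb on the scale Brynjolfsson warns about.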

The other thing is what we touched on earlier, just the whole question of what's happening with income inequality and opportunity as the machines get better at many kinds of tasks, you know, driving a truck or handling a call center. The people who had been doing those jobs need to find new things to do. And often those new jobs won't pay as well if we aren't careful. So that could be a real income hit. Already we see growing income inequality.

We have to be aggressive about thinking how we can create broadly shared prosperity. One of the things we did at MIT is we launched something called the Inclusive Innovation Challenge, which recognizes and rewards organizations that are using technology to create shared prosperity; they're innovating in ways that do that. I'd love to see more and more entrepreneurs think in that way, not just how they can create concentrated wealth, but how they can create broadly shared prosperity.

SARAH GREEN CARMICHAEL: Elon Musk has been out there saying artificial intelligence could be an existential threat to human beings. Other people have talked about fears that the machines could take over and turn against us. How do you feel about those kinds of concerns?

ERIK BRYNJOLFSSON: Well, like I said earlier, you can never say never, and, you know, as machines keep getting more and more powerful I can imagine them having enormous powers, especially as we delegate more of the operations of our critical infrastructure, our electricity and our water systems and our air traffic control and even our military operations, to them. But the reason I didn't list it is I don't see it as the most immediate risk right now. The technologies that are being rolled out right now have effects on bias and decision making, effects on jobs and income. But by and large they don't have those kinds of existential risks.

I think it's important that we have researchers working in those areas and thinking about them, but I wouldn't want to panic Congress or the people right now into doing something that would probably be counterproductive if we overreacted.

I think it's an area for research, but in terms of devoting billions of dollars of effort, I would put that towards education and retraining and handling bias, the things that are facing us right now and will be facing us for the next five and 10 years.

SARAH GREEN CARMICHAEL: What do you feel is the appropriate role of regulation as AI develops?

ERIK BRYNJOLFSSON: I think we need to be watchful, because there's the potential for AI to lead to more concentration of power and more concentration of wealth. The best antidote to that is competition.

And what we've seen in the tech industry for most of the past 10, 20, 30 years is that as one monopolist, whether it's IBM or Microsoft, gets a lot of power, another company comes along and knocks it off its perch. I remember teaching a class about 15 years ago where a speaker said, you know, Yahoo has search locked up, no one's ever going to displace Yahoo. So, you know, we need to be humble and realize that the giants of today face threats and could be overturned.

That said, if there becomes a sort of a stagnant loss of innovation and these companies have a stranglehold on markets and maybe have other adverse effects in areas like privacy, then it would be right for government to step in. My instinct right now would be sort of watchful waiting, keeping an eye on these companies and doing what we could to foster innovation and competition as the best way to protect consumers.

SARAH GREEN CARMICHAEL: So, if all of this still sounds quite futuristic to the average manager, if they're kind of like: OK, you know, this is sort of way outside of what I'm working on in my role, what are the sort of things that you'd advise people to keep in mind or think about?

ERIK BRYNJOLFSSON: Well, it starts with realizing this is not futuristic and way out there. There are lots of small and medium-sized companies that are learning how to apply this right now, whether it's, you know, sorting cucumbers to be more effective, somebody wrote an application that did that, or helping with recommendations online. There's a company I'm advising called Infinite Analytics that is giving customers better recommendations about what products they should be choosing, or helping with, you know, credit decisions.

There are so many areas where you can apply these technologies right now. You can take courses, or you can have people in your organization take courses, or you can hire people, at places like Udacity or fast.ai, my friend Jeremy Howard runs a great course in that area, and put it to work right away and start with something small and simple.

But definitely don't think of this as futuristic. Don't be put off by the science fiction movies, whether, you know, the Terminator or other AI shows. That's not what's going on. It's a bunch of very specific practical applications that are completely feasible in 2017.

SARAH GREEN CARMICHAEL: Erik, thanks so much for talking with us today about all of this.

ERIK BRYNJOLFSSON: It's been a real pleasure.

SARAH GREEN CARMICHAEL: That's Erik Brynjolfsson. He's the director of the MIT Initiative on the Digital Economy, and he's the co-author, with Andrew McAfee, of the new HBR article "The Business of Artificial Intelligence."

You can read their HBR article, read about how Facebook uses AI and machine learning in almost everything you see, and watch a video (shot in my own kitchen!) about how IBM's Watson uses AI to create new recipes. That's all at hbr.org/AI.

Thanks for listening to the HBR IdeaCast. I'm Sarah Green Carmichael.

Here is the original post:

How AI Is Already Changing Business - Harvard Business Review


Graphcore’s AI chips now backed by Atomico, DeepMind’s Hassabis – TechCrunch

Posted: at 12:16 pm

Is AI chipmaker Graphcore out to eat Nvidia's lunch? Co-founder and CEO Nigel Toon laughs at that interview opener, perhaps because he sold his previous company to the chipmaker back in 2011.

"I'm sure Nvidia will be successful as well," he ventures. "They're already being very successful in this market. And being a viable competitor and standing alongside them, I think that would be a worthy aim for ourselves."

Toon also flags what he couches as an interesting absence in the competitive landscape vis-a-vis other major players you'd expect to be there, e.g. Intel. (Though clearly Intel is spending to plug the gap.)

A recent report by the analyst firm Gartner suggests AI technologies will be in almost every software product by 2020. The race for more powerful hardware engines to underpin the machine-learning software tsunami is, very clearly, on.

"We started on this journey rather earlier than many other companies," says Toon. "We're probably two years ahead, so we've definitely got an opportunity to be one of the first people out with a solution that is really designed for this application. And because we're ahead we've been able to get the excitement and interest from some of these key innovators who are giving us the right feedback."

Bristol, UK-based Graphcore has just closed a $30 million Series B round, led by Atomico, fast-following a $32 million Series A in October 2016. It's building dedicated processing hardware plus a software framework for machine learning developers to accelerate building their own AI applications, with the stated aim of becoming the leader in the market for machine intelligence processors.

In a supporting statement, Atomico partner Siraj Khaliq, who is joining the Graphcore board, talks up its potential as being to accelerate the pace of innovation itself. "Graphcore's first IPU delivers one to two orders of magnitude more performance over the latest industry offerings, making it possible to develop new models with far less time waiting around for algorithms to finish running," he adds.

Toon says the company saw a lot of investor interest after uncloaking at the time of its Series A last October, hence it decided to do an earlier-than-planned Series B. "That would allow us to scale the company more quickly, support more customers, and just grow more quickly," he tells TechCrunch. "And it still gives us the option to raise more money next year to then really accelerate that ramp after we've got our product out."

The new funding brings on board some new high-profile angel investors, including DeepMind co-founder Demis Hassabis and Uber chief scientist Zoubin Ghahramani. So you can hazard a pretty educated guess as to which tech giants Graphcore might be working closely with during the development phase of its AI processing system (albeit Toon is quick to emphasize that angels such as Hassabis are investing in a personal capacity).

"We can't really make any statements about what Google might be doing," he adds. "We haven't announced any customers yet, but we're obviously working with a number of leading players here, and we've got the support from these individuals, from which you can infer there's quite a lot of interest in what we're doing."

Other angels joining the Series B include OpenAI's Greg Brockman, Ilya Sutskever, Pieter Abbeel and Scott Gray, while existing Graphcore investors Amadeus Capital Partners, Robert Bosch Venture Capital, C4 Ventures, Dell Technologies Capital, Draper Esprit, Foundation Capital, Pitango and Samsung Catalyst Fund also participated in the round.

Commenting in a statement, Uber's Ghahramani argues that current processing hardware is holding back the development of alternative machine learning approaches that, he suggests, could contribute to radical leaps forward in machine intelligence.

"Deep neural networks have allowed us to make massive progress over the last few years, but there are also many other machine learning approaches," he says. "A new type of hardware that can support and combine alternative techniques, together with deep neural networks, will have a massive impact."

Graphcore has raised around $60 million to date, with Toon saying its now 60-strong team has been working in earnest on the business for a full three years, though the company's origins stretch back as far as 2013.

Co-founders Nigel Toon (CEO, left) and Simon Knowles (CTO, right)

In 2011 the co-founders sold their previous company, Icera, which did baseband processing for 2G, 3G and 4G cellular technology for mobile comms, to Nvidia. "After selling that company we started thinking about this problem and this opportunity. We started talking to some of the leading innovators in the space and started to put a team together around about 2013," he explains.

Graphcore is building what it calls an IPU, aka an intelligence processing unit: dedicated processing hardware designed for machine learning tasks, as opposed to the serendipitously repurposed GPUs that have been helping to drive the AI boom thus far, or indeed the vast clusters of CPUs needed (but not well suited) for such intensive processing.

It's also building graph-framework software for interfacing with the hardware, called Poplar, designed to mesh with different machine learning frameworks so that developers can easily tap into a system that Graphcore claims will increase the performance of both machine learning training and inference by 10x to 100x vs. the fastest systems today.

Toon says it's hoping to get the IPU into the hands of early-access customers by the end of the year. "That will be in a system form," he adds.

"Although at the heart of what we're doing is we're building a processor, we're building our own chip (leading-edge process, 16 nanometer), we're actually going to deliver that as a system solution. So we'll deliver PCI Express cards, and we'll actually put that into a chassis so that you can put clusters of these IPUs all working together, to make it easy for people to use."

"Through next year we'll be rolling out to a broader number of customers. And hoping to get our technology into some of the larger cloud environments as well, so it's available to a broad number of developers."

Discussing the difference between the design of its IPU and the GPUs that are also being used to power machine learning, he sums it up thus: "GPUs are kind of rigid, locked together, everything doing the same thing all at the same time, whereas we have thousands of processors all doing separate things, all working together across the machine learning task."

"The challenge that [processing via IPUs] throws up is to actually get those processors to work together, to be able to share the information that they need to share between them, to schedule the exchange of information between the processors, and also to create a software environment that's easy for people to program. That's really where the complexity lies, and that's really what we have set out to solve."

"I think we've got some fairly elegant solutions to those problems," he adds. "And that's really what's causing the interest around what we're doing."
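The contrast Toon is drawing is between lockstep data-parallelism and many independent workers that must then exchange results. The short Python sketch below is purely illustrative and mine, not Graphcore's: it uses standard NumPy and the standard-library thread pool to show, in miniature, the difference between applying one operation to every element at once and letting each worker run its own logic before the results are gathered.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.arange(8, dtype=np.float64)

# GPU-style: one operation applied to every element in lockstep.
lockstep_result = np.sqrt(data)

# Closer to the many-independent-processors picture: each worker runs
# its own branch of logic, and the results are collected afterwards.
def independent_task(x: float) -> float:
    # Hypothetical per-element work; the branching is the point.
    return x ** 2 if x % 2 == 0 else x ** 0.5

with ThreadPoolExecutor(max_workers=8) as pool:
    independent_results = list(pool.map(independent_task, data))

print(lockstep_result)
print(independent_results)

The hard part Toon describes, scheduling the exchange of information between thousands of such workers and hiding that complexity behind an easy programming model, is exactly what this toy example leaves out.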

Graphcore's team is aiming for a completely seamless interface, via its graph framework, between its hardware and widely used high-level machine learning frameworks, including TensorFlow, Caffe2, MXNet and PyTorch.

"You use the same environments, you write exactly the same model, and you feed it through what we call Poplar [a C++ framework]," he notes. "In most cases that will be completely seamless."

He confirms, though, that developers working further outside the current AI mainstream, say by trying to create new neural network structures, or by working with other machine learning techniques such as decision trees or Markov random fields, may need to make some manual modifications to make use of its IPUs.

"In those cases there might be some primitives or some library elements that they need to modify," he notes. "The libraries we provide are all open, so they can just modify something, change it for their own purposes."
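To make the "write exactly the same model" workflow concrete, here is a minimal sketch in PyTorch. It illustrates the idea under my own assumptions rather than Graphcore's actual Poplar or framework-integration API: the model uses only standard PyTorch calls, and the target_device argument stands in for whatever backend string a vendor integration would expose (stock PyTorch ships "cpu" and "cuda"; the "ipu" mentioned in the comment is hypothetical).

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A small feed-forward model written once in a standard framework."""
    def __init__(self, in_features: int = 784, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def run_once(target_device: str = "cpu") -> torch.Tensor:
    # The model code never changes; only the device/backend string does.
    # "cpu" and "cuda" exist in stock PyTorch; a hypothetical "ipu"
    # backend would slot in here if a vendor integration provided one.
    device = torch.device(target_device)
    model = TinyClassifier().to(device)
    batch = torch.randn(32, 784, device=device)
    return model(batch)

if __name__ == "__main__":
    print(run_once("cpu").shape)  # expected: torch.Size([32, 10])

The point is not the model itself but that the framework-level code stays identical while the execution target changes, which is the "seamless" claim Graphcore is making for Poplar.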

The apparently insatiable demand for machine learning within the tech industry is being driven, at least in part, by a major shift in the type of data that needs to be understood, from text to pictures and video, says Toon. That means there are increasing numbers of companies that really need machine learning. "It's the only way they can get their head around and understand what this sort of unstructured data is that's sitting on their website," he argues.

Beyond that, he points to various emerging technologies and complex scientific challenges it's hoped could also benefit from accelerated development of AI, from autonomous cars to drug discovery with better medical outcomes.

"A lot of cancer drugs are very invasive and have terrible side effects, so there's all kinds of areas where this technology can have a real impact," he suggests. "People look at this and think it's going to take 20 years [for AI-powered technologies to work], but if you've got the right hardware available [development could be sped up]."

"Look at how quickly Google Translate has got better using machine learning, and that same acceleration I think can apply to some of these very interesting and important areas as well."

In a supporting statement, DeepMind's Hassabis goes so far as to suggest that dedicated AI processing hardware might also offer a leg up toward the sci-fi holy grail of developing artificial general intelligence (vs. the narrower AIs that comprise the current cutting edge).

"Building systems capable of general artificial intelligence means developing algorithms that can learn from raw data and generalize this learning across a wide range of tasks. This requires a lot of processing power, and the innovative architecture underpinning Graphcore's processors holds a huge amount of promise," he says.

Read more from the original source:

Graphcore's AI chips now backed by Atomico, DeepMind's Hassabis - TechCrunch


Google’s Newest AI is Turning Street View Images into Landscape Art – Futurism

Posted: at 12:16 pm

In Brief: Google engineers have created an artificial intelligence (AI) that is capable of turning Google Street View images into professional-quality artistic portraits. The AI chooses and crops the image, alters both light and coloration, and then applies an appropriate filter.

Google Art

Most of us are probably familiar with Google Street View, a feature of Google Maps that allows users to see actual images of the areas they're looking up. It's both a useful navigational feature and one that allows people to explore far-off regions just for fun. Engineers at Google are now taking these Street View images one step further with the help of artificial intelligence (AI).

Hui Feng is one of several software engineers who are using machine learning techniques to teach a neural network how to scan Street View in search of exceptionally beautiful images. This AI then, on its own, mimics the workflow of a professional photographer.

The AI system acts as both artist and photo editor, recognizing beauty and the specific aspects that make for a good photograph. Even though aesthetic quality is a subjective matter, the AI proved successful, creating professional-quality imagery from Street View images that the system itself located.

Google's many different AI programs have been exploring a wide variety of potential applications for the technology. From recent dabbling in online Go playing to improving job hunting, and even building AI that designs other AI better than Google's own engineers can, Google's AI work has been at the forefront of the field.

But AI technologies are progressing faster and further than many expected, so much so that some AI systems, like the one described here, are capable of creating art. So, while robots will never make humans completely obsolete in artistic endeavors, this step forward marks a new era of technology.

Read the original:

Google's Newest AI is Turning Street View Images into Landscape Art - Futurism


Have We Reached Peak AI Hysteria? – Niskanen Center (press release) (blog)

Posted: at 12:16 pm

July 21, 2017 by Ryan Hagemann

At the recent annual meeting of the National Governors Association, Elon Musk spoke with his usual cavalier optimism about the future of technology and innovation. From solar power to our place among the stars, humanity's future looks pretty bright, according to Musk. But he was particularly dour on one emerging technology that supposedly poses an existential threat to humankind: artificial intelligence.

Musk called for strict, preemptive regulations on developments in AI, referencing numerous hypothetical doomsaying scenarios that might emerge if we go too far too fast. It's not the first time Musk has said that AI could portend a Terminator-style future, but it does seem to be the first time he's called for such stringent controls on the technology. And he's not alone.

In the preface to his book Superintelligence, Nick Bostrom contends that developing AI is "quite possibly the most important and most daunting challenge humanity has ever faced. And, whether we succeed or fail, it is probably the last challenge we will ever face." Even Stephen Hawking has jumped on the panic wagon.

These concerns aren't unique to innovators, scientists, and academics. A Morning Consult poll found that a significant majority of Americans supported both domestic and international regulations on AI.

All of this suggests that we are in the midst of a full-blown AI techno-panic. Fear of mass unemployment from automation and public-safety concerns over autonomous vehicles have only exacerbated the growing tension between man and machine.

Luckily, if history is any guide, the height of this hysteria means we're probably on the cusp of a period of deflating dread. New emerging technologies often stoke frenzied fears over worst-case scenarios, at least at the beginning. These concerns eventually rise to the point of peak alarm, followed by a gradual hollowing out of the panic. Eventually, the technologies that were once seen as harbingers of the end times become mundane, common, and indispensable parts of our daily lives. Look no further than the early days of the automobile, RFID chips, and the Internet; so too will it be with AI.

Of course, detractors will argue that we should hedge against worst-possible outcomes, especially if the costs are potentially civilization-ending. After all, if there's something the government could do to minimize the costs while maximizing the benefits of AI, then policymakers should be all over it. So what's the solution?

Gov. Doug Ducey (R-AZ) asked that very question: "You've given some of these examples of how AI can be an existential threat, but I still don't understand, as policymakers, what type of regulations, beyond 'slow down,' which typically policymakers don't get in front of entrepreneurs or innovators, should be enacted." Musk's response? First, government needs to gain insight by standing up an agency to make sure the situation is understood. Then put in place regulations to protect public safety. That's it. Well, not quite.

The government has, in fact, already taken a stab at whether such an approach would be an ideal treatment of this technology. Last year, the Obama administration's Office of Science and Technology Policy released a report on the future of AI, derived from hundreds of comments from industry, civil society, technical experts, academics, and researchers.

While the report recognized the need for government to be privy to ongoing developments, its recommendations were largely benign, and it certainly didn't call for preemptive bans or regulatory approvals for AI. In fact, it concluded that it was "very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years."

In short, put off those end-of-the-world parties, because AI isn't going to snuff out civilization any time soon. Embracing preemptive regulations, however, could simply smother domestic innovation in this field.

Despite Musk's claims, firms faced with such rules would simply outsource their research and development elsewhere. Global innovation arbitrage is a very real phenomenon in an age of abundant interconnectivity and capital that can move like quicksilver across national boundaries. AI research is even less constrained by such barriers than most technologies, especially in an era of cloud computing and ever-cheaper computer processing, to say nothing of the rise of quantum computing.

Musk's solution to AI is uncharacteristically underwhelming. New federal agencies that impose precautionary regulations on AI aren't going to chart a better course to the future, any more than preemptive regulations on Google would have paved the way to our current age of information abundance.

Musk, of all people, should know the future is always rife with uncertainty; after all, he helps construct it with each new revolutionary undertaking. Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome. Would the world have been a better place if the public good had demanded even more stringent regulations for commercial space launch or autopilot features? That's unlikely, and, notwithstanding Musk's apprehensions, the same is probably true for AI.

Excerpt from:

Have We Reached Peak AI Hysteria? - Niskanen Center (press release) (blog)
