
Surgery in virtual reality: How VR could give trainee doctors the feel of real patients – ZDNet

Posted: June 1, 2017 at 10:39 pm

A virtual operating theatre is helping train up surgeons on new procedures.

Virtual reality is often touted as a way of creating fantasy universes, but it could also turn out to be an effective way of teaching skills that are hard to practice in the real world.

Take training up the doctors of tomorrow, for example. US university Case Western has already announced it plans to do away with its anatomy labs, and the cadavers that go with them, and teach medical students with Microsoft's HoloLens 'mixed reality' system instead. Aspiring doctors will be able to wear HoloLens headsets, and view the different layers of a body -- skin, muscle, blood vessels, and so on -- in 3D.

But going one step further, one UK company is trying to recreate the hands-on aspects of surgery in a VR setting, allowing students to get a sense of how the human body feels using haptic feedback.

Fundamental VR, based in London and Guildford, has added a haptics element to virtual reality to allow medics to train without having to test out their nascent skills on an actual patient.

The system combines the HoloLens headset and the company's software with a stylus connected to a standard-issue mechanical arm.

The stylus appears as a syringe in the VR world the wearer sees, with one button to empty the syringe, and another to refill it.

Moving the stylus in the real world moves the syringe in the simulation, and when the virtual needle meets the virtual skin, flesh, or bone, the varying resistance of the material is transmitted through the stylus to the user, giving them a powerful facsimile of a real-live body.

The idea is that encountering different elements of the body -- like fat or bone -- should feel very different.

The first system, set up to resemble a total knee arthroplasty, was custom-built for the drug company Pacira to teach clinicians how to perform a procedure using one of its products, an anaesthetic called Exparel.

Unlike most traditional anaesthetics, where a larger dose is injected in one go and spreads out widely from the injection site, Exparel is injected in several doses and stays largely where it's put. For some surgeons, the change in procedure was difficult to grasp, and so the VR teaching tool was born.

The imagery for the system was created by taking a series of photos of a knee to build up its 3D counterpart. Off-the-shelf haptic hardware is used to stand in for the syringe in the Pacira system, but could equally act as any medical tool that's needed.

Surgeons weigh in

Building a system that could faithfully recreate the experience of surgery required a mixture of human and technological smarts. In order to build the VR setup for knee replacement surgery, the company canvassed the opinions of orthopaedic surgeons on the steps that make up each procedure.

"Surgery is about science, but also about art, and where there's art, there's opinion. Getting to a common standard where people agree what's the right way to do that and on best practice, that took some time. Once we had got that, we were ready to start embracing some of the challenges of texture and tissue types and how those change throughout the procedure," Richard Vincent, cofounder of Fundamental VR, told ZDNet.

Next, the surgeons were enlisted to help convert the real-life experience of surgery into a virtual version, with the company's haptics development engine bridging the real and VR worlds.

"We built a calibration tool: it's the core of our software that allows us to quickly translate what are quite difficult things to communicate into numbers... we started with people saying 'it's like sticking a needle into an orange' or 'it's like chicken', then you can basically adjust it in real time until they agree it's how they feel and average that out," Vincent said.

Dr Stan Dysart, an orthopaedic surgeon who specialises in joint replacements at Georgia-based Pinnacle Orthopaedics, was among the surgeons who contributed their first-person perspective of knee operations to help the system recreate the authentic feel of surgery, assigning each element of the human body a number that corresponds to a certain texture.

"The haptic device has a scoring system, and I helped them decide what a needle feels like in capsule, what it feels like in muscle, in fat, in periosteum, and what it feels like on bone," he said. Fat, for example, is extremely forgiving, while the capsule has a fibrous, plastic-like texture.

"The capsule has a certain resistance, and when you go through the capsule, resistance releases, so you can score that -- you can score that [level of haptic] feedback, and score a different feedback for every part of the knee. You give it a number, and computers understand numbers -- the higher the number, the greater the resistance the surgeon will feel," Dysart said.

Once the surgeons have agreed on the haptic-feedback rating for each layer of the body, from muscle to bone, the haptics system can translate that back into the level of feedback the VR wearer will feel when they apply the virtual syringe -- a matter of balancing the amount of processing that the scenario needs with the abilities of the GPU underpinning the system to produce a smooth experience for the user.
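The scoring scheme Dysart describes can be sketched in a few lines of code. This is purely illustrative: the tissue names come from the article, but the numeric scores, the force formula, and every identifier below are invented for the sketch, not Fundamental VR's actual engine.

```python
# Illustrative sketch of the tissue-scoring idea: each layer gets an
# agreed resistance number, and the haptics loop scales the force fed
# back through the stylus accordingly. Scores and the force model are
# hypothetical, not Fundamental VR's real calibration data.

# Agreed-upon resistance scores per tissue layer (higher = more resistance)
TISSUE_RESISTANCE = {
    "fat": 1,         # extremely forgiving
    "muscle": 3,
    "capsule": 6,     # fibrous, plastic-like
    "periosteum": 8,
    "bone": 10,
}

MAX_FORCE_N = 3.3  # assumed peak force of a desktop haptic arm, in newtons

def feedback_force(tissue: str, needle_speed: float) -> float:
    """Map a tissue's resistance score to a stylus force, scaled by
    how hard the virtual needle is being pushed (0.0 to 1.0)."""
    score = TISSUE_RESISTANCE[tissue]
    force = (score / 10) * MAX_FORCE_N * min(needle_speed, 1.0)
    return round(force, 2)

# Punching through the capsule into fat: the drop in returned force
# models the 'release' of resistance the surgeon describes.
print(feedback_force("capsule", 1.0))  # 1.98
print(feedback_force("fat", 1.0))      # 0.33
```

The key design point is the one Dysart makes: once every layer is reduced to a number, the renderer only needs a monotonic mapping from score to force, and the "release" when the needle breaks through the capsule falls out of the score difference automatically.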

The total knee arthroplasty system is already being used by surgeons in centres across the US, and it's helping them refine their techniques, according to Dysart.

"Surgeons love it. They enjoy the experience, they enjoy practicing without potentially damaging a live patient. That's where it's important. Everything we do in live surgery has a consequence -- how deep do you cut? where do you cut? where do you inject? -- because there are nerves and arteries all about the knee.

"In virtual reality, if you plunge the needle too deeply, nothing is injured. You realise you've done it incorrectly, and you can do it over and over until you have the right technique. That's the beauty of this," he said.

Alongside the total knee arthroplasty, Fundamental VR has three more custom setups in the pipeline, including a soft tissue procedure and a spine procedure, which it expects will go live at some point this summer.

Teaching tool

Fundamental VR is already talking to educational institutions about how haptics-based systems could be used to teach students to improve their skills or help established doctors learn new procedures before they try them out on the wards. For now, Fundamental VR is concentrating on the US market, though it has had conversations with teaching facilities both in London and abroad.

As well as building more specific one-off systems for clients in future, the company also expects to create a library of common procedures that can be accessed on a subscription basis. Along with 'standard' anatomy, the company could potentially create variants to introduce students to some of the rarer anatomical variations or conditions.

"[Removal of] the appendix is still the most performed operation, so having a better way of teaching that would be useful for lots of people, but on the flip side, there's a lot of opportunity [for doctors] to be around that and observe that," Fundamental VR's Vincent said.

"But if you go into neurology, there may be something you only see three times a year, but it's a life-and-death situation. The number of people that need that training is much less and it might be harder to make the business case, but the human case is much stronger," he continued.

While haptics systems might not go over well with all surgeons -- some more senior clinicians found the simulations a bit too close to computer gaming -- the company prefers to liken them to the way pilots use flight simulators.

"We go to lots of conferences where we talk to lots of surgeons about how, say, when you face this bleed at this moment, and you've got five minutes to deal with it, it's never going to make that a less traumatic moment when it happens, but if you go through a simulator that gets you close to it a few times, that has got to be a good thing," Vincent explained.

Practicing on VR, not patients

Once mixed and virtual reality become cheaper and more common, haptics and VR could be used to create models of individual patients before they undergo surgery.

"If you could create from those scans something where we could share the kidney, move it around, agree how to get in there, what's the plan, how do we make the surgery the quickest and most effective, that would be good for patient safety," Chris Scattergood, Fundamental VR's co-founder, said.

The future of medical VR, then, will be a mix of teaching students and professionals how to do high-volume, routine operations of the kind that are done hour after hour in hospitals across the world, and helping them understand niche procedures that clinicians at the highest level may only see once or twice in their lives.

Either way, while doctors are practising the skills they need to perform the procedures, they'll be learning virtually, making their mistakes on a computer system and perfecting their techniques long before they use them on their patients.


Virtual Reality for Decommissioning Nuclear Reactors – R & D Magazine

Posted: at 10:39 pm

Safely decommissioning any nuclear reactor is a challenge. However, how do you decommission a Cold War-era production nuclear reactor that's more than 60 years old? This is the problem that engineers are facing at the Savannah River Site (SRS), a 310-square-mile Department of Energy site in rural South Carolina constructed in 1952 to help the U.S. produce nuclear weapons. The five reactors at SRS, known as R, P, K, L, and C, were once used to produce plutonium and tritium. When the Cold War ended, their products were no longer needed, and the last of them ceased operation in 1992. But the story doesn't end there. Closing nuclear reactors is a huge job that must be done properly, and this is the mission of the DOE Environmental Management Office. The work continues with planning for the decommissioning of C Reactor.

What lies inside?

The P and R reactors were decommissioned simultaneously. The process included the removal of millions of gallons of water and the pouring of over 200,000 cubic yards of grout. To assist in the planning of this process, engineers and designers at Savannah River National Laboratory (SRNL) reviewed thousands of construction drawings for the buildings and key pieces of equipment. The team quickly realized it was difficult to fully understand what was inside the reactors because the drawings were a guide for construction, organized by phase of construction and craft. This meant that there was no real map for what was inside the building, as there was no single drawing that could provide all of the relevant information for any given room.

To help provide the decommissioning team with a sense of space inside the reactors, the SRNL team created 3D CAD models and 3D-printed models of the building structures and key equipment. Once completed, the printed models helped the team understand the buildings better because they presented the layers of data in the way humans normally process data: in three dimensions. Even engineers with years of experience need to interpret two-dimensional drawings into a 3D image. When the information is spread across so many drawings, interpreting the data becomes a serious challenge.

The 3D-printed models also improved the safety of the decommissioning teams on the ground. Every entry workers made into the facilities exposed them to various dangers: tripping hazards, heat stress, and radiation exposure. Having models available for review offsite reduced the number of walkdowns required in the actual buildings and allowed the teams to plan movements more effectively before entering the facilities.


The 5 Virtual Reality Experiences to Try on Your Phone – TIME

Posted: at 10:39 pm

No need to attend festivals or buy expensive viewing gear to experience some of the most moving virtual reality documentaries; in fact, many can be enjoyed from the comfort of one's living room, provided you have a smartphone (ideally of the latest generation) and a good internet connection or data plan.

Though a headset, even a do-it-yourself cardboard one, is useful to block out your surroundings and immerse yourself more fully in the world, there is also something to be said for trying them as 360-degree experiences. The juxtaposition between your world and the one on your device can create stirring moments. When a view of a destroyed street in Syria lines up with your hallway, it is hard not to project yourself into it and think of what it would feel like to open your door onto a war zone. It brings the story home.

Since the New York Times launched its NYT VR app with fanfare in November 2015, sending Google Cardboard viewers to over a million of its subscribers, several media organizations have followed suit. Many developed their own applications (DiscoveryVR, LIFE VR, WSJ VR, for instance); some, like The Guardian with 6x9, built an app dedicated to a single experience; while others partner with existing VR companies.

But prominent members of the fourth estate are not the only ones creating compelling content. Tech companies, film studios and individuals are also using the latest innovations to share the stories that matter to them.

Here are a few of the most recent productions that have caught our attention.

Under the Cracked Sky by The New York Times On the edge of the world, at McMurdo Station in Antarctica, a group of researchers monitors life under the ice. Their job involves diving through a small hole into frigid waters, the clearest in the world. Two of them, Rob Robbins and Steven Rupp, invite you to join them thanks to VR.

To give the impression that you're swimming with them rather than being carried by them, the New York Times team provided the divers with a customized underwater rig strapped to a nine-foot pole. This way the diver handling it would recede into the background and make way for majestic and unparalleled views of frozen seawater stalactites, ice caves and rocky black seabed. "We told them: essentially you're swimming with a person's head down there, so act accordingly: avoid sudden movements, twisting and turning, or changing speed too quickly," explains Graham Roberts, one of the producers.

They recorded several dives over the course of one week, which were then edited into one mesmerizing and illuminating experience. Much of the time is spent gazing upwards, marveling at the light streaming through the ice while also considering how dangerous such a dive is -- the way in is also the only way out -- and looking for the cheeky seals whose calls you can distinctly hear around you. "It checked all the boxes for a VR project," adds Roberts. "It takes people somewhere they couldn't otherwise go, it deals with an important topic, climate change, and it provided us with the opportunity to record unique imagery."

Time: 9 minutes App: NYT VR

The Protectors: Walk in the Ranger's Shoes by National Geographic The numbers associated with elephant poaching are staggering. The Great Elephant Census recorded a population drop of 30% between 2007 and 2014, to just over 350,000 animals. The decline is mostly due to poaching, which claims a life every 15 minutes. Simply put, at this rate, the large mammal could be extinct within the next 12 years.

To stop the massacre, park rangers risk their lives. In Garamba National Park in the Democratic Republic of Congo, for instance, nineteen of them have been killed in action over the past decade. "These men are unassuming heroes and we wanted to tell their stories in a way that is multifaceted. As you journey into the savannah, you're also journeying deeper and deeper into their minds and psyches," says Imraan Ismail, who worked with Oscar-winning film director Kathryn Bigelow on this project.

Embedded with these wildlife watchmen, he filmed their daily lives from their time at home with loved ones, to their swift training and the tense patrols. He asked them about their relationship to the animals, to each other, and to their most often invisible enemies. The immersive experience, which at times was filmed by the rangers themselves as they trail elephants and go after poachers, gives you a sense of how unnerving it is to move through the bush when danger lurks all around.

Time: 8 minutes App: Within

Capturing Everest by LIFE VR Many have tried to convey what it is like to climb Mount Everest, the highest peak on Earth. Adventurer Jon Krakauer described it in words in Into Thin Air, blind mountaineer Erik Weihenmayer had his ascent filmed for Farther Than the Eye Can See, Liam Neeson narrated an IMAX documentary, and the list goes on. It was only a matter of time before people took VR cameras to the top of the world. So it comes as no surprise that the LIFE VR team, in association with Sports Illustrated, would try its hand at it too.

While the footage was already shot when it fell into their hands, they're the ones who turned it into a mini-series that follows the adventure of Jeff Glasbrenner, who lost his leg in a tractor accident as a child, and Lisa Thompson, who was recovering from cancer. "There was a lot to communicate: the inspiring journeys of Jeff and Lisa, the dangers associated with the climb, the long periods of waiting for the conditions to be favorable, the life at basecamp, the importance of the Sherpas, etc.," says Mia Tramz, Managing Editor at LIFE VR [LIFE VR is a Time Inc. company]. "It's a story about climbing Everest, but it's also one about human nature."

Each of the four chapters focuses on a different challenge: getting ready, making it to basecamp, navigating the treacherous Khumbu Icefall and reaching the top. "Everyone wants to see what it's like to get to the summit," says Glasbrenner. "But, to me, the most representative scenes are those in the tents. You have to stay motivated while waiting for the conditions to allow you to continue. You miss your family and the comforts of home while also battling self-doubt." Though it helped his family and friends better understand exactly how much of a feat it was to reach the peak, he also acknowledges that some experiences are impossible to capture, especially how the lack of oxygen makes everything so much more difficult -- even putting your shoes on is an effort.

Time: 4 episodes of approximately 9 minutes each App: LIFE VR

Step to the Line by Ricardo Laganaro with VR for Good Inside a California maximum-security prison, inmates and volunteers face one another. A facilitator from Defy Ventures, a training program for currently and formerly incarcerated Americans, asks those who relate to the statements she reads to move forward. "I heard gunshots in my neighborhood growing up": most prisoners take a step. "I've earned a four-year college degree": the tables turn. "I've done criminal things for which I could have been arrested, but did not get arrested": most of the people present step to the line and shake hands. The 360-degree camera is set between the two rows, putting you in the middle of this social experience.

This sets the stage for us to meet Trebian "Tre" Ward, one of the convicts. As Ricardo Laganaro, who took part in Oculus' VR for Good initiative that paired filmmakers with non-profit organizations to explore immersive technology's promise to foster empathy, was developing the project, he realized that there are a lot of misconceptions regarding what it is like to be in prison. "We think we know what it's like because of all the movies," says the Brazilian artist. "But you don't, actually. Especially the cell -- it's really different from what you see in films that portray it as an empty space. In VR you can look around and see the wardrobe, the cabinet, and the belongings of the two inmates that share it. There's a lot in there. My main goal was to provoke a transformation of the viewer's opinion. I want him to move from being scared of the guy, to understanding a little bit of his past and current struggle, to cheering for him and thinking about the future, not what he did anymore." Mission accomplished.

Time: 11 minutes App: Facebook

The People's House by Félix & Paul For all those who miss seeing Barack Obama in the White House, your prayers have been answered. Thanks to The People's House, a project by the Montreal studio Félix & Paul, you can visit America's most famous home with the 44th President and First Lady as your guides.

Thanks to a custom-designed robotic platform, it feels as if you're moving seamlessly through the different spaces as your prestigious docents share historical tidbits and personal anecdotes about 23 of the rooms.

Filmed over five days at the end of Obama's tenure, the immersive experience is an opportunity for the former First Family to reflect on their time at 1600 Pennsylvania Avenue. You learn that Obama's first impression of the Oval Office was that it "wasn't as big as I imagined it on television," while it took Michelle months to feel like she was at home rather than in a museum.

Time: 22 minutes App: GearVR and YouTube

Laurence Butet-Roch is a freelance writer, photo editor and photographer based in Toronto, Canada. She is a member of the Boreal Collective.


The next big leap in AI could come from warehouse robots – The Verge

Posted: at 10:39 pm

Ask Geordie Rose and Suzanne Gildert, co-founders of the startup Kindred, about their company's philosophy, and they'll describe a bold vision of the future: machines with human-level intelligence. Rose says these will be perhaps the most transformative inventions in history, and they aren't far away. More intriguing than this prediction is Kindred's proposed path for achieving it. Unlike some of the most cash-flush corporations in Silicon Valley, Kindred is focusing not on chatbots or game-playing programs, but on automating physical robots.

Gildert, a physicist who conceived Kindred in 2013 while working with Rose at quantum computing company D-Wave, thinks giving AI a physical body is the only way to make real progress toward a true thinking machine. "If you want to build intelligence that conceptually thinks in the same way a human does, it needs to have a similar sensory motor as humans do," Gildert says. The trick to achieving this, she thinks, is to train robots by having them collaborate with humans in the physical world. Rose, who co-founded D-Wave in 1999, stepped back from his role as chief technology officer to work on Kindred with Gildert.

Kindred wants to train robots by having them collaborate with humans in the physical world

The first step toward their new shared goal is an industrial warehouse robot called the Orb. It's a robotic arm that sits inside a hexagonal glass encasement, equipped with a bevy of sensors to help it see, feel, and even hear its surroundings. The arm is operated using a mix of human control and automated software. Because so many warehouse workers today spend a significant amount of time sorting products and scanning barcodes, Kindred developed a robotic arm that can do some of those steps automatically. Meanwhile, humans step in when needed to manually operate the robot for tasks that are difficult for machines, like gripping a single product from a cluster of different items.

Workers can even operate the arm remotely using an off-the-shelf HTC Vive headset and virtual reality motion controllers. It turns out that VR is great for gathering data on depth and other information humans intuitively use to grasp objects.

Kindred is now focused on getting its finished Orb into warehouses, where it can begin learning at an accelerated pace by sorting vastly different products and observing human operators. Because the company gathers data every time a human uses the Orb, engineers are able to improve its software over time using techniques such as reinforcement learning, which refines behaviour through repeated trials and feedback on their outcomes. Down the line, the Orb should slowly take over more responsibility and, ideally, learn to perform new tasks.
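To make the reinforcement-learning idea concrete, here is a toy sketch of learning through repetition. Everything in it is invented for illustration: the grip-width actions, the simulated success rates, and the epsilon-greedy value updates are a generic textbook pattern, not Kindred's software.

```python
import random

# Toy reinforcement-learning loop in the spirit described above: try an
# action, observe success or failure, and nudge the action's estimated
# value toward the outcome. All parameters here are hypothetical.

random.seed(0)

# Estimated success value of each candidate grip width (cm), learned online
q_values = {width: 0.0 for width in (2, 4, 6, 8)}
ALPHA = 0.2      # learning rate
EPSILON = 0.1    # exploration probability

def attempt_grip(width):
    """Simulated environment: grips near 6 cm succeed most often."""
    return 1.0 if random.random() < 1.0 - abs(width - 6) * 0.15 else 0.0

for episode in range(500):
    if random.random() < EPSILON:            # explore occasionally
        width = random.choice(list(q_values))
    else:                                    # otherwise exploit best guess
        width = max(q_values, key=q_values.get)
    reward = attempt_grip(width)
    # Move the estimate toward the observed outcome
    q_values[width] += ALPHA * (reward - q_values[width])

best = max(q_values, key=q_values.get)
print(best)  # the learned preference should settle near 6 cm
```

The point of the sketch is the data flywheel the article describes: every human-assisted grip is another (action, outcome) pair, and with enough repetitions the value estimates shift toward the strategies that actually work.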

But Kindred's ultimate goal is much more ambitious. It may sound counterintuitive, but Rose and Gildert think warehouses are the perfect place to start on the path toward human-level artificial intelligence. Because the US shipping marketplace is already rife with single-purpose robots, thanks in part to Amazon, there are plenty of opportunities for humans to train AI. Finding, handling, and sorting products while maneuvering in a fast-moving environment is a data gold mine for building robots that can operate in the real world.

Rose and Gildert believe the next generation of AI won't be in the form of a disembodied voice living in our phones. Rather, they believe the greatest strides will come from programs running inside a physical robot that can gain knowledge about the world and itself from the ground up, like a human infant does from birth.

Kindred is working toward what's known as artificial general intelligence, or software capable of performing any task a human being can do. Artificial general intelligence, or AGI, is sometimes referred to as "strong" or "full" AI because it exists in contrast to AI programs, like DeepMind's AlphaGo system, with very specific applications. Other, more conventional forms of "weak" or "narrow" AI include the underlying software behind Netflix and Amazon recommendations, Snapchat camera effects that rely on facial recognition, and Google's fast and accurate language translations.

These algorithms are developed by applying deep learning techniques to large-scale neural networks until they can, say, differentiate between an image of a dog and a cat. They perform one task, or perhaps a few in some cases, far better than humans can. But they are extremely limited and don't learn or adapt the way humans do. The software that recognizes a sunset can't predict whether you'll like a Netflix movie or translate a sentence into Japanese. Right now, you can't ask AlphaGo to face off in chess; it doesn't know the rules and wouldn't know how to begin learning them.

Kindred thinks our physical body is intrinsic to the secrets of human cognition

This is the fundamental challenge of AGI: how to create an intelligent system, the kind we know only from science fiction, that can truly learn on its own without needing to be fed thousands of examples and trained over the course of weeks or months.

The biggest names in AI research, like DeepMind, are focused on game-playing because it seems to be the most viable path forward. After all, if you can teach software to play Pong, perhaps it can take the lessons learned and apply them to Breakout? This applied knowledge approach, which mimics the way a human player can quickly intuit the rules of a new game, has proven promising.

For instance, AlphaGo Master, DeepMind's latest Go system, which just bested world champion Ke Jie, now effectively teaches itself how to play better. "One of the things we're most excited about is not just that it can play Go better, but we hope that this'll actually lead to technologies that are more generally applicable to other challenging domains," DeepMind co-founder and CEO Demis Hassabis said at the event last week.

Yet for Kindred's founders, the quest to crack the secret of human cognition can't be separated from our physical bodies. "Our founding belief was that in order to make real progress toward the original objectives of AI, you needed to start by grounding your ideas in the physical world," Rose says. "And that means robots, and robots with sensors that can look around, touch, hear the world that surrounds them."

This body-first approach to AI is based on a theory called embodied cognition, which suggests that the interplay between our brain, body, and the physical world is what produces elements of consciousness and the ability to reason. (A fun exercise here is thinking about how many common metaphors have physical underpinnings, like thinking of affection as warmth or something inconceivable as being over your head.) Without understanding how the brain developed to control the body and guide functions like locomotion and visual processing, the theory goes, we may never be able to reproduce it artificially.

The body-first approach to AI is based on a theory called embodied cognition

Other than Kindred, work on AI and embodied cognition mostly happens in the research divisions of large tech companies and academia. For example, Pieter Abbeel, who leads development on the Berkeley Robot for the Elimination of Tedious Tasks (BRETT), aims to create robots that can learn much like young children do.

By giving its robot sensory abilities and motor functions and then using AI training techniques, the BRETT team devised a way for it to acquire knowledge and physical skills much faster than with standard programming, and with the flexibility to keep learning. Much like babies constantly adjusting their behavior when attempting something new, BRETT approaches unfamiliar problems, fails at first, and then adjusts over repeated attempts and under new constraints. Abbeel's team even uses children's toys to test BRETT's aptitude for problem solving.

OpenAI, the nonprofit funded by SpaceX and Tesla CEO Elon Musk, is working on both general-purpose game-playing algorithms and robotics, under the notion that the two avenues are complementary. Helping the team is Abbeel, who is on leave from Berkeley to help OpenAI make progress fusing AI learnings with modern robotics. "The interesting thing about robotics is that it forces us to deal with the actual data we would want an intelligent agent to deal with," says Josh Tobin, a graduate student at Berkeley who works on robotics at OpenAI.

Applying AI to real-world tasks like picking up objects and stacking blocks involves tackling a whole suite of new problems, Tobin says, like managing unfamiliar textures and replicating minute motor movements. Solving them is necessary if we're ever to deploy intelligent robots beyond factory floors.

Wojciech Zaremba, who leads OpenAI's robotics work, says that a holy grail of sorts would be a general-purpose robot powered by AI that can learn a new task -- scrambling eggs, for instance -- by watching someone do it just once. This is why OpenAI is working on teaching robots new skills that are first demonstrated by a human in a simulated VR environment, much like a video game, where it's much easier and less costly to produce and collect data.

"You could imagine that, as a final outcome, if it's doable, you have files online of recordings of various tasks," Zaremba says. "And then if you want the robot to replicate this behavior, you just download the file."

When I first operated the Orb, on an April afternoon in Kindred's San Francisco warehouse space, a group of six or so engineers were scattered about testing the robotic arms with various pink-colored bins of products: vitamin bottles, soft plastic cylinders of Lysol cleaning wipes, rolls of paper towels.

The Orb is designed to sort these objects from a large heap inside its glass container, with the arm affixed to the roof of the container. First, an operator wearing a VR headset moves the arm to a desired object, lowers the gripper, and adjusts the two clamps until a firm grip is established. Then the human can simply let go: Kindred has already automated the process of lifting the object in the air, scanning the barcode, and sorting it into the necessary bin.
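The hand-off between operator and automation reads like a small state machine, and can be sketched as such. This is our reading of the article's description, not Kindred's control code; every state name and transition below is an assumption made for illustration.

```python
from enum import Enum, auto

# Illustrative state machine for the Orb's pick cycle as described in
# the article: a human establishes the grip, then automation handles
# the lift/scan/sort portion. Names and transitions are hypothetical.

class OrbState(Enum):
    AWAIT_OPERATOR = auto()  # human moves the arm and closes the clamps
    GRIP_CONFIRMED = auto()  # operator lets go; automation takes over
    LIFT = auto()
    SCAN_BARCODE = auto()
    SORT_TO_BIN = auto()

AUTOMATED = {OrbState.GRIP_CONFIRMED, OrbState.LIFT,
             OrbState.SCAN_BARCODE, OrbState.SORT_TO_BIN}

TRANSITIONS = {
    OrbState.AWAIT_OPERATOR: OrbState.GRIP_CONFIRMED,
    OrbState.GRIP_CONFIRMED: OrbState.LIFT,
    OrbState.LIFT: OrbState.SCAN_BARCODE,
    OrbState.SCAN_BARCODE: OrbState.SORT_TO_BIN,
    OrbState.SORT_TO_BIN: OrbState.AWAIT_OPERATOR,  # cycle to next item
}

def run_cycle(start=OrbState.AWAIT_OPERATOR):
    """Walk one full pick cycle, recording who is in control at each step."""
    state, trace = start, []
    for _ in range(len(TRANSITIONS)):
        trace.append((state.name, "auto" if state in AUTOMATED else "human"))
        state = TRANSITIONS[state]
    return trace

for name, controller in run_cycle():
    print(f"{name:15s} {controller}")
```

Framed this way, the division of labor is clear: the only state that needs a human is the grip, which is exactly the part the article says remains hard for machines, and it also explains how one operator could serve several Orbs by hopping between their AWAIT_OPERATOR states.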

Using the Orb resembles operating a video game version of a toy claw machine

"In any gigantic warehouse, people have to walk around and pick up things," says George Babu, Kindred's chief product officer. "The most efficient way to do that is to pick up a whole bunch of different things at the same time. Those go to someplace where you have them separated. Our robot does that job in the middle." The idea is that warehouse workers can dump a bunch of products into the Orb, while a remote operator works with the robot to sort them.

Amazon is working on something similar, and the company now holds an annual picking challenge to spur the development of industrial robots capable of handling and sorting physical items. Kindred is quick to recognize Amazon's prowess in this department. "In the fulfillment world, Amazon uses a different set of approaches than all of the other fulfillment provisioners. They have the scale, the scope, and the know-how to implement end-to-end systems that are very effective at what they do," Rose says. But he thinks Amazon is likely to keep this technology to itself. "The advancements that Amazon makes toward doing this job well don't benefit all of their competitors."

Kindred's system, on the other hand, is designed to integrate into existing warehouse tools. Last month, Kindred finished its first deployable devices, and it created "more demand than we anticipated," according to Jim Liefer, Kindred's chief operating officer, though he won't disclose any initial customers.

Using the Orb with a Vive headset, I was surprised by just how much it resembles a video game. Think of a toy claw machine, where the second the clamp touches down on an object, the automated process takes over and the arm springs to life with an uncanny jerkiness. It makes sense, considering Kindred built its depth-sensing system using the game engine Unity.

Kindred imagines future versions of the Orb being affixed to sliding rails or bipedal roaming robots

Max Bennett, Kindred's robotics product manager, says that the process is designed so that human warehouse workers can operate multiple Orbs simultaneously, gripping objects and letting the software take the reins before cycling to the next setup. Kindred imagines future versions of the robotic arm being affixed to sliding overhead rails, or maybe even to bipedal robots that roam the floor. There is also a point at which the Vive is no longer necessary. "Nobody's going to want to use a VR headset all day," Bennett tells me, suggesting that an Xbox controller or even just a computer mouse will do in the future.

As for how the Orb might affect jobs, Babu says there will be a need for human labor for quite some time. He's partly right: Amazon hired 100,000 workers in the last year alone, and plans to hire 100,000 more this year, mostly in warehouse and other fulfillment roles. But systems like the Orb raise the possibility that fewer jobs will be needed as the work becomes more a matter of assisting and operating robots.

"My view is that the humans will all move on to different work in the stream," Babu says.

Still, Forrester Research predicts that automation will result in 25 million jobs lost over the next decade, with only 15 million new jobs created. The end goals of automation have always been to reduce costs and improve efficiency, and that will inevitably mean the disappearance of certain types of labor.

Kindred is unique in the AI field not just for its robotics focus, but also because it's diving headfirst into the industrial world with a commercial product. Many of the big tech companies working on AI do so through huge research organizations, like Facebook AI Research and Google Brain. These teams are filled with academics and engineers who work on abstract problems that then help inform real software features deployed to millions of consumers.

Kindred, as a startup, can't afford this approach. "Day one we said: We're going to find a big market. We're going to build a wildly successful product for that initial market, and build a business by executing along that path, first with one vertical and then maybe others," Rose explains. He adds that his experience with D-Wave, which raised more than $150 million over more than a decade just to release its first product, inspired him to seek a different approach to tackling big-picture problems.

Gildert and Rose don't want to rely solely on venture capital funding to build Kindred

"You have this quandary that doing it right is going to take a long time, on the order of decades. How do you sustain that organization for that length of time without all the negative side effects of raising a lot of rounds of VC?" Rose says. "The answer is that you have to create a real business that is cash-flow positive very early." Kindred has raised $15 million in funding thus far from Eclipse, GV, Data Collective, and a number of other investors. But Rose stresses that the company's focus is to become profitable with the Orb, and that profitability will help it pursue its main objective.

That objective, since the beginning, has been human-level AI with a focus on what Gildert calls in-body cognition, or the type of thought processes that only arise from giving AI a physical shell. "Intelligence absent a body is not what we think it means," she says. "Intelligence with a body brings to it a number of constraints that are not there when you think about intelligence in a virtual environment. We certainly don't believe you can build a chatbot without a human-like body and expect it to pass [for a human]."

"Brains evolved to control bodies," Rose adds. "And all these things that we think about as being the beautiful stuff that comes from cognition, they're all side effects of this."

See the rest here:

The next big leap in AI could come from warehouse robots - The Verge

Posted in Ai | Comments Off on The next big leap in AI could come from warehouse robots – The Verge

AI will be better than human workers at all tasks in 45 years, says Oxford University report – The Independent

Posted: at 10:39 pm

Experts believe artificial intelligence will be better than humans at all tasks within 45 years, according to a new report.

However, some think that could happen much sooner.

Researchers from the University of Oxford and Yale University have revealed the results of a study surveying a larger and more representative sample of AI experts than any study to date, and the findings will concern people working in a wide range of industries.

Their aim was to find out how long it would be before machines became better than humans at all tasks, with the researchers using the definition: "High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers."

According to their findings, AI will outperform humans in many activities in the near future, including translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053).

"Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans," reads the report.

Asian respondents expect HLMI in 30 years, whereas North Americans expect it in 74 years.

Ten per cent of the experts believe HLMI will arrive within nine years.

The results of the study echo comments made by Stephen Hawking and Elon Musk.

"The real risk with AI isn't malice but competence," said Professor Hawking.

"A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

Mr Musk, meanwhile, has suggested that people could merge with machines in the future, in order to remain relevant.

Ray Kurzweil, a futurist and Google's director of engineering, has gone even further and predicted that the so-called singularity (the moment when artificial intelligence exceeds man's intellectual capacity and creates a runaway effect, which many believe will lead to the demise of the human race) is a little over a decade away.

"Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than the status quo," the report adds.

The full text is available here.

Read the rest here:

AI will be better than human workers at all tasks in 45 years, says Oxford University report - The Independent

Posted in Ai | Comments Off on AI will be better than human workers at all tasks in 45 years, says Oxford University report – The Independent

Will China own the future of AI? – The Week Magazine

Posted: at 10:39 pm


In the 1982 film Firefox, Clint Eastwood plays an Air Force pilot and Vietnam vet on a secret mission to steal an advanced Soviet fighter jet. The airplane is super fast, radar invisible, and can be controlled by thought (as long as those thoughts are in Russian). "Yeah, I can fly it," Eastwood says. "I'm the best there is."

Two years later, Tom Clancy published The Hunt for Red October, later made into a film starring Alec Baldwin and Sean Connery. In this thriller, the revolutionary piece of Soviet technology is a super-quiet nuclear submarine, almost undetectable by sonar.

Both pieces are fascinating Cold War artifacts playing off fears that the Soviet Union would manage a military version of Sputnik, leapfrogging U.S. tech and giving Moscow the decisive upper hand against the West. In reality, of course, the opposite was happening.

What if these two films were, as Hollywood puts it, "reimagined" for today's audiences? The tech MacGuffin would likely be Chinese artificial intelligence. Imagine Jason Bourne sneaking into China to download super intelligent software that would make that country's military and economy dominant. Or maybe he would kidnap a key Chinese computer scientist and bring him back stateside for interrogation.

Such a film would have an obvious "ripped from the headlines" feel about it. Specifically, headlines like this one from last weekend's New York Times: "Is China outsmarting America in AI?" Reporters Paul Mozur and John Markoff declare the "balance of power in technology is shifting," with China perhaps "only a step behind the United States" in artificial intelligence.

And as Beijing readies new multibillion-dollar research initiatives, what is America doing? "China is spending more just as the United States cuts back," the Times journalists write. Indeed, the new Trump administration budget proposal would sharply reduce funding for U.S. government agencies responsible for federal AI research. For instance, the piece notes, budget cuts could potentially reduce the National Science Foundation's spending on "intelligent systems" by 10 percent, to about $175 million.

It is unlikely that Congress would ever pass a budget with such draconian cuts, especially since wonks and policymakers on the left and right see basic science research as a proper and necessary role for government. Then again, Washington hasn't really been acting like science is an important national priority. As a share of the federal budget, basic science research has declined by two-thirds since the 1960s.

President Trump's proposed cuts are particularly striking since the just-departed Obama administration saw AI as a critical technology with "incredible potential to help America stay on the cutting edge of innovation." Striking, but not surprising, given that candidate Trump didn't even have a technology policy agenda. And what passed for an industrial strategy focused on reviving American steel manufacturing and coal mining. Perhaps America First doesn't really apply to science.

Of course, some conservative budgeteers advising the Trump White House argue that inefficient and speculative public investment "crowds out" private investment that is more likely to pay off in practical advances. But no one has apparently informed Eric Schmidt, executive chairman of Alphabet, the parent company of Google. The $700 billion tech giant, noted for its "moonshot" projects, is often held up as an example of how companies are where the really important research is done.

But in a recent Washington Post op-ed, Schmidt wrote that the "miracle machine" of American postwar innovation comes from the twin "interlocking engines" of the public and private sector. Without more public research investment, "we may wake up to find the next generation of technologies, industries, medicines, and armaments being pioneered elsewhere."

China is obviously far more capable of both invention and commercial application than the old Soviet Union. Its companies are already leaders in mobile tech. It's not hard to see why it would be better for U.S. workers to have America be the nation where the next generation of innovation is turned into amazing products and services. Plus, it would be odd for the world's leading military power not to also be the nation pushing the tech frontier. Certainly better us than an authoritarian nation that plans on using advanced AI to tighten control over its citizens, as well as to enhance its military capabilities.

America must spend more, maybe a lot more, on research. It should also do a better job of attracting and keeping the world's best and brightest. Let's make sure this story has a happy ending.

Read the rest here:

Will China own the future of AI? - The Week Magazine

Posted in Ai | Comments Off on Will China own the future of AI? – The Week Magazine

Why AI is the new electricity – VentureBeat

Posted: at 10:39 pm

Two and a half years ago, President Obama called on the FCC to classify broadband internet as a utility. It joined a small club that you know well: electricity, gas, and running water. And now this club may be welcoming yet another new member. Is artificial intelligence the newest utility? Are we witnessing the dawn of AI being as ubiquitous as running water?

Six years ago, Netscape cofounder Marc Andreessen wrote that "software is eating the world." Well, now AI is eating software.

You know the times are changing when AI goes from the subject of science fiction to the subject of an organization like the Financial Stability Board. That's a global organization of central bankers who are responsible for the security of the world's banking system, a critical task when we are facing an epidemic of data breaches.

Consider the attack of February 2016, when hackers managed to withdraw over $100 million from a Bangladesh bank account at the Federal Reserve Bank of New York. The fraudsters used malware to compromise a computer network, observed how transfers were done, and gained access to the bank's credentials. This is a new kind of stealing in the digital age, where an eight-figure heist can happen in an instant, all with the unseen movement of 1s and 0s through cables and between satellites.

The infrastructure behind our global financial system is vulnerable. So if you ask me, attainable AI is here just in time. That's what I told the FSB delegation when I stood up and presented a vision of AI like running water.

Its hard to imagine an industry that wont be transformed. Search. Health care. Law. Self-driving cars, of course. Even journalism.

The funny thing is, AI isn't even new. It was pioneered by a man named Arthur Samuel, who taught a computer to play checkers in 1962. But like Da Vinci's flying machine, the idea of AI was born before the technology was in place to support it. AI was ahead of its time, and governments across the globe, from the U.S. to Japan, pulled its research funding. That led to an AI winter that lasted until recently.

But now that technology has caught up to our AI aspirations, we are seeing a rebirth of AI thanks to something I'm calling the AI convergence. It's a perfect storm where multiple distinct threads of technology are coming together in a moment that's about to change everything. What are the threads of the AI convergence?

About a decade ago, Google pioneered a method for computers to work in parallel, MapReduce, that introduced us to a new order of magnitude of processing power.
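The MapReduce idea itself is simple enough to sketch: a map step turns each input into key-value pairs, a shuffle groups pairs by key, and a reduce step collapses each group. Here is a toy single-process word count in that style (the real system's contribution was distributing these phases across thousands of machines, which this sketch does not attempt):

```python
from collections import defaultdict
from itertools import chain

# Map: turn one document into (word, 1) pairs.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle: group all pairs by their key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce: collapse each group of values into a single count.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["ai eats software", "software eats the world", "ai eats ai"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
```

Because map and reduce are independent per document and per key, each phase parallelizes naturally, which is where the order-of-magnitude gain comes from.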

Before, we had just one kind of processing unit: the CPU. Now we have a second: the GPU. Forrester analyst Mike Gualtieri called this a hardware renaissance, and it has opened up a new dimension of computing for machine learning.

You are aware of Moore's law, which describes the exponential growth of our computing capacity. Experts keep predicting that Moore's law will have to slow down at some point; its growth rate seems impossible. In fact, Moore's law continues to this day, because we keep creating new ways of looking at data.

Big is an understatement. In the thousands of years between the dawn of humanity and the year 2003, humans created five exabytes of data. Now we create that much data every day. And our data footprint continues to double every year.
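Those doubling claims compound quickly. A back-of-the-envelope projection using the article's own figures (5 exabytes per day today, with the yearly footprint doubling thereafter; the numbers are the author's claims, not measurements):

```python
# Annual output today, if we produce 5 exabytes every day.
per_day_eb = 5.0
annual_eb = per_day_eb * 365          # 1,825 EB, i.e. ~1.8 zettabytes/year

# With the footprint doubling yearly, project the next five years.
projection = [round(annual_eb * 2**n) for n in range(5)]
# Year 0: 1,825 EB ... Year 4: 29,200 EB
```

At that rate, a single year's output soon dwarfs everything created in all of prior history, which is the point the paragraph is making.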

Maybe you remember voice recognition software from 15 years ago. It took three months of training before it could recognize your voice. Now this software can recognize any voice, instantly. The reason? We invented a better algorithm. Every time we do that, we gain the power to model the universe with even more accuracy.

Add this all up and you have a cognitive revolution. The Industrial Revolution was the last time something like this happened, and it required huge physical resources: railroads, steel, and factories. But this time, with a small amount of money and access to attainable AI, small companies can upend capital-intensive industries in a way that was unthinkable before.

Take NuTonomy. They're a small startup of 100 employees out of MIT, and they beat Uber and Google to the coveted self-driving taxi market. NuTonomy taxis are driving around Singapore, and soon they'll be in business in Boston. Why? Access to AI.

For another example, I used to be an aerospace engineer, so I love to look at what SpaceX is up to. Could you imagine a private citizen joining the space race in the '60s? In 2014, SpaceX released the total combined development costs for both its Falcon 9 launch vehicle and the Dragon capsule: $846 million. As a comparison, NASA alone has spent some $38 billion on its comparable Orion program. That's nearly 45 times the cost.

More than that, about half of SpaceX's funding came from (you guessed it) NASA! So not only did Elon Musk figure out how to do this more cheaply, but NASA recognized that fact. If you can't beat them, fund them. This is the future: code over capital.

The Industrial Revolution sounded like the clanking of steel. This new cognitive revolution, powered by AI like running water, sounds more like a whoosh. Like water flowing through the pipes.

Nuno Sebastiao is the CEO of Feedzai, a data science company that detects fraud in omnichannel commerce.

Read more here:

Why AI is the new electricity - VentureBeat

Posted in Ai | Comments Off on Why AI is the new electricity – VentureBeat

Is AI the end of jobs or a new beginning? – Washington Post

Posted: at 10:39 pm

Artificial intelligence (AI) is advancing so rapidly that even its developers are being caught off guard. Google co-founder Sergey Brin said in Davos, Switzerland, in January that it "touches every single one of our main projects, ranging from search to photos to ads ... everything we do ... It definitely surprised me, even though I was sitting right there."

The long-promised AI, the stuff we've seen in science fiction, is coming, and we need to be prepared. Today, AI powers voice assistants such as Google Home, Amazon Alexa and Apple Siri, allowing them to have increasingly natural conversations with us and to manage our lights, order food and schedule meetings. Businesses are infusing AI into their products to analyze vast amounts of data and improve decision-making. In a decade or two, we will have robotic assistants that remind us of Rosie from The Jetsons and R2-D2 of Star Wars.

This has profound implications for how we live and work, for better and worse. AI is going to become our guide and companion, and take millions of jobs away from people. We can deny this is happening, be angry or simply ignore it. But if we do, we will be the losers. As I discussed in my new book, The Driver in the Driverless Car, technology is now advancing on an exponential curve and making science fiction a reality. We can't stop it. All we can do is understand it and use it to better ourselves and humanity.

Rosie and R2-D2 may be on their way, but AI is still very limited in its capability, and will be for a long time. The voice assistants are examples of what technologists call narrow AI: systems that are useful, can interact with humans and bear some of the hallmarks of intelligence, but would never be mistaken for a human. They can, however, do a better job than humans on a very specific range of tasks. I couldn't, for example, recall the winning and losing pitcher in every major-league baseball game from the previous night.

Narrow-AI systems are much better than humans at accessing information stored in complex databases, but their capabilities exclude creative thought. If you asked Siri to find the perfect gift for your mother for Valentine's Day, she might make a snarky comment but couldn't venture an educated guess. If you asked her to write your term paper on the Napoleonic Wars, she couldn't help. That is where the human element comes in, and where the opportunities are for us to benefit from AI and stay employed.

In his book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, chess grandmaster Garry Kasparov tells of his shock and anger at being defeated by IBM's Deep Blue supercomputer in 1997. He acknowledges that he is a sore loser but was clearly traumatized by having a machine outsmart him. He was aware of the evolution of the technology but never believed it would beat him at his own game. Twenty years later, having come to grips with his defeat, he says fail-safes are required, but so is courage.

Kasparov wrote: "When I sat across from Deep Blue twenty years ago I sensed something new, something unsettling. Perhaps you will experience a similar feeling the first time you ride in a driverless car, or the first time your new computer boss issues an order at work. We must face these fears in order to get the most out of our technology and to get the most out of ourselves. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty, and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer or even playing chess."

In other words, we better get used to it and ride the wave.

Human superiority over animals is based on our ability to create and use tools. The mental capacity to make things that improved our chances of survival led to a natural selection of better toolmakers and tool users. Nearly everything a human does involves technology. For adding numbers, we used abacuses and mechanical calculators and now spreadsheets. To improve our memory, we wrote on stones, parchment and paper, and now have disk drives and cloud storage.

AI is the next step in improving our cognitive functions and decision-making.

Think about it: When was the last time you tried memorizing your calendar or Rolodex, or used a printed map? Just as we instinctively do everything on our smartphones, we will rely on AI. We may have forfeited skills such as the ability to add up the price of our groceries, but we are smarter and more productive. With the help of Google and Wikipedia, we can be experts on any topic, and these tools don't make us any dumber than encyclopedias, phone books and librarians did.

A valid concern is that dependence on AI may cause us to forfeit human creativity. As Kasparov observes, the chess programs on our smartphones are many times more powerful than the supercomputer that defeated him, yet this didn't cause human chess players to become less capable; the opposite happened. There are now stronger chess players all over the world, and the game is played in a better way.

As Kasparov explains: "It used to be that young players might acquire the style of their early coaches. If you worked with a coach who preferred sharp openings and speculative attacking play himself, it would influence his pupils to play similarly. What happens when the early influential coach is a computer? The machine doesn't care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. It is entirely free of prejudice and doctrine. The heavy use of computers for practice and analysis has contributed to the development of a generation of players who are almost as free of dogma as the machines with which they train."

Perhaps this is the greatest benefit that AI will bring: humanity can be freed of dogma and historical bias, and make more intelligent decisions. And instead of doing repetitive data analysis and number crunching, human workers can focus on enhancing their knowledge and being more creative.

See more here:

Is AI the end of jobs or a new beginning? - Washington Post

Posted in Ai | Comments Off on Is AI the end of jobs or a new beginning? – Washington Post

AI experts predict the future: Truck drivers out of jobs by 2027, surgeons by 2053 – ZDNet

Posted: at 10:39 pm

Timelines show 50 percent probability intervals for achieving various AI milestones.

Google has hung up its AlphaGo gloves after trouncing the world's best human Go players, but when will AI beat humans at other tasks, such as writing a best-selling novel or doing surgery?

To answer that question, a team of researchers led by Katja Grace of Oxford University's Future of Humanity Institute surveyed several hundred machine-learning experts to get their educated guesses. The researchers used the responses to calculate the median number of years it would take for AI to reach key milestones in human capabilities.
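The aggregation step is worth making concrete: each surveyed expert gives a year estimate for a milestone, and the study reports the median, which resists being dragged around by a few extreme forecasts. The numbers below are illustrative, not the survey's raw data.

```python
from statistics import median

# Hypothetical forecasts for one milestone (e.g. "AI drives a truck"),
# one year per expert. A single far-future outlier barely moves the
# median, unlike the mean.
truck_driving_forecasts = [2024, 2025, 2027, 2027, 2030, 2045, 2100]
m = median(truck_driving_forecasts)   # half the experts expect sooner, half later
```

This robustness to outliers is why forecast surveys like this one typically report medians rather than means.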

Teachers may need to be on the alert for machine-written essays by 2026 and truck drivers could be made redundant by 2027, according to the results.

Meanwhile, AI will surpass human capabilities in retail by 2031. The experts also predict that AI will be capable of writing a best-seller by 2049, and doing a surgeon's work by 2053.

Overall, the respondents believe there is a 50 percent chance that AI will beat humans at all tasks within 45 years and will automate all human jobs within 120 years.

The researchers invited the views of all 1,634 authors of papers published in 2015 at two of the leading machine-learning conferences, Neural Information Processing Systems and the International Conference on Machine Learning. A total of 352 researchers responded.

Interestingly, the experts predict that AI won't beat the best human Go players until about 2028. As we know, Google beat Korean Go champion Lee Sedol in 2016, and just beat Chinese grandmaster Ke Jie. Google is now putting the AlphaGo developers from its DeepMind lab to work on solving bigger challenges to society.

But as Grace et al. point out in the paper, the machine-learning experts were asked when AI could beat a human at Go on the condition that both opponents had played or been trained on the same number of games.

"For reference, DeepMind's AlphaGo has probably played a hundred million games of self-play, while Lee Sedol has probably played 50,000," they note.

In fact, if the researchers' predictions are right, we're likely to see a two-legged robot beat humans in a 5km road race before AI beats a human Go player on equal terms.

The survey also asked the researchers about the likelihood of an AI "intelligence explosion", or the point at which AI becomes better than humans at AI design. As physicist Stephen Hawking explained, if that situation occurs, it could result in "machines whose intelligence exceeds ours by more than ours exceeds that of snails".

Specifically, researchers were asked about the chances of an intelligence explosion happening within two years of machines having learned to do every task better and more cheaply than humans. That is, within about 45 years.

Respondents overall see it as "possible but improbable", with a median probability of 10 percent. They also see it as likely to have positive outcomes, but give a five percent chance of an "extremely bad" outcome, such as human extinction.

See the original post here:

AI experts predict the future: Truck drivers out of jobs by 2027, surgeons by 2053 - ZDNet

Posted in Ai | Comments Off on AI experts predict the future: Truck drivers out of jobs by 2027, surgeons by 2053 – ZDNet

US Falls Behind China & Canada In Advancing Healthcare With AI – Forbes

Posted: at 10:39 pm


The United States leads the world in artificial intelligence, but lags behind other countries in applying technical innovations to the field of healthcare. Globally, machine learning is used to increase efficiency, lower error rates, and decrease ...

Read more:

US Falls Behind China & Canada In Advancing Healthcare With AI - Forbes

Posted in Ai | Comments Off on US Falls Behind China & Canada In Advancing Healthcare With AI – Forbes