
Category Archives: Singularity

Families Finally Hear From Completely Paralyzed Patients Via New Mind-Reading Device – Singularity Hub

Posted: February 13, 2017 at 9:37 am

Wendy was barely 20 years old when she received a devastating diagnosis: juvenile amyotrophic lateral sclerosis (ALS), an aggressive neurodegenerative disorder that destroys motor neurons in the brain and the spinal cord.*

Within half a year, Wendy was completely paralyzed. At 21 years old, she had to be artificially ventilated and fed through a tube placed into her stomach. Even more horrifyingly, as paralysis gradually swept through her body, Wendy realized that she was rapidly being robbed of ways to reach out to the world.

Initially, Wendy was able to communicate with her loved ones by moving her eyes. But as the disease progressed, even voluntary eye twitches were taken from her. In 2015, a mere three years after her diagnosis, Wendy completely lost the ability to communicate. She was utterly, irreversibly trapped inside her own mind.

Complete locked-in syndrome is the stuff of nightmares. Patients in this state remain fully conscious and cognitively sharp, but are unable to move or signal to the outside world that they're mentally present. The consequences can be dire: when doctors mistake locked-in patients for comatose and decide to pull the plug, there's nothing the patients can do to intervene.

Now, thanks to a new system developed by an international team of European researchers, Wendy and others like her may finally have a rudimentary link to the outside world. The system, a portable brain-machine interface, translates brain activity into simple yes or no answers to questions with around 70 percent accuracy.

That may not seem like enough, but the system represents the first sliver of hope that we may one day be able to reopen reliable communication channels with these patients.

Four people were tested in the study, with some locked-in for as long as seven years. In just 10 days, the patients were able to reliably use the system to finally tell their loved ones not to worry: they're generally happy.

The results, though imperfect, came as an enormous relief to their families, says study leader Dr. Niels Birbaumer at the University of Tübingen. The study was published this week in the journal PLOS Biology.

Robbed of words and other routes of contact, locked-in patients have always turned to technology for communication.

Perhaps the most famous example is physicist Stephen Hawking, who became partially locked-in due to ALS. Hawking's workaround is a speech synthesizer that he operates by twitching his cheek muscles. Jean-Dominique Bauby, editor-in-chief of the French fashion magazine Elle, who became locked-in after a massive stroke, wrote an entire memoir by blinking his left eye to select letters from the alphabet.

Recently, the rapid development of brain-machine interfaces has given paralyzed patients increasing access to the world, not just the physical one, but also the digital universe.

These devices read brain waves directly through electrodes implanted into the patient's brain, decode the pattern of activity, and correlate it to a command, say, moving a computer cursor left or right on a screen. The technology is so reliable that paralyzed patients can even use an off-the-shelf tablet to Google things, using only the power of their minds.

But all of the above workarounds require one critical factor: the patient has to have control of at least one muscle; often, this is a cheek or an eyelid. People like Wendy who are completely locked-in are unable to control such brain-machine interfaces. This is especially perplexing, since these systems don't require voluntary muscle movements: they read directly from the mind.

The unexpected failure of brain-machine interfaces for completely locked-in patients has been a major stumbling block for the field. Though the explanation remains speculative, Birbaumer believes it may be that, over time, the brain becomes less efficient at transforming thoughts into actions.

"Anything you want, everything you wish does not occur. So what the brain learns is that intention has no sense anymore," he says.

In the new study, Birbaumer overhauled common brain-machine interface designs to get the brain back on board.

First off was how the system reads brain waves. Generally, this is done through EEG, which measures certain electrical activity patterns of the brain. Unfortunately, the usual solution was a no-go.

"We worked for more than 10 years with neuroelectric activity [EEG] without getting into contact with these completely paralyzed people," says Birbaumer.

"It may be because the electrodes have to be implanted to produce a more accurate readout," Birbaumer explains to Singularity Hub. But surgery comes with additional risks and expenses for the patients. In a somewhat desperate bid, the team turned their focus to a technique called functional near-infrared spectroscopy (fNIRS).

Like fMRI, fNIRS gauges brain activity by measuring changes in blood flow through a specific brain region; generally speaking, more blood flow equals more activation. Unlike fMRI, which requires the patient to lie still in a gigantic magnet, fNIRS uses infrared light to measure blood flow. The light source is embedded into a swimming-cap-like device that's worn tightly around the patient's head.
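To make the measurement principle concrete, the standard way to turn fNIRS light readings into physiology is the modified Beer-Lambert law, which relates the change in optical density at two wavelengths to changes in oxy- and deoxy-hemoglobin concentration. Below is a minimal sketch of that conversion; the extinction coefficients, path length, and pathlength factor are illustrative placeholder values, not constants from the study's instrument.

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD(wavelength) =
#   (eps_HbO * d[HbO] + eps_HbR * d[HbR]) * L * DPF
# Measuring delta_OD at two wavelengths gives a 2x2 linear system
# that can be solved for the two concentration changes.

E = np.array([[1.5, 3.8],    # ~760 nm: [eps_HbO, eps_HbR] in 1/(mM*cm) -- placeholder
              [2.5, 1.8]])   # ~850 nm -- placeholder
L = 3.0    # source-detector separation in cm (assumed)
DPF = 6.0  # differential pathlength factor (assumed)

def hemoglobin_change(delta_od):
    """Solve for (d[HbO], d[HbR]) from optical-density changes at both wavelengths."""
    return np.linalg.solve(E * L * DPF, np.asarray(delta_od, dtype=float))

d_hbo, d_hbr = hemoglobin_change([0.012, 0.018])
print(f"d[HbO] = {d_hbo:+.4f} mM, d[HbR] = {d_hbr:+.4f} mM")
```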

To train the system, the team started with facts about the world and personal statements that the patients could easily verify. Over the course of 10 days, the patients were repeatedly asked to respond yes or no to statements like "Paris is the capital of Germany" or "Your husband's name is Joachim." Throughout the entire training period, the researchers carefully monitored the patients' alertness and concentration using EEG, to ensure that they were actually participating in the task at hand.

The answers were then used to train an algorithm that matched the responses to their respective brain activation patterns. Eventually, the algorithm was able to tell yes or no based on these patterns alone, at about 70 percent accuracy for a single trial.
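Stripped of the neuroscience, this step is ordinary supervised binary classification: each trial's activation pattern is a feature vector, and the known true answer is its label. The sketch below shows that train-and-validate shape on synthetic data; it is not the study's actual pipeline, and the feature construction and classifier choice are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 200 training trials, each summarized as mean HbO
# change over 8 fNIRS channels, labeled with the known correct answer.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)   # 0 = "no", 1 = "yes"
X[y == 1] += 0.35                  # "yes" trials shift slightly -- assumed effect size

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=10).mean()
print(f"estimated single-trial accuracy: {acc:.0%}")  # lands near 70% here
```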

"After 10 years [of trying], I felt relieved," says Birbaumer. "If the study can be replicated in more patients, we may finally have a way to restore useful communication with these patients," he added in a press release.

"The authors established communication with complete locked-in patients, which is rare and has not been demonstrated systematically before," says Dr. Wolfgang Einhäuser-Treyer to Singularity Hub. Einhäuser-Treyer is a professor at Bielefeld University in Germany who had previously worked on measuring pupil response as a means of communication with locked-in patients and was not involved in this current study.

With more training, the algorithm is expected to improve even further.

For now, researchers can average out mistakes by asking a patient the same question multiple times. And even at the current 70 percent accuracy rate, the system has already allowed locked-in patients to speak their minds; and somewhat endearingly, just like in real life, the answer may be rather unexpected.
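The statistics behind that trick are simple: if each ask is independently correct with probability 0.7, a majority vote over n repetitions is correct much more often. A quick binomial check (my arithmetic, not the paper's):

```python
from math import comb

def majority_accuracy(p_single, n):
    """Probability a majority of n independent trials is correct,
    given per-trial accuracy p_single (odd n avoids ties)."""
    return sum(comb(n, k) * p_single**k * (1 - p_single)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 9, 15):
    print(n, f"{majority_accuracy(0.70, n):.3f}")
# 1 -> 0.700, 5 -> 0.837, 9 -> 0.901, 15 -> 0.950
```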

One of the patients, a 61-year-old man, was asked whether his daughter should marry her boyfriend. The father said no a striking nine out of ten times, but the daughter went ahead anyway, much to her father's consternation, which he was able to express with the help of his new brain-machine interface.

Perhaps the most heart-warming result from the study is that the patients were generally happy and content with their lives.

"We were originally surprised," says Birbaumer. But on further thought, it made sense. These four patients had accepted ventilation to support their lives despite their condition.

"In a sense, they had already chosen to live," says Birbaumer. "If we could make this technique widely clinically available, it could have a huge impact on the day-to-day lives of people with completely locked-in syndrome."

For their next steps, the team hopes to extend the system beyond simple binary yes-or-no questions. Instead, they want to give patients access to the entire alphabet, thus allowing them to spell out words using their brain waves, something that's already been done in partially locked-in patients but has never been possible for those completely locked-in.

"To me, this is a very impressive and important study," says Einhäuser-Treyer. "The downsides are mostly economical."

"The equipment is rather expensive and not easy to use. So the challenge for the field will be to develop this technology into an affordable product that caretakers [sic], families or physicians can simply use without trained staff or extensive training," he says. "In the interest of the patients and their families, we can hope that someone takes this challenge."

*The patient is identified as patient W in the study. Wendy is an alias.

Banner Image Credit: Shutterstock


The fear of a technological singularity – ETtech.com

Posted: at 9:37 am

By Debkumar Mitra, Gray Matters

In 2016, a Tesla driving on Autopilot crashed, killing the man behind the wheel. It was not the first vehicle to be involved in a fatal crash, but it was the first of its kind, and the tragedy opened a can of ethical dilemmas.

With autonomous systems such as driverless vehicles, there are two main grey areas: responsibility and ethics. Widely discussed at various forums is a dilemma where a driverless car must choose between killing pedestrians and killing passengers.

Here, both responsibility and ethics are at play. The cold logic of numbers that defines the mind of such systems can sway it either way, and the fear is that the passengers sitting inside the car have no control.

Any new technology brings a new set of challenges. But it appears that creating artificial intelligence-driven technology products is almost like unleashing Frankenstein's monster.

Artificial intelligence (AI) is currently at the cutting edge of science and technology. Advances in technology, including techniques like deep learning and artificial neural networks, are behind many new developments, such as the machine that beat the world champion at Go.

However, though there is great positive potential for AI, many are afraid of what AI could do, and rightfully so. There is still the fear of a technological singularity, a circumstance in which AI machines would surpass the intelligence of humans and take over the world.

Researchers in genetic engineering also face a similar question. This dark side of technology, however, should not be used to decree closure of all AI or genetics research. We need to create a balance between human needs and technological aspirations.

Long before the current commotion over ethical AI technology, celebrated science-fiction author Isaac Asimov came up with his laws of robotics.

Exactly 75 years ago, in the 1942 short story "Runaround," Asimov unveiled an early version of his laws. In their current form, the laws read:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Given the pace at which AI systems are developing, there is an urgent need to put in some checks and balances so that things do not go out of hand.

There are many organisations now looking at the legal, technical, ethical and moral aspects of a society driven by AI technology. The Institute of Electrical and Electronics Engineers (IEEE) already has Ethically Aligned Design, an AI framework addressing these issues, in place. AI researchers are drawing up a laundry list similar to Asimov's laws to help people engage in a more fearless way with this beast of a technology.

In January 2017, the Future of Life Institute (FLI), a charity and outreach organisation, hosted its second Beneficial AI conference. There, AI experts developed the Asilomar AI Principles, which aim to ensure that AI remains beneficial, and not harmful, to the future of humankind.

The key questions that came out of the conference:

- How can we make future AI systems robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people's resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?

Ever since they unshackled the power of the atom, scientists and technologists have been at the forefront of the movement emphasising science for the betterment of man. This duty was forced upon them when the first atom bomb was manufactured in the US. Little did they realise that a search for the structure of the atom could give rise to such a nasty subplot. With AI, we are in the same situation, or maybe worse.

No wonder that at the IEEE meeting that gave birth to the ethical AI framework, the dominant thought was that humans and all living beings must remain at the centre of all AI discussions. People must be informed at every level, right from the design stage through the development of AI-driven products for everyday use.

While it is a laudable effort to develop ethically aligned technologies, it raises another question, one that has been asked at various AI conferences: are humans ethical?

(The author is the CEO of Gray Matters. Views expressed above are his own)


Ready to Change the World? Apply Now for Singularity University’s 2017 Global Solutions Program – Singularity Hub

Posted: February 11, 2017 at 8:41 am

I'm putting out a call for brilliant entrepreneurs who want to enroll in Singularity University's Global Solutions Program (GSP).

The GSP is where you'll learn about exponentially growing technology, dig into humanity's Global Grand Challenges (GGCs) and then start a new company, product or service with the goal of positively impacting 1 billion people within 10 years.

We call this a ten-to-the-ninth (10⁹) company.

This post is about who should apply, how to apply and the over $1.5 million in scholarships being provided by Google for entrepreneurs.

SU's GSP program runs from June 17, 2017 until August 19, 2017.

Applications are due: February 21, 2017.

Eight years ago, Ray Kurzweil and I cofounded Singularity University to search the world for the most brilliant, world-class problem-solvers, to bring them together, and to give them the resources to create companies that impact billions.

The GSP is an intensive 24/7 experience at the SU campus at the NASA Ames Research Center in Mountain View, CA, in the heart of Silicon Valley.

During the nine-week program, 90 entrepreneurs, engineers, scientists, lawyers, doctors and innovators from around the world learn from our expert faculty about infinite computing, AI, robotics, 3D printing, networks/sensors, synthetic biology, entrepreneurship, and more, and focus on building and developing companies to solve the GGCs.

GSP participants form teams to develop unique solutions to GGCs, with the intent to form a company that, as I mentioned above, will positively impact the lives of a billion people in 10 years or less.

Over the course of the summer, participants listen to and interact with top Silicon Valley executive guest speakers, tour facilities like Google X, and spend hours getting their hands dirty in our highly advanced maker workshop.

At the end of the summer, the best of these startups will be asked to join SU Labs, where they will receive additional funding and support to take the company to the next level.

I am pleased to announce that thanks to a wonderful partnership with Google, all successful applicants will be fully subsidized by Google to participate in the program.

In other words, if accepted into the program, the GSP is free.

The Global Solutions Program (GSP) is SU's flagship program for innovators from a wide diversity of backgrounds, geographies, perspectives, and expertise. At GSP, you'll get the mindset, tools, and network to help you create moonshot innovations that will positively transform the future of humanity. If you're looking to create solutions to help billions of people, we can help you do just that.

Key program dates:

Applications close: February 21, 2017
Program runs: June 17 to August 19, 2017

This program will be unlike any we've ever done, and unlike any you've ever seen.

If you feel like you meet the criteria, apply now.

Applications close February 21, 2017.

If you know of a friend or colleague who would be a good fit for this program, please share this post with them and ask that they fill out an application.


How Robots Helped Create 100,000 Jobs at Amazon – Singularity Hub

Posted: at 8:41 am

Accelerating technology has been creating a lot of worry over job loss to automation, especially as machines become capable of doing things they never could in the past. A recent report released by the McKinsey Global Institute estimated that 49 percent of job activities could currently be fully automated; that equates to 1.1 billion workers globally.

What gets less buzz is the other side of the coin: automation helping to create jobs. Believe it or not, it does happen, and we can look at one of the world's largest retailers to see that.

Thanks in part to more robots in its fulfillment centers, Amazon has been able to drive down shipping costs and pass those savings on to customers. Cheaper shipping made more people use Amazon, and the company hired more workers to meet this increased demand.

So what do the robots do, and what do the people do?

Tasks involving fine motor skills, judgment, or unpredictability are handled by people. They stock warehouse shelves with items that come off delivery trucks. A robot could do this, except that to maximize shelf space, employees are instructed to stack items according to how they fit on the shelf rather than grouping them by type.

Robots can only operate in a controlled environment, performing regular and predictable tasks. They've largely taken over heavy lifting, including moving pallets between shelves (good news for warehouse workers' backs) as well as shuttling goods from one end of a warehouse to another.

Under current technology, building robots able to stock shelves based on available space would be more costly and less practical than hiring people to do it.

Similarly, for outgoing orders, robots do the lifting and transportation, but not the selecting or packing. A robot brings an entire shelf of goods to an employee's workstation, where the employee selects the correct item and puts it on a conveyor belt for another employee to package. By this time, the shelf-carrying robot is already returning the first shelf and retrieving another.

Since loading trucks also requires spatial judgment and can be unpredictable (space must be maximized here even more than on shelves), people take care of this too.

Ever since acquiring Boston-based robotics company Kiva Systems in March 2012, at a price tag of $775 million, Amazon has been ramping up its use of robots and is continuing to pour funds into automation research, both for robots and delivery drones.

In 2016 the company grew its robot workforce by 50 percent, from 30,000 to 45,000. Far from laying off 15,000 people, though, Amazon increased human employment by around 50 percent in the same period of time.

Even better, the company's Q4 2016 earnings report included the announcement that it plans to create more than 100,000 new full-time, full-benefit jobs in the US over the next 18 months. New jobs will be based across the country and will span a range of experience, education, and skill levels.

So how tight is the link between robots and increased productivity? Would there be even more jobs if people were doing the robots' work?

Well, picture an employee walking (or even running) around a massive warehouse, locating the right shelf, climbing a ladder to reach the item he's looking for, grabbing it, climbing back down the ladder (carefully, of course), and walking back to his workstation to package it for shipping. Now multiply the time that whole process took by the hundreds of thousands of packages shipped from Amazon warehouses each day.

Lots more time. Lots less speed. Fewer packages shipped. Higher costs. Lower earnings. No growth.
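A back-of-envelope comparison makes the point concrete; all three numbers below are assumptions for illustration, not Amazon figures.

```python
# Assumed figures: minutes per pick when a human fetches from the shelves
# versus when a robot delivers the shelf to the picker's workstation.
HUMAN_FETCH_MIN = 8.0
ROBOT_FETCH_MIN = 1.5
PACKAGES_PER_DAY = 300_000   # "hundreds of thousands" per the article

for label, minutes in [("human-fetch", HUMAN_FETCH_MIN),
                       ("robot-fetch", ROBOT_FETCH_MIN)]:
    worker_hours = PACKAGES_PER_DAY * minutes / 60
    print(f"{label}: {worker_hours:,.0f} worker-hours per day")
# The gap between the two is the productivity headroom that lets
# throughput (and hiring) grow.
```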

Though it may not last forever, right now Amazon's robot-to-human balance is clearly in employees' favor. Automation can take jobs away, but sometimes it can create them too.

Image Credit: Tabletmonkeys/YouTube


Rowe FTC robotics team RSF Singularity takes top honors at Championship – Rancho Santa Fe Review

Posted: February 10, 2017 at 3:33 am

On Saturday, Feb. 4, at The Grauer School in Encinitas, the three Rowe FTC robotics teams -- Singularity, Logitechies and Intergalactic Dragons -- ended up as 1st, 2nd and 3rd place captains in the League Championship's exciting alliance rounds, which capped a hard-fought event. David Warner, who heads the school's FTC robotics program, said, "I'm so proud of our students, parent mentors and coaches who worked countless hours to achieve success. Being the youngest teams at the championship, this is truly remarkable and a testament to their hard work!"

The Logitechies and Intergalactic Dragons alliance teams faced off in an exciting third game to determine who would move on to face the Singularity alliance in the championship round. The Intergalactic Dragons won, but in the final match Singularity's 90-point autonomous program was the key to victory, as their alliance put up well over 200 points in the two final games.

In addition to competing in the alliance matches, the Logitechies team was also a finalist in the judged Connect and PTC awards.

Singularity earned top honors of the day as they advance, along with the Intergalactic Dragons, to the San Diego Regionals at Francis Parker High School on Feb. 25.


Physicists Unveil Blueprint for a Quantum Computer the Size of a … – Singularity Hub

Posted: at 3:33 am

Quantum computers promise to crack some of the world's most intractable problems by supercharging processing power. But the technical challenges involved in building these machines mean they've still achieved just a fraction of what they are theoretically capable of.

Now physicists from the UK have created a blueprint for a soccer-field-sized machine they say could reach the blistering speeds that would allow it to solve problems beyond the reach of today's most powerful supercomputers.

The system is based on a modular design interlinking multiple independent quantum computing units, which could be scaled up to almost any size. Modular approaches have been suggested before, but innovations such as a far simpler control system and inter-module connection speeds 100,000 times faster than the state of the art make this the first practical proposal for a large-scale quantum computer.

"For many years, people said that it was completely impossible to construct an actual quantum computer. With our work we have not only shown that it can be done, but now we are delivering a nuts and bolts construction plan to build an actual large-scale machine," professor Winfried Hensinger, head of the Ion Quantum Technology Group at the University of Sussex, who led the research, said in a press release.

The technology at the heart of the individual modules is already well-established and relies on trapping ions (charged atoms) in magnetic fields to act as qubits, the basic units of information in quantum computers.

While bits in conventional computers can have a value of either 1 or 0, qubits take advantage of the quantum mechanical phenomenon of superposition, which allows them to be both at the same time.

As Elizabeth Gibney explains in Nature, this is what makes quantum computers so incredibly fast. The set of qubits comprising the memory of a quantum computer could exist in every possible combination of 1s and 0s at once. Where a classical computer has to try each combination in turn, a quantum computer could process all those combinations simultaneously.
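One way to feel the scale of that claim: simulating n qubits on a classical machine means storing one complex amplitude per basis state, 2^n of them. A few lines of arithmetic show how fast that blows up:

```python
# Memory needed to hold a full n-qubit state vector classically,
# at 16 bytes per complex128 amplitude.
for n_qubits in (10, 30, 50):
    amplitudes = 2 ** n_qubits
    gib = amplitudes * 16 / 2**30
    print(f"{n_qubits} qubits -> {amplitudes:.3g} amplitudes ~= {gib:,.1f} GiB")
# 10 qubits fit in kilobytes; 50 qubits already need ~16 million GiB,
# which is why no classical computer can track a large quantum one.
```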

In a paper published in the journal Science Advances last week, researchers outline designs for modules containing roughly 2,500 qubits and suggest interlinking thousands of them together to create a machine containing two billion qubits. For comparison, Canadian firm D-Wave, the only commercial producer of quantum computers, just brought out its latest model featuring 2,000 qubits.

This is not the first time a modular system like this has been suggested, but previous approaches have recommended using light waves traveling through fiber optics to link the units. This results in interaction rates between modules far slower than the quantum operations happening within them, putting a handbrake on the system's overall speed. In the new design, the ions themselves are shuttled from one module to another using electrical fields, which results in 100,000 times faster connection speeds.

The system also has a much simpler way of controlling qubits. Previous designs required lasers to be carefully targeted at each ion, an enormous engineering challenge when dealing with billions of qubits. Instead, the new system uses microwave fields and the careful application of voltages, which is much easier to scale up.

The researchers concede there are still considerable technical challenges to building a device on the scale they have suggested, not to mention the cost. But they have already announced plans to build a prototype based on the design at the university, at a cost of £1-2 million.

"While this proposal is incredibly challenging, I wish more in the quantum community would think big like this," Christopher Monroe, a physicist at the University of Maryland who has worked on trapped-ion quantum computing, told Nature.

In their paper, the researchers predict their two billion qubit system could find the prime factors of a 617-digit-long number in 110 days. This is significant because many state-of-the-art encryption systems rely on the fact that factoring large numbers can take conventional computers thousands of years. This is why many in the cybersecurity world are nervous about the advent of quantum computing.
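For context on that 617-digit figure: bits ≈ digits × log₂(10), so a 617-digit decimal number is roughly a 2,048-bit number, the size of a standard RSA-2048 key. A one-liner confirms the conversion:

```python
import math

digits = 617
bits = digits * math.log2(10)                # ~3.32 bits per decimal digit
print(f"{digits} digits ~= {bits:.0f} bits") # ~2050 bits, i.e. RSA-2048 scale
```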

These researchers aren't the only ones working on bringing quantum computing into the real world, though. Google, Microsoft and IBM are all developing their own systems, and D-Wave recently open-sourced a software tool that helps those without a background in quantum physics program its machines.

All that interest is due to the enormous potential of quantum computing to solve problems as diverse and complex as developing drugs for previously incurable diseases, devising new breeds of materials for high-performance superconductors, magnets and batteries, and even turbocharging machine learning and artificial intelligence.

"The availability of a universal quantum computer may have a fundamental impact on society as a whole, said Hensinger. Without doubt it is still challenging to build a large-scale machine, but now is the time to translate academic excellence into actual application, building on the UK's strengths in this ground-breaking technology.

Image Credit: University of Sussex/YouTube


Singularity Containers for Science, Reproducibility, and HPC – Linux.com (blog)

Posted: at 3:33 am

Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, accelerators), allowing users to take full control to set up and run in their native environments. This talk explores ...


Robot Cars Can Teach Themselves How to Drive in Virtual Worlds – Singularity Hub

Posted: February 9, 2017 at 6:30 am

Over the holidays, I went for a drive with a Tesla. With, not in, because the car was doing the driving.

Hearing about autonomous vehicles is one thing; experiencing it was something entirely different. When the parked Model S calmly drove itself out of the garage, I stood gaping in awe, completely mind-blown.

If this year's Consumer Electronics Show is any indication, self-driving cars are zooming into our lives, fast and furious. Aspects of automation are already in use: Tesla's Autopilot, for example, allows cars to control steering, braking and switching lanes. Elon Musk, CEO of Tesla, has gone so far as to pledge that by 2018, you will be able to summon your car from across the country, and it'll drive itself to you.

So far, the track record for autonomous vehicles has been fairly impressive. According to a report from the National Highway Traffic Safety Administration, Tesla's crash rate dropped by about 40 percent after the company turned on its first-generation Autopilot system. This week, with the introduction of gen two to newer cars equipped with the necessary hardware, Musk is aiming to cut the number of accidents by another whopping 50 percent.

But when self-driving cars mess up, we take note. Last year, a Tesla vehicle slammed into a white truck while Autopilot was engaged, apparently confusing it with the bright white sky, resulting in the company's first fatality.

So think about this: would you entrust your life to a robotic machine?

For anyone to even start contemplating yes, the cars have to be remarkably safe: fully competent in day-to-day driving, and able to handle any emergency traffic throws their way.

Unfortunately, those edge cases also happen to be the hardest problems to solve.

To interact with the world, autonomous cars are equipped with a myriad of sensors. Google's button-nosed Waymo car, for example, relies on GPS to broadly map out its surroundings, then further captures details using its cameras, radar and laser sensors.

These data are then fed into software that figures out what actions to take next.

As with any kind of learning, the more scenarios the software is exposed to, the better the self-driving car learns.

Getting that data is a two-step process: first, the car has to drive thousands of hours to record its surroundings, which are used as raw data to build 3D maps. That's why Google has been steadily taking its cars out on field trips, some two million miles to date, with engineers babysitting the robocars to flag interesting data and potentially take over if needed.

This is followed by thousands of hours of labeling, that is, manually annotating the maps to point out roads, vehicles, pedestrians and other subjects. Only then can researchers feed the dataset, so-called labeled data, into the software for it to start learning the basics of a traffic scene.

The strategy works, but it's agonizingly slow and tedious, and the amount of experience the cars get is limited. Since emergencies tend to fall into the category of unusual and unexpected, it may take millions of miles before the car encounters the dangerous edge cases that test its software, and of course each encounter puts both car and human at risk.

An alternative, increasingly popular approach is to bring the world to the car.

Recently, Princeton researchers Ari Seff and Jianxiong Xiao realized that instead of manually collecting maps, they could tap into a readily available repertoire of open-sourced 3D maps such as Google Street View and OpenStreetMap. Although these maps are messy and in some cases can have bizarre distortions, they offer a vast amount of raw data that could be used to construct datasets for training autonomous vehicles.

Manually labeling that data is out of the question, so the team built a system that can automatically extract road features, for example, how many lanes there are, if there's a bike lane, what the speed limit is and whether the road is a one-way street.

Using a powerful technique called deep learning, the team trained their AI on 150,000 Street View panoramas until it could confidently discard artifacts and correctly label any given street attribute. The AI performed so well that it matched humans on a variety of labeling tasks, but at a much faster speed.

"The automated labeling pipeline introduced here requires no human intervention, allowing it to scale with these large-scale databases and maps," concluded the authors.
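For readers who want the flavor of such a labeler, here is a hypothetical miniature of the idea in PyTorch: one shared convolutional trunk with a separate prediction head per road attribute. Layer sizes, the attribute set, and the head design are all assumptions for illustration, not the Princeton architecture.

```python
import torch
import torch.nn as nn

class StreetAttributeNet(nn.Module):
    """Toy multi-head classifier over a street-view crop (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lane_count = nn.Linear(64, 6)   # predict 1-6 lanes
        self.bike_lane = nn.Linear(64, 2)    # present / absent
        self.one_way = nn.Linear(64, 2)      # one-way / two-way

    def forward(self, x):
        h = self.trunk(x)
        return self.lane_count(h), self.bike_lane(h), self.one_way(h)

net = StreetAttributeNet()
outputs = net(torch.randn(1, 3, 224, 224))  # one fake 224x224 RGB crop
print([tuple(o.shape) for o in outputs])    # [(1, 6), (1, 2), (1, 2)]
```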

With further improvement, the system could take over the labor-intensive job of labeling data. In turn, more data means more learning for autonomous cars and potentially much faster progress.

"This would be a big win for self-driving technology," says Dr. John Leonard, a professor specializing in mapping and automated driving at MIT.

Other researchers are eschewing the real world altogether, instead turning to hyper-realistic gaming worlds such as Grand Theft Auto V.

For those not in the know, GTA V lets gamers drive around the convoluted roads of a city roughly one-fifth the size of Los Angeles. It's an incredibly rich world: the game boasts 257 types of vehicles and 7 types of bikes, all based on real-world models. The game also simulates half a dozen kinds of weather conditions, in all giving players access to a huge range of scenarios.

It's a total data jackpot. And researchers are noticing.

In a study published in mid-2016, Intel Labs teamed up with German engineers to explore the possibility of mining GTA V for labeled data. By looking at any road scene in the game, their system learned to classify different objects in the road (cars, pedestrians, sidewalks and so on), thus generating huge amounts of labeled data that can then be fed to self-driving cars.

Of course, datasets extracted from games may not necessarily reflect the real world. So a team from the University of Michigan trained two algorithms to detect vehicles, one using data from GTA V, the other using real-world images, and pitted them against each other.

The result? The game-trained algorithm performed just as well as the one trained with real-life images, although it needed about 100 times more training data to reach the real-world algorithm's performance. That's not a problem, since generating images in games is quick and easy.

But it's not just about datasets. GTA V and other hyper-realistic virtual worlds also allow engineers to test their cars in uncommon but highly dangerous scenarios that they may one day encounter.

In virtual worlds, AIs can tackle a variety of traffic hazards (sliding on ice, hitting a wall, avoiding a deer) without worry. And if the cars learn how to deal with these edge cases in simulations, they may have a higher chance of surviving one in real life.

So far, none of the above systems have been tested on physical self-driving cars.

But with the race toward full autonomy proceeding at breakneck speed, it's easy to see companies incorporating these systems to gain an edge.

Perhaps more significant is that these virtual worlds represent a subtle shift towards the democratization of self-driving technology. Most of them are open-source, in that anyone can hop onboard to create and test their own AI solutions for autonomous cars.

And who knows, maybe the next big step towards full autonomy wont be made inside Tesla, Waymo, or any other tech giant.

It could come from that smart kid next door.

Image Credit: Shutterstock


Wearable Devices Can Actually Tell When You’re About to Get Sick – Singularity Hub

Posted: February 7, 2017 at 10:39 pm

Feeling run down? Have a case of the sniffles? Maybe you should have paid more attention to your smartwatch.

No, that's not the pitch line for a new commercial peddling wearable technology, though no doubt a few companies will be interested in the latest research published in PLOS Biology for their next advertising campaign. It turns out that some of the data logged by our personal tracking devices regarding health (heart rate, skin temperature, even oxygen saturation) appear useful for detecting the onset of illness.

"We think we can pick up the earliest stages when people get sick," says Michael Snyder, a professor and chair of genetics at Stanford University and senior author of the study, "Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information."

Snyder said his team was surprised that the wearables were so effective in detecting the start of the flu, or even Lyme disease, but in hindsight the results make sense: wearables continuously monitor vital signs such as heart rate, producing a dense set of data against which aberrations stand out, even in the least sensitive devices.

"[Wearables are] pretty powerful because they're a continuous measurement of these things," notes Snyder during an interview with Singularity Hub.

The researchers collected data for up to 24 months on a small study group, which included Snyder himself. Known as Participant #1 in the paper, Snyder benefited from the study when the wearable devices detected marked changes in his heart rate and skin temperature from his normal baseline. A test about two weeks later confirmed he had contracted Lyme disease.

In fact, during the nearly two years while he was monitored, the wearables detected 11 periods with elevated heart rate, corresponding to each instance of illness Snyder experienced during that time. It also detected anomalies on four occasions when Snyder was not feeling ill.
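The underlying signal processing is easy to caricature: learn a personal baseline from healthy days, then flag sustained deviations. The sketch below is a minimal stand-in on synthetic heart-rate data, not the lab's actual algorithm; the threshold and window length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(62, 3, size=24 * 14)   # two healthy weeks, hourly heart rate
ill = rng.normal(74, 4, size=48)            # two "ill" days with elevated heart rate
hr = np.concatenate([healthy, ill])

baseline, spread = healthy.mean(), healthy.std()
z = (hr - baseline) / spread                # deviation from the personal baseline
elevated = z > 2.5                          # assumed per-hour threshold
# Require six consecutive elevated hours before raising an alert.
sustained = np.convolve(elevated, np.ones(6), mode="same") >= 6

print("first alert at hour:", int(np.argmax(sustained)))  # fires in the ill period
```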

An expert in genomics, Snyder said his team was interested in the effectiveness of wearable technology for detecting illness as part of a broader interest in personalized medicine.

"Everybody's baseline is different, and these devices are very good at characterizing individual baselines," Snyder says. "I think medicine is going to go from reactive, measuring people after they get sick, to proactive: predicting these risks."

"That's essentially what genomics is all about: trying to catch disease early," he notes. "I think these devices are set up for that," Snyder says.

The cost savings could be substantial if a better preventive strategy for healthcare can be found. A landmark report in 2012 from the Cochrane Collaboration, an international group of medical researchers, analyzed 14 large trials with more than 182,000 people. The findings: routine checkups are basically a waste of time, doing little to lower the risk of serious illness or premature death. A news story in Reuters estimated that the US spends about $8 billion a year on annual physicals.

The study also found that wearables have the potential to detect individuals at risk for Type 2 diabetes. Snyder and his co-authors argue that biosensors could be developed to detect variations in heart rate patterns, which tend to differ for those experiencing insulin resistance.

Finally, the researchers also noted that wearables capable of tracking blood oxygenation provided additional insights into physiological changes caused by flying. While a drop in blood oxygenation during flight due to changes in cabin pressure is a well-known medical fact, the wearables recorded a drop in levels during most of the flight, which was not known before. The paper also suggested that lower oxygen in the blood is associated with feelings of fatigue.

Speaking while en route to the airport for yet another fatigue-causing flight, Snyder is still tracking his vital signs today. He hopes to continue the project by improving on the software his team originally developed to detect deviations from baseline health and sense when people are becoming sick.

In addition, Snyder says his lab plans to make the software work on all smart wearable devices, and eventually develop an app for users.

"I think [wearables] will be the wave of the future for collecting a lot of health-related information. It's a very inexpensive way to get very dense data about your health that you can't get in other ways," he says. "I do see a world where you go to the doctor and they've downloaded your data. They'll be able to see if you've been exercising, for example."

"It will be very complementary to how healthcare currently works."

Image Credit: Shutterstock


Video: Singularity Containers for Science, Reproducibility, and HPC – insideHPC

Posted: at 10:39 pm

Greg Kurtzer, LBNL

Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, accelerators), allowing users to take full control to set up and run in their native environments. This talk explores Singularity: how it combines software packaging models with minimalistic containers to create very lightweight application bundles, which can be simply executed and contained completely within their environment, or be used to interact directly with the host file systems at native speeds. A Singularity application bundle can be as simple as a single binary application or as complicated as an entire workflow, and is as flexible as you will need.
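As a flavor of how lightweight the user experience is, here is a hedged sketch of driving Singularity from Python; it assumes the `singularity` CLI is installed and that `analysis.img` is an existing container image built beforehand. `singularity exec <image> <command>` runs the command inside the container as the calling, non-root user.

```python
import subprocess

# Run a command inside an existing Singularity image as the current user.
# Assumes the singularity CLI is on PATH and analysis.img already exists.
result = subprocess.run(
    ["singularity", "exec", "analysis.img", "python", "--version"],
    capture_output=True, text=True)
print(result.stdout or result.stderr)
```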

Gregory M. Kurtzer is currently the IT HPC Systems Architect and Technology Developer at Lawrence Berkeley National Laboratory. His specialties include Linux (environment, services and deep system internals), open source and development (Perl, C, SQL, PHP, HTML, etc.), HPC applications, and the administration, automation and provisioning of large-scale system architectures. Along with his solid reputation for sparking new trends, Kurtzer has created, founded, built and contributed to communities with install counts in the millions of users, and numerous breakthrough projects including CentOS Linux, Caos Linux, Perceus, Warewulf and most recently Singularity.

