Category Archives: Artificial Intelligence

What an artificial intelligence researcher fears about AI – San Francisco Chronicle

Posted: July 14, 2017 at 5:13 am

(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)

Arend Hintze, Michigan State University

(THE CONVERSATION) As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
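
The select-and-reproduce loop Hintze describes is, at its core, a standard evolutionary algorithm: evaluate a population, keep the best performers, and refill the population with mutated copies. A minimal Python sketch of that loop follows; the genome encoding, mutation scheme and toy fitness function are invented here purely for illustration and are not the lab's actual models, which evolve neural-network "brains" inside simulated environments.

```python
import random

GENOME_LEN = 16    # hypothetical "brain" encoding: a flat list of weights
POP_SIZE = 50
GENERATIONS = 100

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in task: score is higher the closer the weights sum to a target.
    # A real neuroevolution setup would run the encoded brain in a virtual
    # environment and score the creature's behavior instead.
    return -abs(sum(genome) - 4.0)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.3) if random.random() < rate else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Evaluate every creature, keep the top half, and refill the population
    # with mutated copies of the survivors (the next "generation").
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness after evolution:", fitness(max(population, key=fitness)))
```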

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

This article was originally published on The Conversation. Read the original article here: http://theconversation.com/what-an-artificial-intelligence-researcher-fears-about-ai-78655.

Go here to read the rest:

What an artificial intelligence researcher fears about AI - San Francisco Chronicle

India’s Infosys eyes artificial intelligence profits – Phys.Org

Posted: at 5:13 am

July 14, 2017

Indian IT giant Infosys said Friday that artificial intelligence was key to future profits as it bids to satisfy clients' demands for innovative new technologies.

India's multi-billion-dollar IT outsourcing sector has long been one of the country's flagship industries. But as robots and automation grow in popularity its companies are under pressure to reinvent themselves.

"We are revealing new growth with services that we (have been) focusing on for the past couple of years includingAI (artificial intelligence) and cloud computing," said Infosys chief executive Vishal Sikka, announcing a small rise in quarterly profits.

"Going forward, we will count on strong growth coming from these services," added Sikka, who signalled his intent by arriving at the press conference in a driverless golf cart.

Infosys reported an increase of 1.4 percent in consolidated net profit year-on-year for the first quarter, marginally beating analysts' expectations.

Net profit in the three months to June 30 came in at 34.83 billion rupees ($540 million), marginally above the 34.36 billion rupees it reported in the same period last year, Infosys said.

India's $150-billion IT sector is facing upheaval in the face of automation and US President Donald Trump's clampdown on visas, with reports of mass redundancies.

Industry body Nasscom recently called on companies to teach employees new skills after claims they had failed to keep up with new technologies.

In April Infosys launched a platform called Nia to "help clients embrace AI".

"Nia continues to be central to all our conversations with clients as we work with them to transform their businesses," the company said in its earnings statement Friday.

Analysts surveyed by Bloomberg had expected profits of 34.3 billion rupees.

Infosys announced revenues of 170.78 billion rupees, marginally up from the 167.8 billion rupees reported for the same period last year.

Its shares rose nearly 3 percent in early trade after the company forecast revenue growth of between 6.5 and 8.5 percent for the current financial year.

© 2017 AFP

Read the rest here:

India's Infosys eyes artificial intelligence profits - Phys.Org

Artificial intelligence helps scientists map behavior in the fruit fly … – Science Magazine

Posted: at 5:13 am

Examples of eight fruit fly brains with regions highlighted that are significantly correlated with (clockwise from top left) walking, stopping, increased jumping, increased female chasing, increased wing angle, increased wing grooming, increased wing extension, and backing up.

Kristin Branson

By Ryan Cross, Jul. 13, 2017, 1:00 PM

Can you imagine watching 20,000 videos, 16 minutes apiece, of fruit flies walking, grooming, and chasing mates? Fortunately, you don't have to, because scientists have designed a computer program that can do it faster. Aided by artificial intelligence, researchers have made 100 billion annotations of behavior from 400,000 flies to create a collection of maps linking fly mannerisms to their corresponding brain regions.

Experts say the work is a significant step toward understanding how both simple and complex behaviors can be tied to specific circuits in the brain. "The scale of the study is unprecedented," says Thomas Serre, a computer vision expert and computational neuroscientist at Brown University. "This is going to be a huge and valuable tool for the community," adds Bing Zhang, a fly neurobiologist at the University of Missouri in Columbia. "I am sure that follow-up studies will show this is a gold mine."

At a mere 100,000 neurons (compared with our 86 billion), the small size of the fly brain makes it a good place to pick apart the inner workings of neurobiology. Yet scientists are still far from being able to understand a fly's every move.

To conduct the new research, computer scientist Kristin Branson of the Howard Hughes Medical Institute in Ashburn, Virginia, and colleagues acquired 2204 different genetically modified fruit fly strains (Drosophila melanogaster). Each enables the researchers to control different, but sometimes overlapping, subsets of the brain by simply raising the temperature to activate the neurons.

Then it was off to the Fly Bowl, a shallowly sloped, enclosed arena with a camera positioned directly overhead. The team placed groups of 10 male and 10 female flies inside at a time and captured 30,000 frames of video per 16-minute session. A computer program then tracked the coordinates and wing movements of each fly in the dish. The team did this about eight times for each of the strains, recording more than 20,000 videos. "That would be 225 straight days of flies walking around the dish if you watched them all," Branson says.

Next, the team picked 14 easily recognizable behaviors to study, such as walking backward, touching, or attempting to mate with other flies. This required a researcher to manually label about 9000 frames of footage for each action, which was used to train a machine-learning computer program to recognize and label these behaviors on its own. Then the scientists derived 203 statistics describing the behaviors in the collected data, such as how often the flies walked and their average speed. Thanks to the computer vision, they detected differences between the strains too subtle for the human eye to accurately describe, such as when the flies increased their walking pace by a mere 5% or less.
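
In outline, the pipeline is: hand-label a sample of frames, train a classifier on those labels, let it annotate the remaining footage, then compute behavior statistics from the machine-made labels. The sketch below illustrates that workflow with scikit-learn on synthetic, made-up per-frame features; it is only an illustration of the idea, not the actual tooling or feature set the Branson team used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in data: per-frame motion features (speed, wing angle,
# distance to the nearest fly, ...) plus a human-applied label of
# 1 ("walking") or 0 ("not walking") for 9000 frames.
features = rng.normal(size=(9000, 8))
labels = (features[:, 0] > 0.5).astype(int)  # fake ground truth

# Train on the hand-labeled frames...
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:6000], labels[:6000])

# ...then label the rest of the footage automatically and derive a simple
# behavior statistic, e.g. the fraction of frames spent walking.
predicted = clf.predict(features[6000:])
print("fraction of frames classified as walking:", predicted.mean())
```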

"When we started this study, we had no idea how often we would see behavioral differences between the different fly strains," Branson says. Yet it turns out that almost every strain (98% in all) had a significant difference in at least one of the behavior statistics measured. And there were plenty of oddballs: Some superjumpy flies hopped 100 times more often than normal; some males chased other flies 20 times more often than others; and some flies practically never stopped moving, whereas a few couch potatoes barely budged.

Then came the mapping. The scientists divided the fly brain into a novel set of 7065 tiny regions and linked them to the behaviors they had observed. The end product, called the Browsable Atlas of Behavior-Anatomy Maps, shows that some common behaviors, such as walking, are broadly correlated with neural circuits all over the brain, the team reports today in Cell. On the other hand, behaviors that are observed much less frequently, such as female flies chasing males, can be pinpointed to tiny regions of the brain, although this study didn't prove that any of these regions were absolutely necessary for those behaviors. "We also learned that you can upload an unlimited number of videos on YouTube," Branson says, noting that clips of all 20,000 videos are available online.

Branson hopes the resource will serve as a launching pad for other neurobiologists seeking to manipulate part of the brain or study a specific behavior. For instance, not much is known about female aggression in fruit flies, and the new maps give leads for which brain regions might be driving these actions.

Because the genetically modified strains are specific to flies, Serre doesn't think the results will be immediately applicable to other species, such as mice, but he still views this as a watershed moment for getting researchers excited about using computer vision in neuroscience. "I am usually more tempered in my public comments, but here I was very impressed," he says.

See the rest here:

Artificial intelligence helps scientists map behavior in the fruit fly ... - Science Magazine

Artificial Intelligence Will Help Hunt Daesh By December – Breaking Defense

Posted: at 5:13 am

Daesh fighters

THE NEWSEUM: Artificial intelligence is coming soon to a battlefield near you, with plenty of help from the private sector. Within six months the US military will start using commercial AI algorithms to sort through its masses of intelligence data on the Islamic State.

"We will put an algorithm into a combat zone before the end of this calendar year, and the only way to do that is with commercial partners," said Col. Drew Cukor.

Air Force intelligence analysts at work.

Millions of Humans?

How big a deal is this? Don't let the lack of general's stars on Col. Cukor's shoulders lead you to underestimate his importance. He heads the Algorithmic Warfare Cross Function Team, personally created by outgoing Deputy Defense Secretary Bob Work to apply AI to sorting the digital deluge of intelligence data.

This isn't a multi-year program to develop the perfect solution: "The state of the art is good enough for the government," he said at the Defense One technology conference here this morning. Existing commercial technology can be integrated onto existing government systems.

"We're not talking about three million lines of code," Cukor said. "We're talking about 75 lines of code placed inside of a larger software (architecture) that already exists for intelligence-gathering."

For decades, the US military has invested in better sensors to gather more intelligence, better networks to transmit that data, and more humans to stare at the information until they find something. "Our work force is frankly overwhelmed by the amount of data," Cukor said. The problem, he noted, is "staring at things for long periods of time is clearly not what humans were designed for." U.S. analysts can't get to all the data we collect, and we can't calculate how much their bleary eyes miss of what they do look at.

We can't keep throwing people at the problem. At the National Geospatial Intelligence Agency, for example, NGA mission integration director Scott Currie told the conference, "if we looked at the proliferation of the new satellites over time, and we continue to do business the way we do, we'd have to hire two million more imagery analysts."

Rather than hire the entire population of, say, Houston, Currie continued, "we need to move towards services and algorithms and machine learning, (but) we need industry's help to get there because we cannot possibly do it ourselves."

Private Sector Partners

Cukor's task force is now spearheading this effort across the Defense Department. "We're working with him and his team," said Dale Ormond, principal director for research in the Office of the Secretary of Defense. "We're bringing to bear the combined expertise of our laboratory system across the Department of Defense complex."

"We're holding a workshop in a couple of weeks to baseline where we are both in industry and with our laboratories," Ormond told the conference. "Then we're going to have a closed door session (to decide) what are the investments we need to make as a department, what is industry doing (already)."

Just as the Pentagon needs the private sector to lead the way, Cukor noted, many promising but struggling start-ups need government funding to succeed. While Tesla, Google, GM, and other investors in self-driving cars are lavishly funding work on artificial vision for collision avoidance, there's a much smaller commercial market for other technologies such as object recognition. All a Google Car needs to know about a vehicle or a building is how to avoid crashing into it. A military AI needs to know whether it's a civilian pickup or an ISIS technical with a machinegun in the truck bed, a hospital or a hideout.

An example of the shortcomings of artificial intelligence when it comes to image recognition. (Andrej Karpathy, Li Fei-Fei, Stanford University)

"These are not insurmountable problems," Cukor emphasized. The Algorithmic Warfare project is focused on defeating Daesh, he said, not on recognizing every weapon and vehicle in, say, the Russian order of battle. He believes there are only about 38 classes of objects the software will need to distinguish.

It's not easy to program an artificial intelligence to tell objects apart, however. There's no single Platonic ideal of a terrorist you can upload for the AI to compare real-life imagery against. Instead, modern machine learning techniques feed the AI lots of different real-world data (the more the better) until it learns by trial and error what features every object of a given type has in common. It's basically the way a toddler learns the difference between a car and a train (protip: count the wheels). This process goes much faster when humans have already labeled what data goes in what category.

"These algorithms need large data sets, and we're just starting labeling," Cukor said. "It's just a matter of how big our labeled data sets can get." Some of this labeling must be done by government personnel, Cukor said; he didn't say why, but presumably this includes the most highly classified material. But much of it is being outsourced to a significant data-labeling company, which he didn't name.
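
Cukor's point about data set size can be made concrete with a toy supervised-learning experiment: the same model, given progressively more labeled examples, tends to classify held-out data more accurately. Everything below (the three synthetic "object classes," the two-dimensional features, the logistic-regression model) is a hypothetical stand-in and has nothing to do with the actual military system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_labeled_data(n):
    # Stand-in for labeled imagery: feature vectors drawn from three
    # overlapping clusters, one cluster per object class.
    labels = rng.integers(0, 3, size=n)
    centers = np.array([[0.0, 0.0], [2.0, 0.5], [0.5, 2.0]])
    features = centers[labels] + rng.normal(scale=1.0, size=(n, 2))
    return features, labels

X_test, y_test = make_labeled_data(2000)
for n_labeled in (50, 500, 5000):
    X_train, y_train = make_labeled_data(n_labeled)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"{n_labeled:>5} labeled examples -> "
          f"held-out accuracy {model.score(X_test, y_test):.2f}")
```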

This all adds up to a complex undertaking on a tight timeline, something the Pentagon historically does not do well. "I wish we could buy AI like we buy lettuce at Safeway, where we can walk in, swipe a credit card, and walk out," Cukor said. There are no shortcuts.

Go here to read the rest:

Artificial Intelligence Will Help Hunt Daesh By December - Breaking Defense

IBM Lags in Artificial Intelligence: Jefferies | Investopedia – Investopedia

Posted: July 13, 2017 at 7:10 am

At a time when all sorts of technology companies are getting accolades for their artificial intelligence prowess, International Business Machines Corp. (IBM) is apparently struggling, leading Wall Street investment firm Jefferies to lower its price target on the stock.

Citing checks that show a slow AI adoption rate, Jefferies analyst James Kisner cut his price target on Big Blue to $125 from $135 a share, implying the stock could fall more than 18%. In a research note to clients, the analyst called IBM "outgunned" in the war for AI talent and argued that it's a problem that will only get worse. (See also: The Other Side of IBM's Watson AI Solution.)

"Our checks suggest that IBM's Watson platform remains one of the most complete cognitive platforms available in the marketplace today. However, many new engagements require significant consulting work to gather and curate data, making some organizations balk at engaging with IBM," wrote the analyst in the research report covered by 24/7 Wall Street.

What's more, the analyst said that with a lot of companies making significant investments in AI and a slew of startups splashing on the scene, IBM is having a hard time luring top talent to the company. Kisner pored over job listings and found that Amazon.com Inc. (AMZN) has 10 times more listings for AI professionals than IBM. It doesn't help that businesses have lots of AI options, which is why the company reduced the pricing for Watson Conversations by 70% last October, the analyst argued. (See also: How Much Money Would You Have if You Followed Buffett into IBM?)

While Jefferies thinks IBM is behind when it comes to AI, that doesn't mean the company hasn't been making strides to grow that side of the business. In March it announced a strategic deal with Salesforce.com (CRM) to jointly provide AI services and data analytics offerings that help businesses make faster and smarter decisions. Watson is a cognitive system capable of learning from earlier interactions, garnering knowledge and value over time, and thinking like a human. It works by combining AI and advanced analytical software for analysis of various forms of data, thereby providing optimal responses based on reasoning and interacting like a question-answering machine.

Salesforce Einstein is the core AI technology that powers the Salesforce CRM platform by using data mining and machine learning algorithms. It aims to proactively spot trends across sales, services and marketing systems. The system is designed to forecast behavior that could spot up-sale prospects and opportunities, or identify crisis situations in advance. Under the deal, IBM's Watson and Salesforce's Einstein will be integrated to offer intelligent customer engagement across various functions like sales, service, marketing and e-commerce.

View post:

IBM Lags in Artificial Intelligence: Jefferies | Investopedia - Investopedia

‘Many’ ways to create artificial intelligence. Just ask the UK’s AI businesses – The Register

Posted: at 7:10 am

Nothing brings a smile to the face of Sabine Toulson, co-founder in 1995 of Intelligent Financial Systems, faster than the notion that AI and its associated technologies are something new.

Both Sabine and husband Darren were graduates of UCL's Artificial Intelligence Lab alongside other veteran entrepreneurs such as Jason Kingdon, who founded UCL spinout Searchspace, which was famous at the time for the quality of its anti-money laundering software.

Searchspace has been using machine learning techniques for years to combat money laundering, employing tools that compared millions of transactions and distinguished between legitimate and fraudulent transactions between buyers and sellers.

Like Searchspace, Intelligent Financial Systems (IFS) succeeded early in cracking the difficult US financial software market. Back in 2000, the company won a contract to study and analyse the enormous volumes of data emerging daily from the Chicago Board of Trade. It was an exceptional feat, and not just because the board had given the contract to a non-US company. The episode reflects the very strong US interest both then and now in the future of the UK's AI sector.

IFS, the subject of many a takeover offer, continues to produce trading software for the London Stock Exchange, big Japanese banks and Euronext-LIFFE, among others.

That early handful of AI wizards has grown, and in the past few years (especially after Google and Twitter bought some very young UK AI companies for huge sums) interest in AI applications among a new generation has exploded.

At the same time, big improvements in computing power have accelerated a revolution in AI, with Alphabet, Amazon, Apple, Facebook and Microsoft all investing heavily. Much of the popular, if febrile, debate has concentrated on whether AI, and its Earthly agents, robots, will do us out of jobs and, ultimately, dominate us.

In practice, few realise how ubiquitous AI has already become among SMEs. By 2017 one index of SMEs found that no fewer than 192 UK companies claimed to be adopting some form of what they defined as AI or machine learning into their operations, spanning IT, medicine, biotech, the professions, security, and games.

These firms range from newcomers such as advertising decision-maker Adbrain to smart tracking micro firm Armadale Technologies, developing an Intelligent Video Surveillance (IVS) system aimed at analysing and predicting human behaviour. These companies employ word or visual matching, pattern recognition and cluster mapping techniques of pure machine learning.

In 2010 Assessment21 used AI to mark exam papers electronically. The software was originally written to help Manchester University cut the costs of setting, administering and marking traditional paper exams. Assessment21 tests students online and is apparently capable of assessing a variety of question types.

Academic software to auto-mark multiple-choice questionnaires is now standard. But Assess By Computer, Assessment21's product, can mark complex, open-ended questions that test students' understanding, not just their memory. The software picks up on key words in students' answers and allows them to be evaluated against a model answer. It can highlight answers that are similar, and be used as an anti-plagiarism tool.

Dr David Alexander Smith, meanwhile, is the key man at Matchdeck, a rival to Experian that offers an introductory service to 16 million companies, fitting buyers to sellers. The firm crunches records using data models and matching algorithms, employing something it calls an "AI web extraction engine" and a "semantic big-linked data platform".

But what exactly is AI in this context? It's a big topic with lots of related subjects and there's plenty of hype right now. Ian Page, a former Oxford academic, entrepreneur, and now director of Seven Spires Investments, reckons on many approaches to creating AI. This allows many Brit tech and engineering SMEs to coalesce under the broader AI umbrella.

"The one that is the hottest news right now is based on a much-simplified model of how individual brain cells (neurons) might connect together and process information. These Neural Nets have been around for decades but it is only with recent reductions in the cost of powerful computers that researchers have been able to build much more complex neural nets, the so-called Deep Neural Nets, and to find ways of training those DNNs on vast amounts of data," he notes.

The result is software that is able to learn, or update itself through the activity of searching and discovering patterns, connections and linkages in large volumes of data, pinpointing the sort of lateral thinking that we used to believe only the human brain was capable of achieving.
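
Page's description (layers of simplified neurons, each taking a weighted sum of its inputs and passing it through a nonlinearity) can be sketched in a few lines of NumPy. This is a toy forward pass with random weights, purely to show the structure; a real deep neural net has vastly more units and learns its weights from training data rather than keeping random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer of simplified "neurons": each neuron computes a weighted sum
    # of its inputs and passes it through a nonlinearity (here, ReLU).
    return np.maximum(0.0, inputs @ weights + biases)

# A toy "deep" net: 8 inputs -> two hidden layers of 16 units -> 1 output.
# Training would adjust these weights by comparing outputs against labeled
# examples and nudging the weights to reduce the error.
x = rng.normal(size=(1, 8))
h1 = layer(x, rng.normal(size=(8, 16)), np.zeros(16))
h2 = layer(h1, rng.normal(size=(16, 16)), np.zeros(16))
output = h2 @ rng.normal(size=(16, 1))
print("network output:", output.ravel())
```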

In the 1990s, Page's research group implemented AI algorithms of different types: neural networks, simulated annealing, genetic/evolutionary algorithms, cellular automata, and even a singing synthesiser.

But, in his view, computers and AI software will still have a hard time competing in real world functions with the human brain. "It can't be irrelevant that the human/mammalian brain has lots of diverse physical structure," Page said.

"Whatever the human brain is doing, it definitely is not doing it within a single architectural paradigm. So, if nature and evolution couldn't do it (general intelligence that is) within a single network of neurons, however big, then it seems odds-on favourite that AI researchers won't be able to crack that problem either within the framework of only DNNs."

Neural networks today typically have a few thousand to a few million units and millions of connections. Hilariously, their computing power is similar to the brain of a worm and several orders of magnitude simpler than a human brain.

Perhaps the most interesting fact is the way ordinary UK companies (those outside the Silicon Roundabout bubble and beyond the blinkers of those focussed on digital personal assistants like Siri) have forged products, processes and markets across the widest range of applications.

IntelliMon, part of STS Defence, this year introduced a satellite-linked monitoring technology that can monitor the biggest marine diesel engines on the high seas and transmit a simple health score to a vessel's operator thousands of miles away. The system employs a combination of sensors to capture vast amounts of data and machine learning.

Being able to predict when a supertanker, container vessel or cruise ship needs to be brought into port for engine maintenance can avoid breakdowns at sea, saving six-figure sums for shipping owners and management companies.

The innovation lies primarily in the algorithms devised by the Institute of Industrial Research at the University of Portsmouth. They analyse vibration readings by extracting key engine performance indicators that can be translated into basic, byte-sized health score information. These can then be sent back to shore via satellite link or, potentially, even using the vessel's own automatic ID transponder.
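
As a rough sketch of what "extracting key engine performance indicators" and collapsing them into a byte-sized health score could look like, here is an illustrative NumPy example on a synthetic vibration signal. The sample rate, baselines and scoring formula are all assumptions made for this sketch; the Portsmouth algorithms are not detailed in the article and will certainly differ.

```python
import numpy as np

rng = np.random.default_rng(0)
SAMPLE_RATE = 1000  # Hz; hypothetical accelerometer sampling rate

# Stand-in vibration reading: a dominant 30 Hz tone plus sensor noise.
t = np.arange(0, 2.0, 1.0 / SAMPLE_RATE)
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.normal(size=t.size)

# Two simple indicators: overall vibration energy (RMS) and the dominant
# frequency taken from the signal's spectrum.
rms = np.sqrt(np.mean(signal ** 2))
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1.0 / SAMPLE_RATE)
dominant_hz = freqs[spectrum.argmax()]

# Collapse the indicators into a single 0-100 score by penalising deviation
# from hypothetical "healthy" baselines; the result fits in one byte.
BASELINE_RMS, BASELINE_HZ = 0.75, 30.0
penalty = (abs(rms - BASELINE_RMS) / BASELINE_RMS
           + abs(dominant_hz - BASELINE_HZ) / BASELINE_HZ)
health_score = int(np.clip(100 * (1 - penalty), 0, 100))
print(f"RMS={rms:.2f}, dominant={dominant_hz:.1f} Hz, score={health_score}/100")
```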

David Garrity, STS Defence chief scientist, said: "We began work with 450 tests of different faults created on a purpose-built diesel engine test rig [we] developed which operated at constant speed bands, mimicking engines on ships. Other potential applications lie in off-road vehicles, whether battle tanks or earth movers, and remote diesel generators in oil and gas installations."

Earlier, in October 2016, it had designed an electronic personal protection system for the emergency services that detects and predicts the rapid rise in temperature that precedes a flashover incident. Thermal sensors use artificial intelligence to analyse the rapidly changing temperatures in a smoke-filled contained-fire environment where firefighters frequently operate. Its warnings give firefighters vital time to flee.

Rainbird Technologies has won an enviable contract with financial services giant Mastercard. The payments giant will use its smarts to power an automated, virtual sales assistant. Rainbird claims to offer a "cognitive reasoning platform", something that uses machine learning and lots of relevant data to make recommendations. With Mastercard, Rainbird will use the experience gleaned from the entire sales team and the thousands of customer conversations to help predict which calls might convert to sales.

The UK AI ventures and projects are as strong as they were more than 25 years ago when Sabine got off that plane from Chicago with a contract in her pocket.

Originally posted here:

'Many' ways to create artificial intelligence. Just ask the UK's AI businesses - The Register

A Blueprint for Coexistence with Artificial Intelligence – WIRED

Posted: at 7:10 am

For most of my adult life, I have been maniacally focused on my work. I would answer emails instantly during the day, and even get up twice each night to ensure that all the emails were answered. Yes, I would spend time with my family members, but just so they didn't complain, and not an hour more.

Then in September 2013, I was diagnosed with fourth-stage lymphoma. I faced the real possibility that my remaining time on Earth would be measured in months. As terrifying as that was, one of my strongest feelings was an instant, irretrievable, and painful regret. As Bronnie Ware's book about regrets of people on their deathbeds all too accurately describes, I was wracked with remorse over not spending more time sharing love with the people I cared about most.

Kai-Fu Lee, Ph.D., is the Founder and CEO of Sinovation Ventures and the president of its Artificial Intelligence Institute.

I am now in remission, so I can write this piece. I am spending much more time with my family. I moved closer to my mother. Whether on business or for pleasure, I travel with my wife. Formerly, when my grown kids came home, I would take two or three days off from work to see them. Now I take two or three weeks. I spend weekends traveling with my best friends. I took my company on a one-week vacation to Silicon Valley, their Mecca. I meet with young people who send me questions on Facebook. I have reached out to people I offended years ago and asked for their forgiveness and friendship.

This near-death experience has not only changed my life and priorities, but also altered my view of artificial intelligence, the field that captured my selfish attention for all those years. This personal reformation gave me an enlightened view of what AI should mean for humanity. Many of the recent discussions about AI have concluded that this scientific advance will likely take over the world, dominate humans, and end poorly for mankind. But my near-death experience has enabled me to envision an alternate ending to the AI story, one that makes the most of this amazing technology while empowering humans not just to survive, but to thrive.

My catharsis came at a point when we were losing perspective on AI. For much of my career, the great accomplishments of this scientific pursuit always seemed to be five years away. But recently they have been cascading one after another, most strikingly with AlphaGo's victory in 2016. There is a feeling that HAL, the stubborn and deadly computer in 2001: A Space Odyssey, is looming at the gates, and a form of near-panic has set in. We are bombarded with dire predictions by a number of self-appointed futurists about superintelligence, singularity, cyborgs, and the unprovable claim that we live in a video game. These dystopian warnings are infectious, because they come from famous people, and perhaps because they are reinforced by the familiar plots of science fiction.

As someone who has worked on AI for 37 years, I assure you that there exists no engineering basis for these outlandish predictions. Science fiction is all fiction, and very little science, and it would be catastrophic for mankind to capitulate to these imaginative but irresponsible predictions.

What's more, the real AI story is itself as fascinating as any novel, and indeed it has its dark side. The excitement behind AI today is largely due to a 2010 invention called deep learning, which uses massive amounts of data to optimize decision engines with superhuman accuracy. Given a massive amount of data in a particular domain, deep learning can be used to optimize single objective functions, such as "win Go," "minimize default rate," or "maximize speech recognition accuracy."

The results have been spectacular. Armed with deep learning and other machine-learning technologies, AI has proven capable of matching or surpassing some of the most impressive human feats of intelligence. It has vanquished human world champions in Go and poker, and is already superior to the average person in recognizing faces, videos, or words from speech. Critical mobile and internet applications, such as search ranking, e-commerce recommendation, and speech agents like Siri and Alexa, aren't even imaginable without AI.

Naturally, businesses are using AI to automate tasks that humans used to perform. These include chatbots for customer service, loan officers for approving loans, and security guards for checking IDs. For example, my team invested in a company called Smart Finance, which built an app that uses an AI as a loan officer. Initially, this company lost money due to a high rate of bad loans, but the AI learning kicked in, and with enough data accumulated, the bad loan rate dropped dramatically. It can now make a loan decision in seconds, with higher accuracy than a loan officer who takes hours. And it is infinitely scalable: This company will underwrite about 30 million loans this year, more than any bank that I know of. All of this happened in under two years.

This is clearly threatening news for loan officers. The core functions of other jobs (such as tellers, tele-salespeople, paralegals, reporters, stock traders, research analysts, and radiologists) will gradually be replaced by AI software. And as robotics evolve, including semi-autonomous and autonomous hardware, AI will perform the labor of factory workers, construction workers, drivers, delivery people, and many others.

The AI revolution is on the scale of the Industrial Revolution, probably larger and definitely faster. But while robots may take over jobs, believe me when I tell you there is no danger that they will take over. These AIs run narrow applications that master a single domain each time, but remain strictly under human control. The necessary ingredient of dystopia is General AI: AI that by itself learns common sense reasoning, creativity, and planning, and that has self-awareness, feelings, and desires. This is the stuff of the singularity that the Cassandras predict. But General AI isn't here. There are simply no known engineering algorithms for it. And I don't expect to see them any time soon. The singularity hypothesis extrapolates exponential growth from the recent boom, but ignores the fact that continued exponential growth requires scientific breakthroughs that are unlikely to be solved for a hundred years, if ever.

So based on these engineering realities, instead of discussing this fictional super-intelligence, we should focus on the very real narrow AI applications and extensions. These will proliferate quickly, leading to massive value creation and an Age of Plenty, because AI will produce fortunes, make strides to eradicate poverty and hunger, and give all of us more spare time and freedom to do what we love. But it will also usher in an Age of Confusion. As an Oxford study postulates, AI will replace half of human jobs, and many people will become depressed as they lose their jobs and the purpose that comes with gainful employment.

It is imperative that we focus on the certainty of these serious issues, rather than talking about dystopia, singularity, or super-intelligence. Perhaps the most vexing question is: How do we create enough jobs to place these displaced workers? The answer to this question will determine whether the alternate ending to the AI story will be happy or tragic.

One suggested solution is to try to move people to jobs that are a step or two ahead of what machines can do. The idea would be to transition people to jobs that require higher dexterity (e.g., retrain an assembly line worker to be a plumber), hidden talent (e.g., encourage an accountant to pursue her dream of becoming a comedian), or new skills (e.g., train a cooling expert for a giant AI data center). Of course we should try this, but these numbers would be infinitesimal compared to the number of jobs displaced. And it is only the rarest accountant who can kill it at the Comedy Cellar.

There are other optimists who try to hand-wave the problem away by saying that new jobs have been created with every technological revolution, so we should have faith. These modern Panglosses often cite the Industrial Revolution, the office revolution (typewriters, calculators, mimeograph machines, etc.), and the computer revolution as examples. As a well-known 2013 Oxford study by Carl Frey and Michael Osborne has shown, each of the previous revolutions created some jobs (such as assembly line workers) even as they destroyed others (trained hand-craftsmen). But in the upcoming AI revolution, when AI replaces humans for a task it often does so completely, without creating new jobs or tasks. So, we cannot expect AI to solve our employment problem. We must solve it for ourselves.

The answer I propose would never have come to me when I was myself somewhat of an automaton, living to work rather than the other way around. It was only my cancer diagnosis, and the sudden realization of what my own stupidity had made me miss, that led me to my suggestion. Our coexistence with artificial intelligence hinges on combining what is humanly unattainable (the hugely scaled narrow AI intelligence that will only get better at any given domain) with what we humans can uniquely offer to one another. And that is love. What makes us human is that we can love.

We are far from understanding the human heart, let alone replicating it. But we do know that humans are uniquely able to love and be loved. The moment when we see our newborn babies; the feeling of love at first sight; the warm feeling from friends who listen to us empathetically; the feeling of self-actualization when we help someone in need. Loving and being loved are what makes our lives worthwhile.

Love is what will always differentiate us from AI. Narrow AI has no self-awareness, emotions, or a heart. Narrow AI has no sense of beauty, fun, or humor. It doesn't even have feelings or self-consciousness. Can you imagine the ecstasy that comes from beating a world champion? AlphaGo bested the globe's best player, but took no pleasure in the game, felt no happiness from winning, and had no desire to hug a loved one after its victory.

Despite what science fiction movies may portray, I can tell you responsibly that AI programs cannot love. Scarlett Johansson may have been able to convince you otherwise, because she is an actress who drew on her knowledge of love.

Imagine a situation in which you informed a smart machine that you were going to pull its plug, and then changed your mind and gave it a reprieve. The machine would not change its outlook on life or vow to spend more time with its fellow machines. It would not grow, as I did, or serve others more generously.

Love is what is missing from machines. That's why we must pair up with them, to leaven their powers with what only we humans can provide. Your future AI diagnostic tool may well be 10 times more accurate than human doctors, but patients will not want a cold pronouncement from the tool: "You have fourth stage lymphoma and a 70 percent likelihood of dying within five years." That in itself would be harmful. Patients would benefit, in health and heart, from a "doctor of love" who will spend as much time as the patient needs, always be available to discuss their case, and who will even visit the patients at home. This doctor might encourage us by sharing stories such as, "Kai-Fu had the same lymphoma, and he survived, so you can too." This kind of doctor of love would not only make us feel better and give us greater confidence, but would also trigger a placebo effect that would increase our likelihood of recuperation. Meanwhile, the AI tool would watch the Q&A between the doctor of love and the patient carefully, and then optimize the treatment. If scaled across the world, the number of doctors of love would greatly outnumber today's doctors.

The same idea could apply to lawyers, teachers, accountants, and wedding planners. In innumerable instances, excellent AI tools may emerge, but the human-to-human interface is critical to ensuring we feel listened to and cared for when we encounter important life events. We should encourage more people to go into service careers, choosing the ones into which they can pour their hearts and souls, spreading their love and experiences, whether as a passionate tour guide, an attentive concierge, a funny bartender, an infectious hair dresser, or an innovative sushi chef.

We should also work hard to invent new service jobs that deliver joy and love. Imagine a nutritional chef who comes to your home to cook only with fresh, organic, local ingredients. Or perhaps the season changer who changes and redecorates your closets seasonally, with flowers and aromas that make changing clothes a fun experience. Or perhaps an elderly companion who takes your aging parents to see a "doctor of love" when you cannot.

There will also be a big demand for social workers who answer the hotlines for displaced workers, dealing with their depression and anxiety. Volunteering service jobs today may turn into real jobs of the future: assisting at a blood bank, teaching at an orphanage, mentoring at Scouts organizations, or being a sponsor at AA or the Veterans Recruitment Appointment. Each of these jobs will deliver love and empathy, and there will be so many that we can replace many, if not all, of that 50 percent loss that comes from automation. Most importantly, the people filling these new jobs will fill our planet with love and joy.

So, this is the alternate ending to the narrative of AI dystopia. An ending in which AI performs the bulk of repetitive jobs, but the gap is filled by opportunities that require our humanity.

Can I guarantee that scientists in the future will never make the breakthroughs that will lead to the kind of general-intelligence computer capabilities that might truly threaten us? Not absolutely. But I think that the real danger is not that such a scenario will happen, but that we won't embrace the option to double down on humanity while also using AI to improve our lives. This decision is ultimately up to us: Whatever we choose may become a self-fulfilling prophecy. If we choose a world in which we are fully replaceable by machines, whether it happens or not, we are surrendering our humanity and our pursuit for meaning. If everyone capitulates, our humanity will come to an end.

Such a capitulation is not only premature and unproven, but also irresponsible to our legacy, our ancestors, and our maker. On the other hand, if we choose to pursue our humanity, and even if the improbable happens and machines truly replace us, we can then capitulate knowing that we did the responsible thing, and that we had fun doing it. We will have no regrets over how we lived.

I do not think the day will ever come, unless we foolishly make it happen ourselves. Let us choose to let machines be machines, and let humans be humans. Let us choose to use our machines, and love one another.

More:

A Blueprint for Coexistence with Artificial Intelligence - WIRED

How Artificial Intelligence Is Changing Storytelling – HuffPost

Posted: July 12, 2017 at 12:28 pm

Artificial Intelligence, or AI, can create dynamic content. Let's apply best use cases to our work as storytellers.

At this year's Wimbledon Tennis Tournament, for example, IBM's artificial intelligence platform, Watson, had a major editorial role -- analyzing and curating the best moments and data points from the matches, producing Cognitive Highlight videos, tagging relevant players and themes, and sharing the content with Wimbledon's global fans.

Intel just announced a collaboration with the International Olympic Committee (IOC) that will bring VR, 360 replay technology, drones and AI to future Olympic experiences. In a recent press release Intel notes, "The power to choose what they want to see and how they want to experience the Olympic Games will be in the hands of the fans."

In the context of development, future technology will change the way we interact with global communities. Researchers at Microsoft are experimenting with a new class of machine-learning software and tools to embed AI onto tiny intelligent devices. These edge devices don't depend on internet connectivity, reduce bandwidth constraints and computational complexity, and limit memory requirements yet maintain accuracy, speed, and security, all of which can have a profound effect on the development landscape. Specific projects focus on small farmers in poor and developing countries, and on precision wind measurement and prediction.

Microsoft's technology could help push the smarts to small cheap devices that can function in rural communities and places that are not connected to the cloud. These innovations could also make the Internet of Things devices cheaper, making it easier to deploy them in developing countries, according to a leading Microsoft researcher.
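
One common way to make a model small enough for such devices is to quantize its weights, for example storing them as 8-bit integers instead of 32-bit floats and converting back only at inference time. The NumPy sketch below shows the idea on a random weight matrix; it is a generic illustration of weight quantization, not the specific compression techniques Microsoft's researchers are developing.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)

# Affine 8-bit quantization: map the float range onto 256 integer levels.
w_min, w_max = float(weights.min()), float(weights.max())
scale = (w_max - w_min) / 255.0
quantized = np.round((weights - w_min) / scale).astype(np.uint8)

# Dequantize when the model runs; the approximation error is small compared
# with the 4x reduction in storage (float32 -> uint8).
restored = quantized.astype(np.float32) * scale + w_min
print("bytes before:", weights.nbytes, "after:", quantized.nbytes)
print("max absolute error:", float(np.abs(weights - restored).max()))
```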

But the fact is, the non-western setting is currently the greatest challenge for AR/VR platforms. Wil Monte, founder and Director of Millipede, one of our SecondMuse collaborators, says currently VR/AR platforms are completely hardware reliant and, being a new technology, often require a specification level that is cost-prohibitive to many.

Monte says labs like Microsoft pushing the processing capability of machine learning, while crunching the hardware requirements, will mean that the implementation of the technologies will soon be much more feasible in a non-western or developing setting. He says development agencies should be empowered to push, optimise and democratise the technology so it has as many use cases as possible, therefore enabling storytellers to deploy much needed content to more people, in different settings.

"From our experience in Tonga, I learned that while the delivery of content via AR/VR is especially compelling, the infrastructure restraints means that we need to 'hack' the normal deployment and distribution strategies to enable the tech to have the furthest reach. With Millipede's lens applied, this would be immersive and game-based storytelling content, initially delivered on touch devices but also reinforced through a physical board or card game to enable as much participation in the story as possible, Monte says.

According to Ali Khoshgozaran, Co-founder and CEO of Tilofy, an AI-powered trend forecasting company based in Los Angeles, content creation is one of the most exciting segments where technology can work hand in hand with human creativity to apply more data-driven, factual and interactive context to a story. For example, at Tilofy, they automatically generate insights and context behind all their machine-generated trend forecasts. "When it comes to accessing knowledge and information, issues of digital divide, low literacy, low internet penetration rate and poor connectivity still affect hundreds of millions of people living in rural and underdeveloped communities all around the world," Khoshgozaran says.

This presents another great opportunity for technology to bridge the gap and bring the world closer. Microsoft's use of AI in Skype's real-time translator service has allowed people from the furthest corners of the world to connect -- even without understanding each other's native language -- using a cellphone or a landline. Similarly, Google's widely popular Translate service has opened a wealth of content originally created in one language to speakers of many others. "Due to its constant improvements in quality and in the number of languages covered, Google Translate might soon enhance or replace human-centric efforts like Project Lingua by auto-translating trending news at scale," Khoshgozaran says.
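
Neither Skype Translator nor Google Translate exposes its internals, but the basic step of auto-translating a trending headline can be sketched with an open-source neural translation model from the Hugging Face hub. The model name and example sentence below are assumptions chosen for illustration, not the systems the article describes.

```python
# Sketch: machine-translating a headline with an open-source model --
# not the actual Skype Translator or Google Translate systems.
from transformers import pipeline

# MarianMT models from Helsinki-NLP cover many language pairs; this one is English->Spanish.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

headline = "Artificial intelligence is changing how stories are told."  # example input
result = translator(headline)
print(result[0]["translation_text"])
```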

Furthermore, technologies like Google Tango and Apple's ARKit can provide new opportunities, says Ali Fardinpour, Research Scientist in Learning and Assessment via Augmented/Virtual Reality at CingleVue International in Australia. "The opportunity to bring iconic characters out of literature and history to every kid's mobile phone or tablet, and to educate them on important issues and matters in life, can be one of the benefits of augmented reality storytelling."

Fardinpour says this kind of technology can make up for missing or misleading mainstream media coverage and educate kids -- and even adults -- about current development projects. "I am sure there are a lot of amazing young storytellers who would love the opportunity to create their own stories to inspire their communities. And this is where AR/AI can play an important role."

A profound view of the future of storytelling comes from Tash Tan, co-founder of Sydney-based digital company S1T2. Tan is leading one of our immersive storytelling projects in the South Pacific, called LAUNCH Legends, aimed at addressing issues of healthy eating and nutrition through the use of emerging, interactive technologies. "As storytellers it is important to consider that perhaps we are one step closer to creating a truly dynamic story arc with artificial intelligence. This means that stories won't be predetermined, pre-authored or curated; instead they will be emergent and dynamically generated with every action or consequence," Tan says. "If we can create a world that is intimate enough and subsequently immersive enough, we can perhaps teach children through the best protagonist of all -- themselves."

A version of this story first appeared on the United Nations System Staff College blog earlier today.

See more here:

How Artificial Intelligence Is Changing Storytelling - HuffPost

Posted in Artificial Intelligence | Comments Off on How Artificial Intelligence Is Changing Storytelling – HuffPost

Google's Artificial Intelligence Destroyed the World's Best Go Player. Then He Gave This Extraordinary Response – Inc.com

Posted: at 12:28 pm

It was billed as a battle of human intelligence versus artificial intelligence, man versus machine.

The machine won.

Just over a month ago, a Google computer program named AlphaGo competed against 19-year-old Chinese prodigy Ke Jie, the top-ranked player of what is believed to be the world's most sophisticated board game, Go. (According to Wikipedia, the number of possible board configurations in Go -- estimated to be greater than the total number of atoms in the visible universe -- vastly outweighs the number in chess.)
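
The scale of that comparison is easy to check with back-of-the-envelope arithmetic: a 19x19 board has 361 points, each of which can be empty, black, or white, giving at most 3^361 (roughly 10^172) configurations, while the number of atoms in the visible universe is usually estimated at around 10^80. A quick sketch:

```python
# Back-of-the-envelope comparison of Go's state space to atoms in the universe.
import math

board_points = 19 * 19                      # 361 intersections on a Go board
upper_bound = 3 ** board_points             # each point: empty, black, or white
exponent = int(math.log10(upper_bound))     # ~172

print(f"Naive Go upper bound: about 10^{exponent} configurations")
print(f"That is roughly 10^{exponent - 80} times the ~10^80 atoms in the visible universe")
```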

Soon after losing the decisive second match in a series of three, Ke blamed his loss on the very element that separated him from his foe:

His emotions.

"I was very excited. I could feel my heart bumping," Ke told The New York Times in an interview. "Maybe because I was too excited I made some stupid moves.... Maybe that's the weakest part of human beings."

But this was just the beginning.

Fast forward one month.

With some time to reflect, Ke Jie said the following in an interview (which was shared on Twitter by Demis Hassabis, co-founder and CEO of DeepMind, the company that developed AlphaGo):

"After my match against AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly. I hope all Go players can contemplate AlphaGo's understanding of the game and style of thinking, all of which is deeply meaningful. Although I lost, I discovered that the possibilities of Go are immense and that the game has continued to progress. I hope that I too can continue to progress, that my golden era will persevere for a few more years, and that I will keep growing stronger."

Absolutely brilliant.

In a few short sentences, Ke demonstrated that what he felt was a weakness--the impact of emotion--was actually his greatest strength.

It's the hurt from losing that caused Ke to engage in self-reflection and find meaning in his loss. It's emotion that inspired him to pursue growth and progress.

I see this as a remarkable example of emotional intelligence (EI), the ability to make emotions work for you instead of against you. EI is about much more than identifying our natural abilities, tendencies, strengths, and weaknesses. It involves learning to understand, manage, and maximize all of those traits so that they work in your favor.

When we develop emotional intelligence, failure isn't bad. It's just another learning opportunity. It's about cultivating a mindset of continuous growth, continuing the journey of self-improvement.

These are also very "human" elements.

I guess the machines didn't win after all.

See the original post:

Google's Artificial Intelligence Destroyed the World's Best Go Player. Then He Gave This Extraordinary Response - Inc.com

Posted in Artificial Intelligence | Comments Off on Google's Artificial Intelligence Destroyed the World's Best Go Player. Then He Gave This Extraordinary Response – Inc.com

Artificial Intelligence Poised to Improve Lives of People With Disabilities – HuffPost

Posted: at 12:28 pm

By Shari Trewin, IBM T.J. Watson Research Center and Chair, Association for Computing Machinery Special Interest Group on Accessible Computing (SIGACCESS)

Are you looking forward to a future filled with smart cognitive systems? Does artificial intelligence sound too much like Big Brother? For many of us, these technologies promise more freedom, not less.

One of the distinctive features of cognitive systems is the ability to engage with us, and the world, in more human-like ways. Through advances in machine learning, cognitive systems are rapidly improving their ability to see, to hear, and to interact with humans using natural language and gesture. In the process, they also become more able to support people with disabilities and the growing aging population.

The World Health Organization estimates that 15 percent of the global population lives with some form of disability. By 2050, people aged 60 and older will account for 22 percent of the world's population, with age-related impairments likely to increase as a result.

I'm cautiously optimistic that by the time I need it, my car will be a trusted independent driver. Imagine the difference it will make for those who cannot drive to be able to accept any invitation, or any job offer, without depending on another person or on public transport to get them there. Researchers and companies are also developing cognitive technologies for accessible public transportation. For example, IBM, the CTA (Consumer Technology Association) Foundation, and Local Motors are exploring applications of Watson technologies to develop the world's most accessible self-driving vehicle, able to adapt its communication and personalize the overall experience to suit each passenger's unique needs. Such a vehicle could use sign language with deaf people; describe its location and surroundings to blind passengers; recognize and automatically adjust access and seating for those with mobility impairments; and ensure all passengers know where to disembark.

The ability to learn and generalize from examples is another important feature of cognitive technologies. For example, in my smart home, sensors backed by cognitive systems that interpret their data will learn my normal activity, recognize falls, and proactively alert my family or caregivers before a situation becomes an emergency, enabling me to live more safely and independently in my own home. My stove will turn itself on when I put a pot on it, and I'll tell it to "cook this pasta al dente," then go off for a nap, knowing it will turn itself off and has learned the best way to wake me.
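
One way such a system could learn what "normal" looks like is classic anomaly detection: fit a model on routine sensor readings and flag readings that fall outside that pattern. The toy sketch below is an illustration of that idea only, not any particular vendor's system; the simulated features and thresholds are invented.

```python
# Toy sketch of learning "normal" home activity and flagging anomalies --
# an illustration of the idea, not any real smart-home product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated hourly features: [motion events, average room temperature in C]
normal_days = rng.normal(loc=[20.0, 21.0], scale=[3.0, 0.5], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_days)

# A sudden reading with no motion and a cold room might indicate a problem.
new_reading = np.array([[0.0, 14.0]])
label = detector.predict(new_reading)       # -1 means "anomalous", 1 means "normal"
print("Alert a caregiver." if label[0] == -1 else "All normal.")
```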

All of this may sound futuristic, but in the subfield of computer science known as accessibility research, machine learning and other artificial intelligence techniques are already being applied to tackle obstacles faced by people with disabilities and to support independent aging. For example, people with visual impairments are working with researchers on machine learning applications that will help them navigate efficiently through busy and complex environments, and even to run marathons. Cognitive technologies are being trained to recognize interesting sounds and provide alerts for those with hearing loss; to recognize items of interest in Google Street View images, such as curb cuts and bus stops; to recognize and produce sign language; and to generate text summaries of data, tailored to a specific reading level.

One of the most exciting areas is image analysis. Cognitive systems are learning to describe images for people with visual impairment. Currently, making images accessible to the visually impaired requires a sighted person to write a description of the image that can then be read aloud by a computer to people who can't see the original image. Despite well-established guidelines from the World Wide Web Consortium (W3C), and legislation in many countries requiring alternative text descriptions for online images, such descriptions are still missing from many websites. Cognitive technology for image interpretation may, at last, offer a solution. Facebook is already rolling out an automatic description feature for images uploaded to its social network. It uses cognitive technologies to recognize characteristics of the picture and automatically generates basic but useful descriptions such as "three people, smiling, beach."
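
Facebook's pipeline is proprietary, but the basic step -- turning a vision model's top predictions into short, readable alt text -- can be sketched with an off-the-shelf image classifier. The image path below is a placeholder, and the pretrained model is a generic stand-in rather than Facebook's system.

```python
# Sketch: generate basic alt text from an image classifier's top predictions.
# Illustrative only; this is not Facebook's automatic alt-text system.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("photo.jpg").convert("RGB")       # placeholder image path
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))

top = logits.softmax(dim=1).topk(3)                  # three most likely labels
labels = [weights.meta["categories"][i] for i in top.indices[0].tolist()]
print("Image may contain: " + ", ".join(labels))     # e.g. "Image may contain: seashore, ..."
```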

The possibilities for cognitive technology to support greater autonomy for people with disabilities are endless. We are beginning to see the emergence of solutions that people could only dream of a decade ago. Cognitive systems, coupled with sensors in our homes, in our cities and on our bodies will enhance our own ability to sense and interpret the world around us, and will communicate with us in whatever way we prefer.

The more that machines can sense and understand the world around us, the more they can help people with disabilities to overcome barriers, by bridging the gap between a person's abilities and the chaotic, messy, demanding world we live in. Big Brother may not be all bad after all.

Read the rest here:

Artificial Intelligence Poised to Improve Lives of People With Disabilities - HuffPost

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Poised to Improve Lives of People With Disabilities – HuffPost
