Daily Archives: May 20, 2017

In-Depth: AI in Healthcare- Where we are now and what’s next – MobiHealthNews

Posted: May 20, 2017 at 6:50 am

The days of claiming artificial intelligence as the feature that sets one startup or company apart from the others are over. These days, one would be hard-pressed to find any technology company attracting venture funding or partnerships that doesn't claim to use some form of machine learning. But for companies trying to innovate in healthcare using artificial intelligence, the stakes are considerably higher, meaning the hype surrounding the buzzword can be deflated far more quickly than in other industries, where a mistaken algorithm doesn't mean the difference between life and death.

Over the past five years, the number of digital health companies employing some form of artificial intelligence has dramatically increased. CB Insights tracked 100 AI-focused healthcare companies just this year, and noted 50 had raised their first equity rounds since January 2015. Deals in the space grew from under 20 in 2012 to nearly 70 in 2016. A recent survey found that more than half of hospitals plan to adopt artificial intelligence within five years, and 35 percent plan to do so within two years. In Boston, Partners HealthCare just announced a 10-year collaboration with GE Healthcare to integrate deep learning technology across their network. The applications for AI go far beyond just improving clinician workflow and processing claims faster.

"The problem we are trying to solve is one of productivity," Andy Slavitt, the former acting administrator of the Centers for Medicare and Medicaid Services, said during the Light Forum, a two-day conference that brought together CEOs, healthcare IT experts, policymakers and physicians at Stanford University last week. "We need to be taking care of more people with less resources, but if we chase too many problems and business models or try to invent new gadgets, that's not going to change productivity. That's where data and machine learning capabilities will come in."

Respondents to the hospital survey said the technology could have the most impact on population health, clinical decision support, diagnostic tools and precision medicine. Even drug development, real-world evidence collection and clinical trials could be faster, cheaper and more accurate with AI. But the time to put all of our faith in AI has not yet arrived. "The human brain is a really strong prior on what makes sense," Andrew Maas, chief scientist and cofounder of Roam Analytics, said during the Light Forum. "Computers are powerful on assessing, but not on the level of reliability you will trust soon."

How do we get there?

So everybody wants it, but just how soon will we see the purported transformation of healthcare from machine learning? Lately, we've seen it in everything from the most straightforward apps to the most complex diagnostic tasks, coming in the form of natural language processing, image recognition and powerful algorithms crunching databases built from decades of medical research.

Like any other technology in healthcare, AI can't be brought in without a mountain of extra challenges, including regulatory barriers, interoperability with legacy hospital IT systems, and serious limitations on access to the crucial medical data needed to build powerful health-focused algorithms in the first place. But that's not stopping innovation, albeit cautious innovation, and digital health stakeholders are realizing that unlocking AI's full potential requires strategic partnerships, quality data, and a sober understanding of statistics.

As the understanding of AI in healthcare matures, the biggest names in technology aren't shying away from the mountainous challenges that come with innovating in the industry, like regulatory barriers, legal access to quality data and the perennial lack of interoperability. Just this week, Google announced it is extending its tried-and-true consumer-level machine learning capabilities into healthcare. Google Brain, the company's research team, worked with the likes of Stanford and the University of California, San Francisco to acquire de-identified data from millions of patients.

It's more than that, as Google CEO Sundar Pichai explained at the tech giant's Google I/O developer event last week. Last year, the company launched its Tensor computing centers, which it describes as AI-first data centers.

"At Google, we are bringing all of our AI efforts together under Google.ai. It's a collection of efforts and teams across the company focused on bringing the benefits of AI to everyone," Pichai said. "Google.ai will focus on three areas: Research, Tools and Infrastructure, and Applied AI."

In November, Google researchers published a paper in JAMA showing that Google's deep learning algorithm, trained on a large data set of fundus images, can detect diabetic retinopathy with better than 90 percent accuracy. Pichai said another area they are looking to apply AI is pathology.

"This is a large data problem, but one which machine learning is uniquely equipped to solve," he said. "So we built neural nets to detect cancer spreading to adjacent lymph nodes. It's early days, but our neural nets show a much higher degree of accuracy: 89 percent, compared to 73 percent. There are important caveats (we also have higher numbers of false positives), but already, getting this in the hands of pathologists, they can improve diagnosis."

Another example is Apple's recent acquisition of the AI company Lattice, which has a background in developing algorithms for healthcare applications.

Microsoft, too, is wading into the space. Just a couple of months ago, the company launched the Healthcare NExT initiative, which brings together artificial intelligence, cloud computing, research and industry partnerships. The initiative includes projects focused on genomics analysis and health chatbot technology, and a partnership with the University of Pittsburgh Medical Center. A couple of weeks ago, Microsoft partnered with data connectivity platform provider Validic to add patient engagement to their HealthVault Insights research project.

We've seen AI in various forms in lots of startups, too, from Ginger.io's behavioral health monitoring and analytics platform and Sensely's virtual assistants to apps and wearables, from companies like Ava (which just released research with the University of Zurich) and Clue, that predict fertility windows. Others, like the recently launched Buoy Health, have created medical-specific search engines. Buoy sources from over 18,000 clinical papers, covering 5 million patients and spanning 1,700 conditions. More than a symptom checker, Buoy starts by asking age, sex, and symptoms, then measures the answers against its proprietary, granular data to decide which questions to ask next. Over about two to three minutes, Buoy's questions get more and more specific before offering individuals a list of possible conditions, along with options for what to do next.
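Buoy's actual question-selection logic is proprietary, but triage systems of this kind often pick the next question greedily: ask whichever question best splits the remaining candidate conditions. A purely hypothetical sketch (the condition data and field names here are invented for illustration):

```python
def next_question(conditions, questions):
    """Greedy pick: ask the question whose yes/no answer would split the
    remaining candidate conditions most evenly (closest to half and half).
    `conditions` is a list of dicts with a "symptoms" set; `questions`
    is a list of symptom names we could ask about."""
    def imbalance(question):
        yes = sum(1 for c in conditions if question in c["symptoms"])
        return abs(yes - len(conditions) / 2)
    return min(questions, key=imbalance)
```

Each answer would then eliminate the conditions inconsistent with it, and the loop repeats until the candidate list is short enough to present, which is roughly the narrowing behavior described above.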

Another promising area is medical imaging. In November, Israel-based Zebra Medical Vision, a machine-learning imaging analytics company, announced the launch of a new platform that allows people to upload and receive analysis of their medical scans from anywhere with an internet connection. Zebra launched in 2014 with a mission to teach computers to automatically analyze medical images and diagnose various conditions, from bone health to cardiovascular disease. The company has steadily built up an imaging database, which it is combining with deep learning techniques to develop algorithms that automatically detect and diagnose medical conditions. Another Israeli company with a similar offering is AiDoc, which just raised $7 million.

But no matter how big and powerful the technology company may be, the availability of patient data is what makes the difference between a buzzword and an algorithm that can diagnose or predict outcomes. That's why many companies are still in the training stage.

As Joe Lonsdale, CEO of venture capital firm 8VC, said during the Light Forum at Stanford: "The hard part is creating the data in the first place."

Dr. Maya Peterson, a professor of biostatistics at the University of California Berkeley School of Public Health, offered an even more sober view.

"Relationships [between data] in the real world are complex, and we don't fully understand them," she said during HIMSS' Big Data and Healthcare Analytics Forum in San Francisco this week. "And machine learning is overly ambitious in a way, as we are going into more complex questions. That isn't a good thing."

A good algorithm is hard to build

Machines can only learn from the data provided to them, so researchers, engineers and entrepreneurs alike are busy assembling larger and higher-quality databases.

Last month, Alphabet-owned Verily launched the Project Baseline Study, a collaborative effort with Stanford Medicine and Duke University School of Medicine to amass a large collection of broad phenotypic health data in hopes of developing a well-defined reference of human health. Project Baseline aims to gather data from around 10,000 participants, each of whom will be followed for four years, and will use that data to develop a baseline map of human health as well as to gain insights about the transitions from health to disease. Data will come in a number of forms, including clinical, imaging, self-reported, behavioral, and that from sensors and biospecimen samples. The study's data repository will be built on Google computing infrastructure and hosted on Google Cloud Platform.

"If the government did data quality and data sharing initiatives, it would be a lot different," Andrew Maas, chief scientist at Roam Analytics (a San Francisco-based machine learning analytics platform provider focused on life sciences), said at the Light Forum. "If the private sector wants to do that, and gather data in abundance, that's great. Give us that data and we'll be back and have something amazing in a year. But if data is not collected because people are scared, we can't do anything."

The availability of patient data and computing power means the difference between promises and actual impact. That brings us to IBM Watson Health, which has amassed giant amounts of data via numerous partnerships, feeding the cognitive computing models it claims will unlock vast insights on patient health. With hard evidence yet to fully materialize, public opinion on IBM Watson is split. Some think it is the granddaddy of machine learning.

During the Light Forum, Chris Potts, Stanford University's director of linguistics and computer science as well as the chief scientist at Roam Analytics, said Watson is arguably the most promising in health. Others aren't so sure: Social Capital CEO Chamath Palihapitiya called it "a joke." But, as evidenced by the many collaborations we have reported on, that doesn't seem to be hindering the company's ability to take on new partners. Just last week, IBM joined MAP Health Management to bring its machine learning capabilities to substance abuse disorder treatment, and the research arm of IBM is working with Sutter Health to develop methods to predict heart failure based on under-utilized EHR data.

IBM Watson got its start in 2011, when the machine won a game of Jeopardy!, inspiring the company to get to work putting the technology to use.

"We had to train the technology for the medical domain, and there are many complexities there: it varies by specialty, and that's all different in different parts of the world. We had to train the system to understand the language of medicine," Shiva Kumar, Watson Health's vice president and chief strategy officer, said at the Light Forum. "The first step is natural language processing. Can you know enough to start deriving insights? Can you do that at the point that you engage in dialogue to come up with the best possible answers? Talk to the patient, go a step further, assimilate, continue moving on."

To do that, IBM Watson must tackle the problem of unstructured data, Kumar explained.

"We tend to use the term cognitive computing, because it goes beyond machine learning and deep learning. Being able to derive insights, being able to integrate, and learn. Healthcare is unique; it's highly regulated, and has a ton of data it can't use. And there are many silos," he said. "So it's a place where a lot of technology can improve it. But at the end of the day, success is determined by practitioners."

How to move forward

Many experts predict AI will hit healthcare in waves. Allscripts Analytics Chief Medical Officer Dr. Fatima Paruk told Becker's Hospital Review she foresees the first applications in care management of chronic diseases, followed by developments that leverage the increasing availability of patient-centered health data along with environmental or socioeconomic factors. Next, genetic information, integrated into care management, will make precision medicine a reality.

One of the areas where AI could make the biggest impact is a sector notoriously late to the technology game: pharmaceutical companies. But that's starting to change. During the Light Forum, Jeff Kindler, partner at Lux Capital and former chairman and CEO of Pfizer, called pharma the classic example of the innovator's dilemma, since the industry has never been in a tight enough financial position to be forced to shift its business model. But the potential of AI to speed up the drug development process is too hard to pass up, although it will take more communication between healthcare stakeholders to figure out where to apply it.

"If you talk to payers, and they don't know pharma or big data or artificial intelligence, they think, 'I'm going to get screwed.' So how does this trust gap get crossed?" Kindler said during the Light Forum. "Historically, pharma and device manufacturers were not distinguishing between the two because the data wasn't available; it was like throwing darts. But as AI and machine learning become more robust, you will have a separation between costs of operation and costs that don't matter because they are increasing efficiency."

Efficiency is a key area for drug development, especially in light of shakeups at the FDA that could make AI even more readily impactful.

"I work in an industry where it takes 12 years to launch a product," Judy Sewards, Pfizer's vice president of digital strategy and data innovation, said at the Light Forum. "That's three presidential terms, or three World Cups. Over that time, it takes 1,600 scientists to look at research and 3,600 clinical trials involving thousands of patients. Where we start to think about AI is how can we speed up the process, make it smarter, connect breakthrough medicine and connect patients who need it the most?" What's bringing that to life, Sewards said, is the work Pfizer is doing with IBM Watson on immuno-oncology.

"Some worry that machines or AI will replace scientists or doctors, but it is actually more like they are the ultimate research assistant, or wingman," she said.

Rajeev Ronanki, Deloitte's principal in life sciences and healthcare, told Becker's Hospital Review there needs to be a confluence of three powerful forces to drive the machine learning trend forward: exponential data growth, faster distributed systems, and smarter algorithms that interpret and process that data. When that trifecta comes together, Ronanki forecasts, CIOs can expect returns in the form of cognitive insights to augment human decision-making, AI-based engagement tools, and AI automation within devices and processes to develop deep domain-specific expertise.

"We expect the growth to continue, with spending on machine intelligence expected to rise to $31.3 billion," Ronanki told Becker's, citing an IDC report.

"Where we are today is ground zero, basically," Roam Analytics CEO and cofounder Alex Turkeltaub said during the Light Forum. "We're more or less figuring out the commercial pathway, and at best using master's-level statistics, no more than that, because it's hard to put data together and deal with regulation. Most of even the most cutting-edge deep learning algorithms were developed in the '60s, based on ideas from the 1600s. We've got to figure out a better way."

Especially since, as Pfizer's Judy Sewards pointed out: "In our industry, you need to be 100 percent. Error is someone's life."


The First AI-Generated Paint Names Include ‘Homestar Brown’ and ‘Stanky Bean’ – Gizmodo

Posted: at 6:50 am


Humans aren't nearly as creative as we think. Craft brewers, for example, have run out of fun names and are sending each other cease-and-desist letters for coming up with the same ideas. So, what if we let computers come up with new names for us?

That's the problem optics industry research scientist Janelle Shane has been trying to solve using neural networks, but with paint colors. The initial results are downright ridiculous, like "Stanky Bean" and "Sindis Poop."

By "problem," I actually mean she was just trying to have a good time online. "What inspired me was I found a post online from someone who'd done neural network cookbook recipes," she told Gizmodo. "I thought they were hilarious and I wanted more, but there weren't any more. The only way to get more was to make more."

Neural networks are essentially computer systems that can be trained on large datasets to solve problems like speech or pattern recognition. Shane analyzed a list of 7,700 paint colors from Sherwin-Williams with a neural network called char-rnn, including both the paint names and their red, green, and blue values.

Once a neural network is trained, it can learn to find the next logical thing based on an input, which is how we ended up with those strange dog pictures last year. In this case, the neural network starts with a letter, then picks the next most logical letter (or a letter further down the list, depending on the creativity setting) to create pronounceable words. It's like a child learning to speak, if its parents only spoke about paint colors.
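For the curious, that "creativity setting" is usually a sampling temperature: before a letter is drawn, the network's raw output scores are rescaled, so higher temperatures let less likely letters through. A minimal, self-contained sketch of just that sampling step (the scores here are stand-ins; a real char-rnn would produce one score per character in its alphabet):

```python
import math
import random

def sample_char(logits, temperature=1.0):
    """Draw the index of the next character from raw scores (logits).
    Low temperature: the top-scoring character wins almost every time.
    High temperature: the distribution flattens, output drifts to gibberish."""
    scaled = [score / temperature for score in logits]
    top = max(scaled)
    # Softmax, shifted by the max for numerical stability.
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Roulette-wheel selection over the resulting probabilities.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At near-zero temperature the model always picks its safest letter; crank the temperature up and you get exactly the kind of giddy nonsense Shane's most creative checkpoints produced.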

Shane had the network spit out names during checkpoints as it was learning, at varying levels of creativity. Naturally, its most creative setting eventually started spitting out gibberish.

(I am having trouble breathing from how hard I am laughing right now.)

But eventually, it learned to make some really wacky paint names.

"I'm sure Hurky wasn't in the original dataset," said Shane. "But somehow it's come up with that."

Shane previously trained a neural network to come up with new recipe names, creating some of the funniest combinations of words imaginable.

"It's tempting to correct the spelling if it almost spells a word, but somehow that takes the fun out of it," said Shane. "This is as it comes out of the computer; I'm not changing a thing."

Shane's just doing this for fun, but here's the link to char-rnn if you've got your own ideas.

[Postcards from the Frontiers of Science]


Now artificial intelligence is inventing sounds that have never been heard before – ScienceAlert

Posted: at 6:50 am

As well as beating us at board games, driving cars, and spotting cancer, artificial intelligence is now generating brand new sounds that have never been heard before, thanks to some advanced maths combined with samples from real instruments.

Before long, you might hear some of these fresh sounds pumping out of your radio, as the researchers responsible say they're hoping to give musicians an almost limitless new range of computer-generated instruments to work with.

The new system is called NSynth, and it's been developed by an engineering team called Google Magenta, a small part of Google's larger push into artificial intelligence.

"Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer," explains the team.

You can check out a couple of NSynth samples, courtesy of Wired.

NSynth takes samples from about a thousand different instruments and blends two of them together, but in a highly sophisticated way. First, the AI program learns to identify the audible characteristics of each instrument so they can be reproduced.

That detailed knowledge is then used to produce a mix of instruments that doesn't sound like a mix of instruments: the properties of the audio are adjusted to create something that sounds like a single, new instrument rather than a mash of multiple sounds.

So instead of having a flute and violin play together, you've got a brand new, algorithm-driven digital instrument somewhere between the two. How much of the flute and how much of the violin are in the final sound is up to the musician.
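Conceptually, the blending happens in the model's learned representation of each instrument rather than in the audio itself. As a loose illustration only (NSynth's real encoder and decoder are deep networks; the latent codes below are made up), a musician's flute/violin dial might boil down to interpolating between two codes:

```python
def blend_latents(code_a, code_b, mix=0.5):
    """Linearly interpolate two latent codes, element by element.
    mix=0.0 returns code_a unchanged; mix=1.0 returns code_b;
    values in between yield a code 'somewhere between' the two instruments."""
    if len(code_a) != len(code_b):
        raise ValueError("latent codes must have the same length")
    return [(1.0 - mix) * a + mix * b for a, b in zip(code_a, code_b)]
```

The interpolation itself really is this simple; the hard, learned part is the decoder that turns the blended code back into audio that sounds like one coherent instrument.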

Like many of Google's AI initiatives, NSynth relies on deep learning: a specific approach to AI where vast amounts of data can be processed in a similar way to the human brain, which is why these systems are often described as artificial neural networks.

So not only can deep learning systems use millions of cat pictures to correctly identify a cat, for instance, they can also learn from their mistakes and get better over time, teaching themselves how to improve just like our brains do.

The idea of deep learning has been around for decades, but we're only now seeing the kind of software and computing power appear that can make it a reality.

One consequence of that is that the NSynth demos built by the Google Magenta team all work in real time, allowing new compositions to be created.

Music critic Marc Weidenbaum told Wired that Google's new approach to the traditional trick of combining instruments together shows promise.

"Artistically, it could yield some cool stuff, and because it's Google, people will follow their lead," he said.

Google engineers have just been demoing NSynth at the Moogfest festival, and you can read a paper on their work at arXiv.


How Artificial Intelligence will impact professional writing – TNW

Posted: at 6:50 am

Professional writing isn't easy. As a blogger, journalist or reporter, you have to meet several challenges to stay at the top of your trade. You have to stay up to date with the latest developments and at the same time write timely, compelling and unique content.

The same goes for scientists, researchers and analysts and other professionals whose job involves a lot of writing.

With the deluge of information being published on the web every day, things aren't getting easier. You have to juggle speed, style, quality and content simultaneously if you want to succeed in reaching your audience.

Fortunately, Artificial Intelligence, which is fast permeating every aspect of human life, has a few tricks up its sleeve to boost the efforts of professional writers.

In 2014, George R. R. Martin, the acclaimed writer of the A Song of Ice and Fire saga, explained in an interview how he avoids modern word processors because of their pesky autocorrect and spell checkers.

Software vendors have always tried to assist writers by adding proofreading features to their tools. But as writers like Martin will attest, those efforts can be a nuisance to anyone with more-than-moderate writing skills.

However, that is changing as AI is getting better at understanding the context and intent of written text. One example is Microsoft Word's new Editor feature, a tool that uses AI to provide more than simple proofreading.

Editor can understand different nuances in your prose much better than code-and-logic tools do. It flags not only grammatical errors and style mistakes, but also the use of unnecessarily complex words and overused terms. For instance, it knows when you're using the word "really" to emphasize a point or to pose a question.

It also gives eloquent descriptions of its decisions and provides smart suggestions when it deems something incorrect. For example, if it marks a sentence as passive, it will provide a reworded version in active voice.

Editor has been well received by professional writers (passive voice intended), though it's still far from perfect.

Nonetheless, AI-powered writing assistance is fast becoming a competitive market. Grammarly, a freemium grammar checker that installs as a browser extension, uses AI to help with all writing tasks on the web. Atomic Reach is another player in the space, using machine learning to provide feedback on the readability of written content.

Writing good content relies on good reading. I usually like to go through different articles describing conflicting opinions about a topic before I fire up my word processor. The problem is there's so much material and so little time to read all of it. And things tend to get tedious when you're trying to find key highlights and differences between articles written about a similar topic.

Now, artificial intelligence is making inroads in the field by providing smart summaries of documents. An AI algorithm developed by researchers at Salesforce generates snippets of text that describe the essence of long text. Though tools for summarizing text have existed for a while, Salesforce's solution surpasses others by using machine learning. The system uses a combination of supervised and reinforcement learning, getting help from human trainers and learning to summarize on its own.
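Salesforce's abstractive model writes new sentences, which is well beyond a toy example. But the older, simpler extractive family of summarizers, which merely selects the most representative sentences from the source, can be sketched in a few lines. This is a bare frequency-scoring baseline for illustration, not anyone's production system:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score each sentence by the corpus frequency of the words it contains,
    then return the top-scoring sentences in their original order.
    A crude extractive baseline, nothing like an abstractive neural model."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    scored = [
        (sum(freq[w] for w in re.findall(r"[a-z]+", s.lower())), i, s)
        for i, s in enumerate(sentences)
    ]
    top = sorted(scored, reverse=True)[:n_sentences]
    return " ".join(s for _, _, s in sorted(top, key=lambda item: item[1]))
```

Even this naive approach surfaces sentences dense with a document's recurring vocabulary, which hints at why scoring-and-selecting was the dominant technique before learned abstractive systems arrived.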

Other algorithms, such as Algorithmia's Summarizer, provide developers with libraries that easily integrate text summary capabilities into their software.

These tools can help writers skim through a lot of articles and find relevant topics to write about. They can also help editors read through the tons of emails, pitches and press releases they receive every day. This way, they'll be better positioned to decide which emails need further attention. Having hundreds of unread emails in my inbox, I fully appreciate the value this can have.

Advances in Natural Language Processing have contributed widely to this trend. NLP helps machines understand the general meaning of text and relations between different elements and entities.

To be fair, nothing short of human-level intelligence can have the commonsense knowledge and mastery of language required to provide a flawless summary of all text. The technology still has more than a few kinks to iron out, but it shows a glimpse of what the future of reading might look like.

No matter how high-quality and relevant your content is, it'll be of no use if you can't reach the right audience. Unfortunately, old keyword-based search algorithms pushed online writers toward stuffing their content with keywords in order to increase their relevance for search engine crawlers.

"Although with PageRank, Google did a great job in organizing the web, it also created a web where keywords ruled over content," says Gennaro Cuofano, growth hacker at WordLift, a company that develops tools for the semantic web. "Eventually, web writers ended up spending a significant amount of time improving findability." The trend resulted in poor-quality writing getting higher search rankings.

But thanks to Artificial Intelligence, search engines are able to parse and understand content, and the rules of search engine optimization have changed immensely in past years.

"Since new semantic technologies are now mature enough to read human language, journalists and professional writers can finally go back to writing for people," Cuofano says. This means you can expect more quality content to appear both on websites and in search engine results.

Where do we go from here? "The next revolution (which is already coming) is the leap from NLP to a subset of it called NLU (natural language understanding)," Cuofano says. "In fact, while NLP is more about giving structure to data, defining it and making it readable by machines, NLU is about taking unclear, unstructured and undefined inputs and transforming them into an output that is close to human understanding."

We're already seeing glimmers of this next generation in AI-powered journalism. The technology is still in its infancy, but will not remain so indefinitely. Writing could someday become a full-time machine occupation, just like many other tasks that were believed in the past to be the exclusive domain of human intelligence.

How does this affect writing? "Currently, the web is a place where how-to articles, tutorials and guides are dominant," Cuofano says. "This makes sense in an era where people are still in charge of most tasks. Yet in a future where AI takes over, wouldn't it make more and more sense to write about why we do things? Thus, instead of focusing on content that has a short shelf life, we can focus again on content that has the capability to outlive us."


Techstars: How Artificial Intelligence Can Make The Music Industry … – Forbes

Posted: at 6:50 am


New darling tech startups are transforming the music value chain to the industry's benefit.


How artificial intelligence might help achieve the SDGs – Devex

Posted: at 6:50 am

Sid Dixit of Planet presents at Train AI in San Francisco. Photo by: Catherine Cheney / Devex

"Machine learning is the ultimate way to go faster," said Peter Norvig, director of research at Google, as he showed a slide image of a race car to a crowd of professionals gathered to learn more about artificial intelligence.

But speed can also lead to accidents, Norvig warned, clicking to another slide with an image of a dramatic crash. Norvig, who is also the author of a leading textbook on artificial intelligence, or AI, was speaking at Train AI, a conference hosted by CrowdFlower in San Francisco, California, this week.

"In every industry, there's a place where AI can make things better," Norvig told Devex. "Look at all of the AI technologies, and all problems, and it's just a question of fitting them together and figuring out, what's the right technological match and what's the right policy match?"

Machine intelligence will have profound implications for the development sector. "AI is a way to understand data," Norvig continued, and the global development community will be unable to understand and act on information coming in from cell phones to satellites without both human and machine intelligence.

Norvig will also speak at the upcoming AI for Global Good summit hosted at the International Telecommunication Union in Geneva, Switzerland. Gatherings such as these help technologists connect solutions to problems, he said. From Silicon Valley to Hyderabad, India, where the ninth annual Information and Communications Technology for Development, or ICT4D, conference has just taken place, there is growing interest in bringing the technology community together with the global development community in order to leverage AI to achieve the Sustainable Development Goals.

"Every day we see a new news report on how AI is changing the future of every part of society," said Robin Bordoli, chief executive officer of CrowdFlower, which describes itself as an essential platform for data science. "Despite some of the concerns around job loss, we believe in the power of AI to create positive change at all levels of society."

That is the thinking behind the launch of CrowdFlower's AI for Everyone challenge to put the power of AI into the hands of people who want to use machine intelligence to solve social problems. The company expects to see applications addressing global challenges in areas such as health care, food and nutrition, and climate change, and it is just the latest in a number of similar competitions.

Earlier this month, XPRIZE, which puts on competitions together with partners, announced the 147 teams from 22 countries that would advance in the $5 million IBM Watson AI XPRIZE. The teams entering this global competition are working to develop AI applications that demonstrate how humans, together with AI, can tackle global challenges. Examples include Harvesting, a global intelligence platform for agriculture.


And the Digital Financial Services Innovation Lab, an early-stage incubator for entrepreneurs building financial technology companies in developing countries, has open challenges for biometrics and chatbots, with a deadline of May 30. DFS Lab is housed within Caribou Digital, a research and delivery consultancy in Seattle, Washington. The Bill & Melinda Gates Foundation funded the incubator to engage top scientists and engineers in challenges such as these around boosting financial inclusion, said Jake Kendall, director of the DFS Lab, who was formerly on the Financial Services for the Poor team at the Gates Foundation.

"Human interactions are always going to be necessary, but any time you can remove the need for them from a process through automation or NLP [natural language processing] conversational interfaces, that can be a game changer in terms of scalability and efficiency," he told Devex via email. "NLP and bots give people tools to help themselves in the digital realm, which can be really empowering. But there are downsides. With any automated and digital system you really have to make sure you are not shutting out certain people or creating unintended problems for people who can't read, don't have devices, or otherwise are not able to access the new system."
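To make the bot idea concrete, here is a minimal keyword-matching sketch in Python. This is a hypothetical illustration only, not the actual system of Arifu, DFS Lab, or any company mentioned here; the intents, keywords, and tips are invented. Real NLP chatbots use trained language models rather than keyword lists, but the routing idea is the same: classify the user's message into an intent, then answer from content for that intent.

```python
# Hypothetical sketch of a keyword-routed educational chatbot.
# Intents and content are invented for illustration.
INTENTS = {
    "farming": ["crop", "farm", "plant", "harvest"],
    "savings": ["save", "savings", "deposit", "account"],
    "loans": ["loan", "borrow", "credit", "interest"],
}

RESPONSES = {
    "farming": "Tip: rotate crops each season to keep soil healthy.",
    "savings": "Tip: even small, regular deposits build a safety net.",
    "loans": "Tip: compare interest rates before borrowing.",
    None: "Sorry, I didn't understand. Ask about farming, savings, or loans.",
}

def classify(message: str):
    """Return the first intent whose keywords appear in the message."""
    words = message.lower().split()
    for intent, keywords in INTENTS.items():
        if any(kw in words for kw in keywords):
            return intent
    return None  # no intent matched

def reply(message: str) -> str:
    """Look up the canned response for the classified intent."""
    return RESPONSES[classify(message)]
```

The fallback response illustrates Kendall's caution: a deployed system also has to handle the users the automation does not understand.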

In the popular imagination, AI can feel like something that solves the problems of the affluent with products such as Alexa, the Amazon device that allows users to get information, play music, or control their smart homes using their voices. But many experts in the field believe there is a major role for AI in helping achieve the SDGs, and the founder of Arifu is making the case for the role of chatbots and AI in achieving the SDGs at the ICT4D conference. His education technology company, which launched in Kenya in 2015, points to how a chatbot leveraging AI can deliver personalized learning on mobile devices, providing access to information on topics such as farming, entrepreneurship or financial literacy to the world's least served.

"We're moving from the enterprise and the abstract to the consumer and the personal," said Robert Munro, principal product manager at Amazon AI, at Train AI.

What that means for global health, for example, is a shift toward point-of-care tests, even in resource-limited settings, he said.

Munro's talk in San Francisco revisited AI's progress after a series of talks he gave five years ago called "Where's My Talking Robot?" AI is now making more decisions in our lives than most people realize, he said. It is making us smarter, choosing our friends, selecting our news, aiding our health, moving us around, and protecting our security.

For example, he mentioned how the first alert for the swine flu outbreak in Mexico came from an AI system reading reports about potential disease outbreaks.

However, at a conference covering high-definition mapping, AI and medicine, and deep learning, examples of applications of machine learning in developing countries were few and far between.

Lukas Biewald, founder & executive chairman of CrowdFlower, did talk about how one of his clients is using drones for conservation.

And in a series of presentations from CrowdFlower customers, Sid Dixit, director of product program management at Planet, talked about how AI combined with millions of images from its small satellites can determine the health of forests and water resources, and monitor harvests and agriculture everywhere.

Anthony Goldbloom, CEO of Kaggle, talked about how Genentech, a biotechnology corporation, opened a challenge on his platform for machine learning competitions to predict which women would not be screened on schedule for cervical cancer, a largely preventable disease that several leaders in the global health community, including PATH in Seattle, Washington, say needs more attention in developing countries.
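The shape of such an entry can be sketched in a few lines. This is a hypothetical toy, not Genentech's challenge data or any entrant's model; the feature names and weights are invented, and a real Kaggle entry would learn its weights from labeled records. The point is the output: a risk score per patient, so outreach can be ranked toward those most likely to miss a screening.

```python
# Hypothetical risk-scoring sketch; features and weights are invented.
def risk_score(patient: dict) -> float:
    """Toy hand-weighted score of missing a screening; a real model
    would learn these weights from historical records."""
    score = 0.0
    score += 0.4 if patient["missed_last_visit"] else 0.0
    score += 0.3 if patient["distance_km"] > 20 else 0.0
    score += 0.2 if not patient["has_insurance"] else 0.0
    return score

def outreach_order(patients: list) -> list:
    """Rank patients so the highest-risk are contacted first."""
    return sorted(patients, key=risk_score, reverse=True)
```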

But the list of examples of applications of AI to the SDGs continues to grow. This week, at ICT4D, the international agriculture research consortium known as CGIAR launched a platform for big data in agriculture. It unites agricultural research institutes and companies with the goal of closing the digital divide between farmers in developed and developing countries. Amazon will bring its cloud computing and data processing capabilities; IBM, creator of the Watson artificial intelligence system, will bring its data analysis; and PepsiCo will bring its use of big data to manage supply chains.

One of the lines that came up at the AI conference in San Francisco was a recent comment by physicist Stephen Hawking, who said that this technology "will be either the best, or the worst thing, ever to happen to humanity."

Silicon Valley is behind a number of initiatives working to ensure that AI benefits humanity, including OpenAI, a nonprofit AI research company founded to ensure that the benefits of machine learning are as widely and evenly distributed as possible.

And increasingly, forward-looking thinkers in the global development community are presenting themselves as natural partners in these efforts, as the ITU has done in organizing the AI for Global Good Summit together with XPRIZE and other United Nations agencies.

"As the U.N. specialized agency for information and communication technologies, ITU aims to guide AI innovation towards the achievement of the U.N. SDGs," ITU Secretary-General Houlin Zhao said of the event, which kicks off June 7. "We are providing a neutral platform for international dialogue to build a common understanding of the capabilities of emerging AI technologies."

The ITU's most recent magazine is entirely focused on how AI can boost sustainable global development.

The biggest risk posed by the rise of AI is not so much the singularity, in which machine intelligence matches and then surpasses human intelligence, but wasted projects and dollars, said two venture capitalists from Bloomberg Beta, which makes early-stage investments in artificial intelligence startups. Echoing some of the points made by Norvig of Google, they said the key is to use AI to solve real problems. Of course, global development professionals are working on complex problems that might appeal to machine learning experts looking to use their skills for good, which is why any effort to ensure AI benefits humanity might consider bringing these communities together.

Read more international development news online, and subscribe to The Development Newswire to receive the latest from the world's leading donors and decision-makers, emailed to you free every business day.

Link:

How artificial intelligence might help achieve the SDGs - Devex

Posted in Artificial Intelligence | Comments Off on How artificial intelligence might help achieve the SDGs – Devex

Artificial intelligence is getting more powerful, and it’s about to be everywhere – Vox

Posted: at 6:50 am

There wasn't any one big product announcement at the Google I/O keynote on Wednesday, the annual event where thousands of programmers meet to learn about Google's software platforms. Instead, it was a steady trickle of incremental improvements across Google's product portfolio. And almost all of the improvements were driven by breakthroughs in artificial intelligence: the software's growing ability to understand complex nuances of the world around it.

Companies have been hyping artificial intelligence for so long, and often delivering such mediocre results, that it's easy to tune it out. AI is also easy to underestimate because it's often used to add value to existing products rather than to create new ones.

But even if you've dismissed AI technology in the past, there are two big reasons to start taking it seriously. First, the software really is getting better at a remarkable pace. Problems that artificial intelligence researchers struggled with for decades are suddenly getting solved.

"Our software is going to get superpowers thanks to AI," says Frank Chen, a partner at the venture capital firm Andreessen Horowitz. "Computer programs will be able to do things that we thought were human-only activities: recognizing what's in a picture, telling when someone's going to get mad, summarizing documents."

But more importantly, Chen says, AI capabilities are about to be everywhere. Until recently, big companies focused on adding AI capabilities to their own products: think of your smartphone transcribing your voice and Facebook identifying the faces in your photos. But now big companies are starting to open up their powerful AI capabilities to third-party developers.

And often, this is the moment when a new technology has a really big impact. The iPhone didn't become truly revolutionary until Apple created the App Store, allowing third parties to create apps like Uber and Instagram. Soon every company and every ambitious kid in a dorm room is going to have access to the same powerful AI tools as the world's leading technology companies.

Primitive forms of AI have been around for a long time. Back in the 1990s, for example, you could get voice-to-text software that would transcribe your words into a word processor.

But these products used to be terrible. Speech-to-text software would make so many errors that it wasn't much faster than typing a document on a keyboard. The handwriting recognition feature on Apple's 1990s tablet computer, the Newton, was so bad it became a punchline. As recently as the early 2010s, I remember the voice-to-text feature of my smartphone making a lot of mistakes.

Then AI technology suddenly started working better. A couple of years ago, I noticed that my smartphone hardly ever made mistakes. Photo apps from Apple, Google, and Facebook got good at recognizing faces. In his Wednesday keynote, Google CEO Sundar Pichai offered some data on just how rapid this progress has been:

This data illustrates how good Google's smart speaker, Google Home, is at understanding user speech in a noisy room. In less than a year, the error rate has fallen by almost half.

Touting this rapid progress in voice recognition, Pichai told an audience of hundreds of developers that the pace even since last year has been pretty amazing to see.

And there are more impressive breakthroughs coming up. For example, Pichai said, suppose you took this photo of your daughter playing baseball:

Pichai says that youll soon be able to use Google technology to remove the chain-link fence, producing a photo that looks like this:

The two-hour keynote featured demonstrations from across Google's product portfolio, from Android to YouTube. And seemingly every product had a significant AI-based improvement.

Google's photo app will soon be able to recognize your best photos, figure out who is in them, and then offer to send copies to the people in the photos with one click.

Google Home is getting smart enough to distinguish between different users in a household. If you say, "Call Mom," Google's software will be smart enough to know, just based on your voice, to call your own mother and not your spouse's mother.

The machine learning algorithms that underpin the AI revolution place extreme demands on conventional computing hardware. At last year's Google I/O, the search giant announced that it had designed a custom chip called a tensor processing unit for machine learning applications. Tests show that these chips can execute machine learning code up to 30 times faster than conventional computer chips.

Over the past year, Google has installed racks and racks of these chips in its vaunted data centers to support the growing AI capabilities of various Google products. On Wednesday, Google announced that it will soon be opening up these chips for anyone to use as part of Google's cloud computing platform. Google has already released its powerful machine learning software, called TensorFlow, as an open source project so that anyone can use it.
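For a sense of the kind of computation TensorFlow and the tensor processing units accelerate, here is a toy sketch in plain Python (deliberately with no TensorFlow dependency): fitting a model's weights by gradient descent. Real training jobs run this same loop over millions of parameters and images rather than two weights and four points, which is why custom hardware matters.

```python
# Toy gradient descent: fit y = w*x + b by minimizing mean squared error.
# Plain-Python illustration of the workload ML frameworks accelerate.
def fit_line(xs, ys, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step the weights against the gradient.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On the points (0, 1), (1, 3), (2, 5), (3, 7) the loop converges to roughly w = 2 and b = 1, the line that generated them.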

Google isn't just being nice, of course. The larger goal is to establish Google's AI platform as the industry standard that thousands of other companies rely on for their own AI software. Once you build software on top of one platform, it's very expensive to switch, so becoming an industry standard could make Google billions in the coming years.

Of course, Google's rivals aren't going to accept this without a fight. Amazon currently leads the cloud computing market with its Amazon Web Services, and it is offering developers a rival suite of machine learning tools. Microsoft offers machine learning tools on its own Azure cloud computing platform.

Consumers don't care which tech giant's cloud computing platform powers their favorite app or website. But this platform war will have big indirect benefits for consumers, because in their rush to win the cloud computing war, these technology giants are making more and more powerful AI capabilities available to anyone who wants to use them. That means we're about to see an explosion of experimentation with AI capabilities.

Google showed off a small example of what this might look like with Google's voice-based assistant. On the I/O stage, a Google executive said, "I'd like delivery from Panera," and this started a conversation with the app that worked a lot like a conversation you'd have with a human Panera cashier. The executive said she wanted to order a sandwich. The virtual assistant asked if she wanted to add a drink. After she chose a drink, the assistant told her the total price and asked if she wanted to place the order.

The remarkable thing about this exchange wasn't so much the ability to carry out a simple conversation, something virtual assistants like Apple's Siri have been able to do for a few years. It's the promise that every retail establishment in America could build a similar capability without having to hire a bunch of computer science PhDs.
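The ordering demo can be sketched as a tiny slot-filling state machine: each user turn fills the next slot (item, then drink) until the order is complete. This is a hypothetical illustration in plain Python, not Google's Assistant API; the menu, prices, and replies are invented. The real system layers speech recognition and natural language understanding on top, but the dialog skeleton is the part a retailer would configure.

```python
# Hypothetical slot-filling order bot; menu and prices are invented.
ITEM_PRICES = {"sandwich": 6.50, "soup": 4.00}
DRINK_PRICES = {"lemonade": 2.00, "coffee": 2.50}

class OrderBot:
    def __init__(self):
        self.item = None   # first slot to fill
        self.drink = None  # second slot to fill

    def respond(self, user_turn: str) -> str:
        text = user_turn.lower()
        if self.item is None:
            # Fill the item slot by matching a menu word in the turn.
            for item in ITEM_PRICES:
                if item in text:
                    self.item = item
                    return "Would you like to add a drink?"
            return "We have sandwich or soup. What would you like?"
        if self.drink is None:
            # Fill the drink slot, then quote the total.
            for drink in DRINK_PRICES:
                if drink in text:
                    self.drink = drink
                    total = ITEM_PRICES[self.item] + DRINK_PRICES[drink]
                    return f"Your total is ${total:.2f}. Place the order?"
            return "We have lemonade or coffee."
        return "Order placed. Thank you!"
```

A retailer swapping in its own menu dictionary gets a working order flow, which is the promise the keynote was making.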

Google's promise is that creating this kind of sophisticated AI experience will soon be as simple as building a website or a smartphone app is today. Google's own engineers will do most of the hard work, creating powerful tools that allow non-software companies to build services that would have been beyond the reach of even the most sophisticated technology companies a decade ago. It might take a few years for this vision to be realized (the first websites and smartphone apps were often terrible), but eventually customers will expect every app to offer these kinds of capabilities.

At the same time, more sophisticated developers will be able to use the tools provided by Google, Amazon, Microsoft, and their competitors to push the envelope even further. Chen believes that machine learning techniques will lead to improvements in medical care, for example by helping radiologists identify cancerous cells. In the past, you needed a huge team of AI experts to even attempt to build something like this. Today, the basic tools are within reach of high school kids. It's a safe bet that this will spawn totally new kinds of apps, just as the invention of the smartphone made Uber possible.

Disclosure: My brother works at Google.

View original post here:

Artificial intelligence is getting more powerful, and it's about to be everywhere - Vox

Posted in Artificial Intelligence | Comments Off on Artificial intelligence is getting more powerful, and it’s about to be everywhere – Vox

Qualified humans must not fear bots: Adobe on Artificial Intelligence – Business Standard

Posted: at 6:50 am

Saying AI will take over the creativity of humans is not right: Adobe

IANS | New Delhi May 20, 2017 Last Updated at 14:09 IST

As artificial intelligence (AI)-powered smart devices and solutions gather momentum globally amid fears of "bots" taking over jobs soon, a top Adobe executive has allayed such fears, saying AI will actually assist people intelligently.

"Saying AI will take over the creativity of humans is not right. It will take away a lot of stuff that you have to do in a mundane way. A human mind is a lot more creative than a machine," Shanmugh Natarajan, Executive Director and Vice President (Products) at Adobe, told IANS in an interview.

"With AI, we are trying to make the work easier. It is not like self-driving cars where your driver is getting replaced. I think creativity is going to stay for a long time," Natarajan added.

Market research firm Gartner recently said that CIOs will have a major role to play in preparing businesses for the impact that AI will have on business strategy and human employment.

Global enterprises like Adobe are now betting on India to boost AI in diverse sectors across the country.

The company has a massive set-up in India, with over 5,200 employees spread across four campuses in Noida and Bengaluru and its R&D labs claim a significant share of global innovations.

According to Natarajan, a lot of work related to AI, machine learning and Internet of Things (IoT) is being done in Adobe's India R&D Labs.

"The way we have structured our India labs is very similar to how larger companies have structured it. There are separate lab initiatives and areas, including digital media, creative lab, Big Data and marketing-related labs and, obviously, document is a big part and we have labs associated with it as well," the executive said.

"With the Cloud platform, we are trying to provide a framework where people with the domain expertise can come and set their data and machine learning algorithms in play and then train the systems and let the systems learn," Natarajan explained.

Speaking on the significance of India R&D labs, Natarajan said earlier the R&D labs were focused on North America where scientists used to come in from esteemed universities.

With India becoming a crucial market for research and development, Adobe started its data labs in Bengaluru under the leadership of Shriram Revankar.

"Nearly 30 per cent of our total R&D staff is here. Apart from other works, we file patents. Every year, Adobe India has been filing nearly 100 patents from a global perspective. We have eight patents coming in soon," Natarajan told IANS.

Interestingly, a big part of "Adobe Sensei" -- a new framework and set of intelligent services that use deep learning and AI to tackle complex experience challenges -- was developed in India.

On why there is a technology gap between India and other developed economies in the use of concepts like AI, machine learning and IoT, Natarajan said that people underestimate the country.

"The transitions and generational things might not be at the same level and sophistication, or the pace as compared to other countries, but here, the changes are dramatic," Natarajan told IANS.

"Everyone has a smartphone now and people have figured out that they can speak to their smartphones and retrieve data. The data may be small as compared to 100 trillion that Adobe gets, but it is a Cloud and IoT device. People are interacting with them and machine is learning from this," the executive noted.

IANS


See original here:

Qualified humans must not fear bots: Adobe on Artificial Intelligence - Business Standard

Posted in Artificial Intelligence | Comments Off on Qualified humans must not fear bots: Adobe on Artificial Intelligence – Business Standard

Homoeopathy, alternative medicine systems important: President – ETHealthworld.com

Posted: at 6:48 am

Kolkata: President Pranab Mukherjee on Friday highlighted the importance of homoeopathy, saying it is more cost-effective than modern allopathic treatment and does not have side effects.

Attending the sixth Dr Malati Allen Nobel Award ceremony here, he conferred the Dr Sarkar Allen Swamiji Award for lifetime achievement on Dr Shubhendu Bhattacharya, the world's youngest MRCP Consultant Intermits, Guinness World Record holder and member of the sub-committee for medicine/physiology of the Nobel Foundation, Sweden.

The President also gave away the sixth Dr Malati Allen Nobel Awards to 16 BHMS toppers from various homoeopathy colleges across the country as well as Bangladesh. On May 27, another 86 promising homoeopaths will be conferred this award at the closing ceremony.

Mukherjee appreciated the efforts made by G.P. Sarkar, Managing Trustee of the Malati Allen Charitable Trust and the Sarkar Allen Mahatma Hahnemann and Swamiji Trust, in spreading homoeopathy education and a system of medicine that has emerged as a powerful alternative for healing a number of chronic diseases.

"Homoeopathy is more cost-effective as compared to modern allopathic treatment and does not have side effects," he said.

Citing the contribution of John Martin Honigberger, the Romanian homoeopathic practitioner who cured Maharaja Ranjit Singh after arriving in Lahore during 1829-30, he recalled how Honigberger introduced the name of Samuel Hahnemann and his healing art to India.

Mukherjee, in this context, also spoke about the efforts of Satish Kumar Samanta, the freedom fighter and MP from Tamluk, in introducing homoeopathy widely in West Bengal as an alternative way of treating ailments.

Even in Rashtrapati Bhavan, homoeopathy has been introduced as an alternative medicine for curing chronic ailments, along with unani and siddha, he said.

--IANS

sgh/vd

Original post:

Homoeopathy, alternative medicine systems important: President - ETHealthworld.com

Posted in Alternative Medicine | Comments Off on Homoeopathy, alternative medicine systems important: President – ETHealthworld.com

Arimidex joint – Arimidex alternative medicine – Why does arimidex cause sore throat – The Independent News

Posted: at 6:48 am



See the original post:

Arimidex joint - Arimidex alternative medicine - Why does arimidex cause sore throat - The Independent News

Posted in Alternative Medicine | Comments Off on Arimidex joint – Arimidex alternative medicine – Why does arimidex cause sore throat – The Independent News