EICC wins $780,000 grant for ag, water virtual reality – Quad-Cities Online

Posted: July 7, 2017 at 2:13 am

DAVENPORT – Eastern Iowa Community Colleges has received official notice that it will receive a $748,218 grant from the National Science Foundation through June 2020.

Administered by EICC's Advanced Technology Environmental and Energy Center, the money is for a project titled "Water Intense: Interactive Technology Education."

"The main focus of the grant will be developing a virtual reality education curriculum for water, wastewater and agriculture technologies and conservation," said Ellen Kabat Lensch, EICC vice chancellor for workforce and economic development. "Once completed, we will share that curriculum with two-year colleges across the nation."

This is not the first time EICC has received grants for similar work. It recently completed an extensive curriculum development project in the advanced manufacturing field.

"Through our ATEEC program we have been developing curriculum for both colleges and high schools, in many different subject areas, for at least two decades," said Kabat Lensch. "We're very proud to have the National Science Foundation and others look to us for this work."

In this project, EICC will be working with its partners at the virtual reality company EON Reality. EON is an international leader in virtual and augmented reality with a global presence in the U.S., Sweden, Singapore and England.

The college began offering a virtual reality training academy for students last year. The 11-month program provides the training students need to begin careers developing virtual reality training tools for manufacturing, health and other industries.

This project specifically focuses on water and wastewater technician jobs that are growing faster than average. It comes amid concerns about source water availability, aging infrastructure, water quality and workforce issues.

Technology training in the water/wastewater and agriculture fields requires equipment that is often prohibitively expensive, time-consuming to use and constrained by safety concerns. That often makes it impossible for colleges to give students hands-on training on real equipment.

Additionally, educators in these fields often lack instructional methods that allow for hands-on training, and even when it is available, it is cost-prohibitive.

The EICC project is designed to help make training more affordable by creating a curriculum for virtual reality-based training. With the proper equipment, students can practice the essential hands-on skills they need repeatedly without having to turn to potentially expensive, and sometimes hazardous, options out in the field.

Over time, as technologies change, the virtual reality programs can be adapted.

For more details, visit eicc.edu/ateec or eicc.edu/eon.

This startup is building AI to bet on soccer games – The Verge

Posted: at 2:13 am

Listen to Andreas Koukorinis, founder of UK sports betting company Stratagem, and you'd be forgiven for thinking that soccer games are some of the most predictable events on Earth. "They're short duration, repeatable, with fixed rules," Koukorinis tells The Verge. "So if you observe 100,000 games, there are patterns there you can take out."

The mission of Koukorinis' company is simple: find these patterns and make money off them. Stratagem does this either by selling the data it collects to professional gamblers and bookmakers, or by keeping it and making its own wagers. To fund these wagers, the firm is raising money for a £25 million ($32 million) sports betting fund that it's positioning as an investment alternative to traditional hedge funds. In other words, Stratagem hopes rich people will give it their money. The company will gamble with it using its proprietary data, and, if all goes to plan, everyone ends up just that little bit richer.

It's a familiar story, but Stratagem is adding a little something extra to sweeten the pot: artificial intelligence.

At the moment, the company uses teams of human analysts spread out around the globe to report back on the various sporting leagues it bets on. This information is combined with detailed data about the odds available from various bookmakers to give Stratagem an edge over the average punter. But, in the future, it wants computers to do the analysis for it. It already uses machine learning to analyze some of its data (working out the best time to place a bet, for example), but it's also developing AI tools that can analyze sporting events in real time, drawing out data that will help predict which team will win.

Stratagem is using deep neural networks to achieve this task, the same technology that's enchanted Silicon Valley's biggest firms. It's a good fit, since this is a tool that's well-suited to analyzing vast pots of data. As Koukorinis points out, when analyzing sports, there's a hell of a lot of data to learn from. The company's software is currently absorbing thousands of hours of sporting fixtures to teach it patterns of failure and success, and the end goal is to create an AI that can watch a half-dozen different sporting events simultaneously on live TV, extracting insights as it does.

Stratagem's AI identifies players to make a 2D map of the game

At the moment, though, Stratagem is starting small. It's focusing on just a few sports (soccer, basketball, and tennis) and a few metrics (like goal chances in soccer). At the company's London offices, home to around 30 employees including ex-bankers and programmers, we're shown the fledgling neural nets for soccer games in action. On-screen, the output is similar to what you might see from the live feed of a self-driving car. But instead of the computer highlighting stop signs and pedestrians as it scans the road ahead, it's drawing a box around Zlatan Ibrahimović as he charges at the goal, dragging defenders in his wake.

Stratagem's AI makes its calculations watching a standard broadcast feed of the match. (Pro: it's readily accessible. Con: it has to learn not to analyze the replays.) It tracks the ball and the players, identifying which team they're on based on the color of their kits. The lines of the pitch are also highlighted, and all this data is transformed into a 2D map of the whole game. From this viewpoint, the software studies matches like an armchair general: it identifies what it thinks are goal-scoring chances, or the moments where the configuration of players looks right for someone to take a shot and score.
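The article doesn't say exactly how pixel detections become that 2D map, but the standard computer-vision technique is a homography: a 3×3 projective transform, estimated from the detected pitch lines, that maps image pixels onto the pitch plane. A minimal sketch with an invented matrix:

```python
def apply_homography(H, point):
    """Map an image pixel (x, y) to pitch-plane coordinates using a 3x3
    homography H. In practice H is estimated from detected pitch lines;
    the matrix below is a toy stand-in."""
    x, y = point
    denom = H[2][0] * x + H[2][1] * y + H[2][2]
    px = (H[0][0] * x + H[0][1] * y + H[0][2]) / denom
    py = (H[1][0] * x + H[1][1] * y + H[1][2]) / denom
    return px, py

# Toy homography: just scales pixels to metres (a real one would also
# correct for the camera's perspective via the bottom row).
H = [[0.1, 0.0, 0.0],
     [0.0, 0.1, 0.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, (250, 400)))  # (25.0, 40.0)
```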

"Football is such a low-scoring game that you need to focus on these sorts of metrics to make predictions," says Koukorinis. "If there's a shot on target from 30 yards with 11 people in front of the striker and that ends in a goal, yes, it looks spectacular on TV, but it's not exciting for us. Because if you repeat it 100 times the outcomes won't be the same. But if you have Lionel Messi running down the pitch and he's one-on-one with the goalie, the conversion rate on that is 80 percent. We look at what created that situation. We try to take the randomness out, and look at how good the teams are at what they're trying to do, which is generate goal-scoring opportunities."
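Koukorinis is describing an expected-goals-style calculation: estimate the conversion rate of each type of chance from many observed games, and judge teams by the chances they create rather than the goals that happen to go in. A toy sketch (the chance types and events are invented for illustration):

```python
from collections import defaultdict

# Toy match events: (chance_type, scored). Stratagem's real taxonomy
# and data are proprietary; these records are made up.
events = [
    ("one_on_one", True), ("one_on_one", True), ("one_on_one", False),
    ("long_shot", False), ("long_shot", False), ("long_shot", True),
    ("long_shot", False), ("one_on_one", True),
]

def conversion_rates(events):
    """Estimate P(goal | chance type) from observed chances."""
    goals, chances = defaultdict(int), defaultdict(int)
    for kind, scored in events:
        chances[kind] += 1
        goals[kind] += scored
    return {kind: goals[kind] / chances[kind] for kind in chances}

rates = conversion_rates(events)
print(rates)  # one-on-ones convert far more often than long shots
```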

Whether or not counting goal-scoring opportunities is the best way to rank teams is difficult to say. Stratagem says it's a metric that's popular with professional gamblers, but they and the company weigh it against a lot of other factors before deciding how to bet. Stratagem also notes that the opportunities identified by its AI don't consistently line up with those spotted by humans. Right now, the computer gets it right about 50 percent of the time. Despite this, the company says its current betting models (which it develops for soccer, but also basketball and tennis) are right more than enough of the time for it to make a steady return, though it won't share precise figures.

A team of 65 analysts collects data around the world

At the moment, Stratagem generates most of its data about goal-scoring opportunities and other metrics the old-fashioned way: using a team of 65 human analysts who write detailed match reports. The company's AI would automate some of this process and speed it up significantly. (Each match report takes about three hours to write.) Some forms of data-gathering would still rely on humans, however.

A key task for the company's agents is finding out a team's starting lineup before it's formally announced. (This is a major driver of pre-game betting odds, says Koukorinis, and knowing in advance helps you beat the market.) Acquiring this sort of information isn't easy. It means finding sources at a club, building up a relationship, and knowing the right people to call on match day. Chatbots just aren't up to the job yet.

Machine vision, though, is really just one element of Stratagem's AI business plan. It already applies machine learning to more mundane facets of betting, like working out the best time to place a bet in any particular market. In this regard, what the company is doing is no different from many other hedge funds, which for decades have been using machine learning to come up with new ways to trade. Most funds blend human analysis with computer expertise, but at least one is run completely by decisions generated by artificial intelligence.

However, simply adding more computers to the mix isn't always a recipe for success. There's data showing that if you want to make the most of your money, it's better to just invest in the top-performing stocks of the S&P 500 than to sign up for an AI hedge fund. That's not the best sign that Stratagem's sports-betting fund will offer good returns, especially when such funds are already controversial.

In 2012, a sports-betting fund set up by UK firm Centaur Holdings collapsed just two years after it launched. It lost $2.5 million after promising investors returns of 15 to 20 percent. To critics, operations like this are just borrowing the trappings of traditional funds to make gambling look more like investing.

"I don't doubt it's great fun... but don't qualify it with the term investment."

David Stevenson, director of finance research company AltFi, told The Verge that there's nothing essentially wrong with these funds, but they need to be thought of as their own category. "I don't particularly doubt it's great fun [to invest in one] if you like sports and a bit of betting," said Stevenson. "But don't qualify it with the term investment, because investment, by its nature, has to be something you can predict over the long run."

Stevenson also notes that the AI hedge funds that are successful (those that torture the math within an inch of its life to eke out small but predictable profits) tend not to seek outside investment at all. They prefer keeping the money to themselves. "I treat most things that combine the acronym AI and the word investing with an enormous dessert spoon of salt," he said.

Whether or not Stratagem's AI can deliver insights that make sporting events as predictable as the tides remains to be seen, but the company's investment in artificial intelligence does have other uses. For starters, it can attract investors and customers looking for an edge in the world of gambling. It can also automate work that's currently done by the company's human employees and make it cheaper. As with other businesses that are using AI, it's these smaller gains that might prove to be most reliable. After all, small, reliable gains make for a good investment.

How AI detectives are cracking open the black box of deep learning – Science Magazine

Posted: at 2:13 am

By Paul Voosen, Jul. 6, 2017, 2:00 PM

Jason Yosinski sits in a small glass box at Uber's San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski's program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It's a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI's individual computational nodes (the neurons, so to speak) to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. "This responds to your face and my face," he says. "It responds to different size faces, different color faces."

No one trained this network to identify faces. Humans weren't labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski's probe had illuminated one small part of it, but overall, it remained opaque. "We build amazing models," he says. "But we don't quite understand them. And every year, this gap is going to get a bit larger."

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That "interpretability problem," as it's known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it "AI neuroscience."

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

GRAPHIC: G. GRULLÓN/SCIENCE

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counterfactual probes. The idea is to vary the inputs to the AI (be they text, images, or anything else) in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro's program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words (or parts of an image or molecular structure, or any other kind of data) most important in the AI's original judgment. The tests might reveal that the word "horrible" was vital to a panning or that "Daniel Day-Lewis" led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network's overall insight.
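LIME itself fits a local surrogate model over thousands of perturbed samples; the core idea can be sketched with a simpler delete-one-word probe against a stand-in classifier. Everything here (the cue words, the review, the scoring) is invented for illustration:

```python
def sentiment_model(words):
    """Stand-in black box: scores a review by counting cue words.
    A real target would be a neural net; this keeps the probe runnable."""
    positive = {"great", "masterful", "brilliant"}
    negative = {"horrible", "dull"}
    return sum((w in positive) - (w in negative) for w in words)

def lime_style_probe(words, model):
    """Delete each word in turn and measure how the score moves.
    Words whose removal changes the output most mattered most."""
    base = model(words)
    impact = {}
    for i, w in enumerate(words):
        variant = words[:i] + words[i + 1:]
        impact[w] = base - model(variant)
    return impact

review = "a masterful film not dull at all".split()
print(lime_style_probe(review, sentiment_model))
```

Here "masterful" gets a positive impact score and "dull" a negative one; the filler words score zero, mirroring how LIME surfaces the words that drove the judgment.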

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, a computer scientist at Google, devised a probe that doesn't require testing the network a thousand times over: a boon if you're trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference (a black image or a zeroed-out array in place of text) and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in (outfitted with the standard medley of mugs, tables, chairs, and computers) as a Google conference room. "I can give a zillion reasons. But say you slowly dim the lights. When the lights become very dim, only the biggest reasons stand out." Those transitions from a blank reference allow Sundararajan to capture more of the network's decisions than Ribeiro's variations do. But deeper, unanswered questions are always there, Sundararajan says, a state of mind familiar to him as a parent. "I have a 4-year-old who continually reminds me of the infinite regress of 'Why?'"
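Sundararajan's published version of this idea is called integrated gradients. The sketch below approximates it numerically for a toy model: walk from an all-zero baseline toward the input in small steps and credit each feature with the output change it causes along the way (the model and its weights are invented):

```python
def model(x):
    """Toy network output: a weighted sum passed through max(0, .).
    The weights are invented; a real net would have millions."""
    w = [3.0, -1.0, 0.5]
    return max(0.0, sum(wi * xi for wi, xi in zip(w, x)))

def path_attributions(x, model, steps=100):
    """Transition from a blank (all-zero) reference toward the input,
    accumulating each feature's share of the output change at every step,
    in the spirit of Sundararajan's integrated gradients."""
    attr = [0.0] * len(x)
    for k in range(steps):
        lo, hi = k / steps, (k + 1) / steps
        base = [lo * xi for xi in x]
        for i in range(len(x)):
            stepped = list(base)
            stepped[i] = hi * x[i]
            attr[i] += model(stepped) - model(base)
    return attr

attr = path_attributions([1.0, 1.0, 2.0], model)
print([round(a, 2) for a in attr])  # roughly [3.0, -1.0, 1.0]: each weight times its input
```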

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create explanations for their models' internal logic. The Defense Advanced Research Projects Agency, the U.S. military's blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn't the only thing on their minds, she says. "I'm not sure what it's doing," they told her. "I'm not sure I can trust it."

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. "Fear of a neural net is completely justified," he says. "What really terrifies me is what else did the neural net learn that's equally wrong?"

Today's neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data, say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections fire in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns, somehow, to make fine distinctions among breeds. "Using modern horsepower and chutzpah, you can get these things to really sing," Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
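The training loop described above can be reduced to its smallest possible instance: a one-weight "network" fit by repeated forward passes and gradient updates. The data and learning rate are toy values, and the chain rule collapses to a single line:

```python
# One-weight network: predict y = w * x. Backpropagation here reduces to
# the chain rule on a squared-error loss; real nets apply the same idea
# across millions of weights and layers.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples from y = 2x
w, lr = 0.0, 0.05

for _ in range(200):               # repeat the cycle many times
    for x, y in data:
        pred = w * x               # forward pass
        grad = 2 * (pred - y) * x  # error sent backward: d(loss)/dw
        w -= lr * grad             # reweight the trigger
print(round(w, 3))  # converges to the true slope, 2.0
```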

Gupta has a different tactic for coping with black boxes: She avoids them. Several years ago Gupta, who moonlights as a designer of intricate physical puzzles, began a project called GlassBox. Her goal is to tame neural networks by engineering predictability into them. Her guiding principle is monotonicity: a relationship between variables in which, all else being equal, increasing one variable directly increases another, as with the square footage of a house and its price.

Gupta embeds those monotonic relationships in sprawling databases called interpolated lookup tables. In essence, they're like the tables in the back of a high school trigonometry textbook where you'd look up the sine of 0.5. But rather than dozens of entries across one dimension, her tables have millions across multiple dimensions. She wires those tables into neural networks, effectively adding an extra, predictable layer of computation: baked-in knowledge that she says will ultimately make the network more controllable.
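A one-dimensional version of such a table is easy to sketch. Gupta's real tables are multi-dimensional and wired into a network, but the interpolation, and the monotonicity guarantee that comes from keeping the stored values non-decreasing, look like this (keys and values are invented):

```python
import bisect

# A 1-D interpolated lookup table. Keys are sorted and values are
# non-decreasing, so the output is guaranteed monotonic in the input.
keys = [500, 1000, 2000, 3000]         # e.g. square footage (toy values)
values = [100.0, 180.0, 320.0, 450.0]  # e.g. price in $1000s (toy values)

def lookup(x):
    """Piecewise-linear interpolation between the two nearest entries."""
    if x <= keys[0]:
        return values[0]
    if x >= keys[-1]:
        return values[-1]
    i = bisect.bisect_right(keys, x)
    x0, x1 = keys[i - 1], keys[i]
    y0, y1 = values[i - 1], values[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(lookup(1500))  # halfway between 180 and 320: 250.0
```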

Caruana, meanwhile, has kept his pneumonia lesson in mind. To develop a model that would match deep learning in accuracy but avoid its opacity, he turned to a community that hasn't always gotten along with machine learning and its loosey-goosey ways: statisticians.

In the 1980s, statisticians pioneered a technique called the generalized additive model (GAM). It built on linear regression, a way to find a linear trend in a set of data. But GAMs can also handle trickier relationships by finding multiple operations that together can massage data to fit on a regression line: squaring a set of numbers while taking the logarithm for another group of variables, for example. Caruana has supercharged the process, using machine learning to discover those operations, which can then be used as a powerful pattern-detecting model. "To our great surprise, on many problems, this is very accurate," he says. And crucially, each operation's influence on the underlying data is transparent.
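A minimal illustration of the additive idea (not Caruana's actual algorithm): the model is a sum of one readable shape function per feature, fit here by backfitting each function against the residual left by the others. The data and feature levels are invented:

```python
# GAM-style additive model: prediction = f0(x0) + f1(x1), one shape
# function per feature, so each feature's contribution stays inspectable.
samples = [((0, 0), 1.0), ((0, 1), 3.0), ((1, 0), 2.0), ((1, 1), 4.0)]

f = [{0: 0.0, 1: 0.0}, {0: 0.0, 1: 0.0}]  # per-feature shape functions

for _ in range(50):  # backfitting: refit each feature to the residual
    for j in (0, 1):
        sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
        for x, y in samples:
            other = sum(f[k][x[k]] for k in (0, 1) if k != j)
            sums[x[j]] += y - other
            counts[x[j]] += 1
        for level in f[j]:
            f[j][level] = sums[level] / counts[level]

def predict(x):
    return f[0][x[0]] + f[1][x[1]]

print([round(predict(x), 2) for x, _ in samples])  # matches the targets
```

Because each `f[j]` is just a table of contributions, a domain expert can read off exactly how much each feature level adds or subtracts, which is the transparency Caruana is after.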

Caruana's GAMs are not as good as AIs at handling certain types of messy data, such as images or sounds, on which some neural nets thrive. But for any data that would fit in the rows and columns of a spreadsheet, such as hospital records, the model can work well. For example, Caruana returned to his original pneumonia records. Reanalyzing them with one of his GAMs, he could see why the AI would have learned the wrong lesson from the admission data. Hospitals routinely put asthmatics with pneumonia in intensive care, improving their outcomes. Seeing only their rapid improvement, the AI would have recommended the patients be sent home. (It would have made the same optimistic error for pneumonia patients who also had chest pain and heart disease.)

Caruana has started touting the GAM approach to California hospitals, including Children's Hospital Los Angeles, where about a dozen doctors reviewed his model's results. They spent much of that meeting discussing what it told them about pneumonia admissions, immediately understanding its decisions. "You don't know much about health care," one doctor said, "but your model really does."

Sometimes, you have to embrace the darkness. That's the theory of researchers pursuing a third route toward interpretability. Instead of probing neural nets, or avoiding them, they say, the way to explain deep learning is simply to do more deep learning.

If we can't ask why they do something and get a reasonable response back, people will just put it back on the shelf.

Like many AI coders, Mark Riedl, director of the Entertainment Intelligence Lab at the Georgia Institute of Technology in Atlanta, turns to 1980s video games to test his creations. One of his favorites is Frogger, in which the player navigates the eponymous amphibian through lanes of car traffic to an awaiting pond. Training a neural network to play expert Frogger is easy enough, but explaining what the AI is doing is even harder than usual.

Instead of probing that network, Riedl asked human subjects to play the game and to describe their tactics aloud in real time. Riedl recorded those comments alongside the frog's context in the game's code: "Oh, there's a car coming for me; I need to jump forward." Armed with those two languages (the player's and the code's), Riedl trained a second neural net to translate between the two, from code to English. He then wired that translation network into his original game-playing network, producing an overall AI that would say, as it waited in a lane, "I'm waiting for a hole to open up before I move." The AI could even sound frustrated when pinned on the side of the screen, cursing and complaining, "Jeez, this is hard."

Riedl calls his approach "rationalization," which he designed to help everyday users understand the robots that will soon be helping around the house and driving our cars. "If we can't ask a question about why they do something and get a reasonable response back, people will just put it back on the shelf," Riedl says. But those explanations, however soothing, prompt another question, he adds: How wrong can the rationalizations be before people lose trust?

Back at Uber, Yosinski has been kicked out of his glass box. Uber's meeting rooms, named after cities, are in high demand, and there is no surge pricing to thin the crowd. He's out of Doha and off to find Montreal, Canada, unconscious pattern recognition processes guiding him through the office maze, until he gets lost. His image classifier also remains a maze, and, like Riedl, he has enlisted a second AI to help him understand the first one.

Researchers have created neural networks that, in addition to filling gaps left in photos, can identify flaws in an artificial intelligence.

PHOTOS: ANH NGUYEN

First, Yosinski rejiggered the classifier to produce images instead of labeling them. Then, he and his colleagues fed it colored static and sent a signal back through it to request, for example, "more volcano." Eventually, they assumed, the network would shape that noise into its idea of a volcano. And to an extent, it did: That volcano, to human eyes, just happened to look like a gray, featureless mass. The AI and people saw differently.
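The "request more volcano" step is a technique known as activation maximization: gradient ascent on the input rather than on the weights. A one-dimensional toy version, with an invented neuron and a finite-difference gradient so it runs with no dependencies:

```python
def neuron(x):
    """Toy 'volcano neuron': responds most strongly near x = 3.
    A real neuron's preferred stimulus is a full image, not a number."""
    return -(x - 3.0) ** 2

def maximize_activation(x=0.0, lr=0.1, steps=200, eps=1e-5):
    """Gradient ascent on the *input*: nudge x in the direction that
    increases the neuron's response, i.e. ask for 'more volcano'."""
    for _ in range(steps):
        grad = (neuron(x + eps) - neuron(x - eps)) / (2 * eps)
        x += lr * grad
    return x

print(round(maximize_activation(), 3))  # climbs to the preferred input, 3.0
```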

Next, the team unleashed a generative adversarial network (GAN) on its images. Such AIs contain two neural networks. From a training set of images, the generator learns rules about imagemaking and can create synthetic images. A second adversary network tries to detect whether the resulting pictures are real or fake, prompting the generator to try again. That back-and-forth eventually results in crude images that contain features that humans can recognize.

Yosinski and Anh Nguyen, his former intern, connected the GAN to layers inside their original classifier network. This time, when told to create "more volcano," the GAN took the gray mush that the classifier learned and, with its own knowledge of picture structure, decoded it into a vast array of synthetic, realistic-looking volcanoes. Some dormant. Some erupting. Some at night. Some by day. And some, perhaps, with flaws, which would be clues to the classifier's knowledge gaps.

Their GAN can now be lashed to any network that uses images. Yosinski has already used it to identify problems in a network trained to write captions for random images. He reversed the network so that it can create synthetic images for any random caption input. After connecting it to the GAN, he found a startling omission. Prompted to imagine "a bird sitting on a branch," the network (using instructions translated by the GAN) generated a bucolic facsimile of a tree and branch, but with no bird. Why? After feeding altered images into the original caption model, he realized that the caption writers who trained it never described trees and a branch without involving a bird. The AI had learned the wrong lessons about what makes a bird. "This hints at what will be an important direction in AI neuroscience," Yosinski says. It was a start, a bit of a blank map shaded in.

The day was winding down, but Yosinski's work seemed to be just beginning. Another knock on the door. Yosinski and his AI were kicked out of another glass box conference room, back into Uber's maze of cities, computers, and humans. He didn't get lost this time. He wove his way past the food bar, around the plush couches, and through the exit to the elevators. It was an easy pattern. He'd learn them all soon.

This Startup Is Lowering Companies' Healthcare Costs With AI – Entrepreneur

Posted: at 2:13 am

Healthcare costs are rapidly increasing. Companies that provide health insurance for their employees have been getting hit with higher and higher premiums every year, with no end in sight.

One Chicago-based startup experiencing explosive growth has been tackling this very problem. This company leverages artificial intelligence and chatbot technology to help employees navigate their health insurance and use less costly services. As a result, both the employee and employer end up saving money.

Justin Holland, CEO and co-founder of HealthJoy, has a strong grasp on how chatbots are going to change healthcare and save companies money in the process. I spoke with Holland to get his take on what CEOs need to know about their health benefits and how to contain costs.

What's the biggest problem with employer-sponsored health insurance? Why have costs gone up year after year, faster than the rate of inflation?

One of the biggest issues for companies is that health insurance is kind of like giving your employees a credit card to go to a restaurant that doesn't have any prices. They are going to order whatever the waiter suggests that sounds good. They'll order the steak and lobster, a bottle of wine and dessert. Employees have no connection to the actual cost of any of the medical services they are ordering. Several studies show that the majority of employees don't understand the basic insurance terms needed to navigate insurance correctly. And it's not their fault. The system is unnecessarily complex. Companies have finally started to realize that if they want to start lowering their healthcare costs, they need to start lowering their claims. The only way they are going to do that is by educating their employees and helping them navigate the healthcare system. They need to provide advocates and other services that are always available to help.

I've had an advocacy service previously that was just a phone number, and I never used it. I actually forgot to use it all year and only remembered I had it when they changed my insurance plan and I saw the paperwork again. How is HealthJoy different? Is this where chatbots come in?

Phone-based advocacy services are great, but you've identified their biggest problem: no one uses them. They are cheap to provide, so a lot of companies will bundle them in with their employee benefits packages, but they have zero ROI or utilization. Our chatbot JOY is the hub for a lot of different employee benefits, including advocacy. JOY's main job is to route people to higher quality, less expensive care. She is fully supported by our concierge staff here in Chicago. They do things like call doctors' offices to book appointments, verify network participation and much more. Our app is extremely easy to use and has been refined over the last three years to get the maximum engagement and utilization for our members.

I've played around with your app. You offer a lot more than just an advocacy service. I see that you can also speak with a doctor in the app.

Yes, advocacy through JOY and our concierge team really is just the glue that binds our cost-saving strategies. We also integrate telemedicine within the app so an employee can speak with a doctor 24/7 for free. This is another way we save companies money. We avoid those cases where someone needs to speak with a doctor in the middle of the night for a non-emergency and ends up at the emergency room or urgent care. Avoiding one trip to the emergency room can save thousands of dollars. Telemedicine has been around for a few years, but, like advocacy, getting employees to use it has always been the big issue. Since we are the first stop for employees' healthcare needs, we can redirect them to telemedicine when it fits. We actually get over 50 percent of our telemedicine consults from when a member is trying to do something else. For example, they might be trying to verify whether a dermatologist is within their insurance plan. We'll ask them if they want to take a photo of an issue and have an instant consultation with one of our doctors. This is one of the reasons that employers are now seeing utilization rates that are sometimes 18X the industry standard. Redirecting all these consultations online is a huge savings for companies.

What other services do you provide within the app?

We actually offer a lot of services, and the list is constantly growing. Employers can even integrate their existing offerings as well. Healthcare is best delivered as a conversation, and that's why our AI-powered chatbot is perfect for servicing such a wide variety of offerings. The great thing is that it's all delivered within an app that looks no more complex than Facebook Messenger or iMessage.

Right now we do medical bill reviews and prescription drug optimization. We'll find the lowest prices for a procedure, help people with their health savings accounts and push wellness information. Our platform is like an operating system for healthcare engagement. The more we can engage with a company's employees for their healthcare needs, the more money we can save both the employer and the employees.

It sounds like you're trying to build the Siri of healthcare, no?

In a way, yes. Basically, we are trying to help employers reduce their healthcare costs by providing their employees with an all-in-one mobile app that promotes smart healthcare decisions. JOY will proactively engage employees, connect them with our benefits concierge team and redirect to lower-cost care options like telemedicine. We integrate each client's benefits package and wellness programs to deliver a highly personalized experience that drives real ROI and improves workplace health.

So if a company wants to launch HealthJoy to their employees, do they need to just tell them to download your app?

We distribute HealthJoy to companies exclusively through benefits advisors, who are experts in developing plan designs and benefits strategies that work, both for employees and the bottom line. We always want HealthJoy to be integrated within a thoughtful strategy that leverages the expertise the benefits advisor provides, and we rely on them to upload current benefits and plan information.

Marsha is a growth marketing expert, business advisor and speaker specializing in international marketing.

Read more from the original source:

This Startup Is Lowering Companies Healthcare Costs With AI - Entrepreneur

H2O.ai’s Driverless AI automates machine learning for businesses … – TechCrunch

Posted: at 2:13 am

Driverless AI is the latest product from H2O.ai aimed at lowering the barrier to making data science work in a corporate context. The tool assists non-technical employees with preparing data, calibrating parameters and determining the optimal algorithms for tackling specific business problems with machine learning.

At the research level, machine learning problems are complex and unpredictable; combining GANs and reinforcement learning in a never-before-seen use case takes finesse. But the reality is that a lot of corporates today use machine learning for relatively predictable problems: evaluating default rates with a support vector machine, for example.

But even these relatively straightforward problems are tough for non-technical employees to wrap their heads around. Companies are increasingly working data science into non-traditional sales and HR processes, attempting to train their way to costly innovation.

All of H2O.ai's products help to make AI more accessible, but Driverless AI takes things a step further by automating many of the tough decisions that need to be made when preparing a model. Driverless AI automates feature engineering, the process by which key variables are selected to build a model.
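The core idea behind automated feature selection can be illustrated with a toy sketch. This is not H2O.ai's actual algorithm (which also searches over engineered transformations and model hyperparameters); it simply scores each candidate variable against the target and keeps the strongest ones:

```python
import numpy as np

def select_features(X, y, k=2):
    """Rank features by absolute correlation with the target; keep the top k.

    A deliberately naive stand-in for automated feature engineering.
    """
    scores = []
    for j in range(X.shape[1]):
        # Pearson correlation between one feature column and the target
        corr = np.corrcoef(X[:, j], y)[0, 1]
        scores.append(abs(corr))
    # Indices of the k highest-scoring features, in column order
    return sorted(np.argsort(scores)[-k:].tolist())

# Synthetic data: the target depends only on columns 0 and 2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=200)
print(select_features(X, y, k=2))  # picks out columns 0 and 2
```

Real tools replace the univariate correlation score with model-based evaluation, but the select-and-validate loop is the same shape.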

H2O built Driverless AI with popular use cases built in, but it can't solve every machine learning problem. Ideally it can find and tune enough standard models to automate at least part of the long tail.

The company alluded to today's release back in January when it launched Deep Water, a platform allowing its customers to take advantage of deep learning and GPUs.

We're still in the very early days of machine learning automation. Google CEO Sundar Pichai generated a lot of buzz at this year's I/O conference when he provided details on the company's efforts to create an AI tool that could automatically select the best model and characteristics to solve a machine learning problem through trial, error and a ton of compute.

Driverless AI is an early step in the journey of democratizing and abstracting AI for non-technical users. You can download the tool and start experimenting here.

The rest is here:

H2O.ai's Driverless AI automates machine learning for businesses ... - TechCrunch

Mendel.ai raises $2M for AI-powered clinical trial matching platform – MobiHealthNews

Posted: at 2:13 am

San Francisco-based Mendel.ai, a startup that is developing an artificial intelligence-powered platform to match people with cancer to clinical trials, has raised $2 million in seed funding from DCM Ventures, BootstrapLabs, Indie Bio, LaunchCapital and SOSV. Mendel.ai will use the capital to forge partnerships with hospitals and cancer genomics companies to bring the system into use.

For $99, Mendel.ai will process an unlimited number of medical records for three months to match patients with potential clinical trials. Prospective trial participants can either upload records onto Mendel.ai's platform or give their doctors permission to share documents directly with the company. From there, a natural language processing algorithm combs through clinicaltrials.gov data, compares it to an individual's medical record and responds with a list of personalized matches. Over the course of a user's experience on the Mendel.ai platform, the system continuously updates matches, and patients can receive in-app requests to join trials. To improve the power of the platform immediately, Mendel.ai recommends patients undergo DNA testing.
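To make the matching step concrete, here is a deliberately crude sketch of scoring trials by keyword overlap with a patient record. The trial IDs, criteria strings, and scoring rule are all invented for illustration; Mendel.ai's actual NLP pipeline parses eligibility criteria far more carefully than this:

```python
import re

def match_trials(patient_record, trials, top_n=3):
    """Score each trial by how many of its eligibility terms appear in the
    patient's record text, then return the best-scoring trial IDs."""
    record_terms = set(re.findall(r"[a-z]+", patient_record.lower()))
    scored = []
    for trial in trials:
        criteria_terms = set(re.findall(r"[a-z]+", trial["criteria"].lower()))
        overlap = len(record_terms & criteria_terms)
        scored.append((overlap, trial["id"]))
    scored.sort(reverse=True)  # highest overlap first
    return [trial_id for score, trial_id in scored[:top_n] if score > 0]

record = "female, 54, non-small cell lung cancer, EGFR mutation positive"
trials = [
    {"id": "NCT-A", "criteria": "lung cancer with EGFR mutation"},
    {"id": "NCT-B", "criteria": "breast cancer, HER2 positive"},
    {"id": "NCT-C", "criteria": "melanoma, stage III"},
]
print(match_trials(record, trials, top_n=2))  # NCT-A outscores NCT-B
```

The hard part in practice is exactly what this sketch skips: understanding negation, exclusion criteria, lab values, and synonyms in free-text records.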

The company, named for the founder of modern genetics science, Gregor Mendel, was created out of frustration over inefficient clinical trial matching. After losing his aunt to cancer and later finding out she could have been connected with a nearby and potentially life-saving clinical trial, Mendel.ai CEO Dr. Karim Galil set out to improve the recruitment process. As it stands, the process is besieged by mountains of data and too little time for both physicians and patients. Doctors can't keep up with all the new clinical trial data as it comes out, and patients can be overwhelmed by selecting a trial from vast databases that work with keywords and typically spit out hundreds of possible matches, unfiltered for many eligibility factors.

"A lung cancer patient, for example, might find 500 potential trials on clinicaltrials.gov, each of which has a unique, exhaustive list of eligibility criteria that must be read and assessed," Galil told TechCrunch. "As this pool of trials changes each week, it is humanly impossible to keep track of all good matches."

Digital innovation activity in the clinical trials arena has been heating up as of late. There are now several companies offering different tools to improve study design, remote monitoring capabilities and patient recruitment and retention in clinical trials, and many are just getting off the ground.

Just last week, the Clinical Trials Transformation Initiative released new endpoint recommendations focused on the use of mobile technology in clinical trials. And in the past six months, there has been a slew of seed and early-stage funding for companies innovating in the space. Mobile data capture-focused Clinical Research IO raised $1.6 million in January. In March, Philadelphia-based VitalTrax raised $150,000 in seed funding to build out software to improve patient engagement in clinical trials. Medidata, a New York City-based company that offers cloud storage and data analytics services for clinical trials, announced in April its plans to acquire Mytrus, a clinical trial technology company focused on electronic informed consent and remote trials. Also in April, remote clinical trial company Science 37 raised $29 million to move forward with technology that allows patients to participate in trials from their homes. But while others are focusing on improving data collection quality or study efficiency, the approach of Mendel.ai is on par with the likes of much larger companies like IBM Watson, which is also experimenting with artificial intelligence to match patients with clinical trials. At the beginning of June, IBM Watson shared data from a Novartis-sponsored pilot, wherein Watson processed data from 2,620 lung and breast cancer patients and was able to cut the time needed to screen for clinical trials by nearly 80 percent.

For Mendel.ai, the task at hand is to integrate with health organizations and cancer genomics centers. Currently, the company is working with the Comprehensive Blood & Cancer Center in Bakersfield, California to enable the centers doctors to match their patients with trials. And while its still early days, Galil told TechCrunch the company wants to see Mendel.ai go head-to-head with IBM Watson.

Read more:

Mendel.ai raises $2M for AI-powered clinical trial matching platform - MobiHealthNews

Prisma’s next AI project is a fun selfie sticker maker called Sticky … – TechCrunch

Posted: at 2:13 am

What do you do after garnering tens of millions of downloads and scores of clones of your AI-powered style transfer app? Why, keep innovating of course.

Meet Sticky, the next app from the startup behind Prisma, which turns selfies into stylized and/or animated stickers for sharing to your social feeds. Sticky is launching today on iOS, with an Android version due in a week or two.

While Prisma gained viral popularity last year, netting its Moscow-based makers around 70 million downloads in a matter of months, its core feature has been rapidly and widely copied, including by social goliaths like Facebook.

The team's response to having their USP eaten alive by others' algorithms was to evolve their cool tool into a platform. But with the social app space essentially sewn up (at least in the West) by Facebook, which also owns Instagram and WhatsApp, building momentum and making a lasting impression as a new platform is clearly not an easy task.

Co-founder Aram Airapetyan tells us Prisma's audience has been very stable for the last six months, shaking out to around 10 million monthly active users.

That's not bad for a roughly one-year-old app. But, well, Facebook has two billion monthly users at this point. (And that's before you factor in all the Instagram and WhatsApp users.) So it's hardly a fair fight.

Still, Prisma's team isn't sitting still. Their next app project also applies neural networks to a photo-focused task, this time creating selfie stickers for sharing to messaging platforms such as WhatsApp, WeChat, Apple's iMessage and Telegram.

Sticky's core tech is an auto cut-out feature that quickly extracts your selfie from whatever background you snapped against, so that it can be repurposed into sharable social currency as a standalone sticker.

"We trained neural networks to find different objects in a photo/video and even on a live video stream. So basically our trained neural networks are looking for a person in a photo. That's all we need. Then we cut out the background and the sticker is ready," explains Airapetyan, describing it as "a very complex tech behind an easy user experience."

The app lets you leave your cut-out selfie without any background, or edit the background lightly by tapping through a few full-fill color options to make the sticker a bit more visually impactful. You can also add a white border around your selfie for extra stickerish delineation.
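Assuming a segmentation network has already produced a binary "person" mask, the cut-out-and-border step can be sketched in a few lines of array code. This is illustrative only (the function name and the simple dilation-based border are not Prisma's actual implementation):

```python
import numpy as np

def make_sticker(image, person_mask, border=2):
    """Turn an RGB image plus a binary person mask into an RGBA sticker:
    transparent background, optional white border around the subject."""
    h, w, _ = image.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = image
    rgba[..., 3] = person_mask * 255        # alpha: opaque where the person is

    if border > 0:
        # Grow the mask by `border` pixels using repeated 4-neighbour passes
        dilated = person_mask.astype(bool)
        for _ in range(border):
            grown = dilated.copy()
            grown[1:, :] |= dilated[:-1, :]
            grown[:-1, :] |= dilated[1:, :]
            grown[:, 1:] |= dilated[:, :-1]
            grown[:, :-1] |= dilated[:, 1:]
            dilated = grown
        ring = dilated & ~person_mask.astype(bool)
        rgba[ring] = [255, 255, 255, 255]   # white sticker border
    return rgba

# Tiny synthetic example: a 2x2 "person" in the middle of a 10x10 image
img = np.full((10, 10, 3), 100, dtype=np.uint8)
mask = np.zeros((10, 10), dtype=np.uint8)
mask[4:6, 4:6] = 1
sticker = make_sticker(img, mask, border=2)
```

The production-quality version of this step is the easy part; as the interview notes, getting an accurate mask around hair and in poor lighting is where the real difficulty lies.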

Airapetyan says more options are planned on the background front in the future, including the ability to superimpose selfie stickers over photos of your choice.

It's fair to say that, at this MVP stage, the cut-out feature is by no means perfect. It can get very confused by hair, for instance. And certain (high or low) lighting conditions can easily result in bits of your cheek going missing. But with a bit of trial and error you can get a reasonable result, and without having to spend much time on it.

Also worth noting: all processing is done locally on the device, according to Airapetyan.

From here, Sticky shows its Prisma pedigree, as you can tap on your cut-out selfie to apply a Prisma-ish style transfer effect. (The version I tested had two style options, a black-and-white and a color style, but the plan is to add "lots more cool comic and cartoon-like styles," says Airapetyan.)

You can further augment your sticker by adding a text caption too, if you wish.

When you're happy with your creation you can save it or share it to your social feeds, although at this stage stickers generally share as a picture rather than in a sticker format. (The team is hoping to get support for that, and says Telegram and WeChat are working to provide APIs.)

Saved stickers are stored as an ongoing, editable collection within the app.

As well as still selfies, Sticky also lets you create animated stickers. To do this, instead of tapping once to snap a selfie, you hold down on the camera button while pulling your silly face (or whatnot), and the app snaps multiple frames and processes them into an animation.

Animated Sticky stickers are displayed in WhatsApp as a GIF with a play button (but loop continuously when viewed in your Sticky sticker collection).

"For the time being, not all the messengers have an API for native sticker sharing," notes Airapetyan. "That's why, for example, your sticker is shared like a picture to WhatsApp, or like a GIF if it's animated."

He also concedes the cut-out tech is a little rough around the edges at this point, but says it will improve the more people use it, given the algorithms are learning from the data.

"Sometimes the cut-out tech isn't perfect, but the more people use Sticky, the better it will become," he says. "That's the best thing about the tech. We also work hard to improve it! For example, we can let people create stickers with their pets in their hands."

"Sticky is surely going to become a better app with lots more features. We just need to find out what people need first. Stickers, in general, are very popular nowadays, and the popularity will spiral up, for sure," he adds.

The app is a free download, and the team isn't even thinking about monetization at this point. "We just focus on the product right now," says Airapetyan.

See original here:

Prisma's next AI project is a fun selfie sticker maker called Sticky ... - TechCrunch

Samsung’s Bixby and Why It’s So Hard to Create a Voice AI – New York Magazine

Posted: at 2:13 am

Samsung's Bixby can't hear you right now.

It's conventional wisdom within tech that voice interaction (that is, talking to your phone) is the future of how we interact with our gadgets, particularly voice interaction through a personal assistant like Google, Siri, Alexa, or Cortana. Samsung desperately wanted to play catch-up, and introduced its own AI agent, Bixby, alongside this year's flagship phone, the Samsung Galaxy S8. The only problem? Bixby can't understand you. Or rather, Bixby can understand you if you speak Korean. But its English-language capabilities, like an MTA project gone bad, just keep getting pushed further and further back.

The field of voice recognition and conversational AI took a huge leap forward about five years ago, as advances in machine learning (specifically, the use of recurrent neural networks) allowed speech-recognition accuracy to jump. In 2013, Google's voice-recognition accuracy hovered around 75 percent, per Kleiner Perkins's Mary Meeker. Today, Google's voice recognition is at 95 percent. It got there because Google had a tremendous amount of data with which to train its voice-recognition systems. (Meeker also says about 20 percent of queries are made by voice, showing why Samsung may be anxious to get Bixby up and running.)

Both Google and Amazon allow their assistants to train against a user's own voice, learning a particular person's quirks and regional variations in speech. Even Apple, which has significantly lagged behind the competition, has improved its voice recognition (even if Siri itself can be frustratingly dense about what to do with those voice queries). But even these voice assistants require you to speak clearly, with significant pauses between words and clear enunciation. Blur your words together quickly like you do in colloquial speech, and these systems, which collectively have thousands of very, very smart people working on them, can still be thrown for a loop.

Meanwhile, there's Samsung. A spokesperson for the company, speaking to the Korea Herald, says, "Developing Bixby in other languages is taking more time than we expected, mainly because of the lack of the accumulation of big data." Google, Amazon, and Apple all have vast libraries of speech to fall back on, and Google in particular has its search engine to simulate the appearance of real depth (even if it can be badly led astray).

None of this is to bag on Samsung. The company is the second-largest manufacturer of cell phones in the world, and its Galaxy smartphones were briefly outselling the iPhone in 2016. It's also an enormous company, of which cell phones are but one of many going concerns. (Nobody expects Google to turn out washing machines, or Apple to make a vacuum cleaner.) But the table stakes in the world of voice recognition and AI agents are so tremendously high, it's hard to see how any company, even one as large as Samsung, will be able to break through.

Not that that's deterring Samsung. It's reportedly already planning to bring its own Echo competitor to market, code-named the Vega. It's easy to see this as Samsung's reach exceeding its grasp (why bring a product to market when you can't even get your phones to understand English?), but there's a good reason why Samsung may be forging ahead. Even if it can't rack up the sales numbers the Echo has seen, it'll at least get a few more people talking to Samsung and helping it build up its own store of voice data to train against.

Follow this link:

Samsung's Bixby and Why It's So Hard to Create a Voice AI - New York Magazine

Is Artificial Intelligence A (Job) Killer? | HuffPost – HuffPost

Posted: at 2:13 am

There's no shortage of dire warnings about the dangers of artificial intelligence these days.

Modern prophets, such as physicist Stephen Hawking and investor Elon Musk, foretell the imminent decline of humanity. With the advent of artificial general intelligence and self-designed intelligent programs, new and more intelligent AI will appear, rapidly creating ever smarter machines that will, eventually, surpass us.

When we reach this so-called AI singularity, our minds and bodies will be obsolete. Humans may merge with machines and continue to evolve as cyborgs.

Is this really what we have to look forward to?

AI, a scientific discipline rooted in computer science, mathematics, psychology, and neuroscience, aims to create machines that mimic human cognitive functions such as learning and problem-solving.

Since the 1950s, it has captured the public's imagination. But, historically speaking, AI's successes have often been followed by disappointments, caused, in large part, by the inflated predictions of technological visionaries.

In the 1960s, one of the founders of the AI field, Herbert Simon, predicted that "machines will be capable, within twenty years, of doing any work a man can do." (He said nothing about women.)

Marvin Minsky, a neural network pioneer, was more direct: "Within a generation," he said, "the problem of creating artificial intelligence will substantially be solved."

But it turns out that Niels Bohr, the early 20th-century Danish physicist, was right when he (reportedly) quipped, "Prediction is very difficult, especially about the future."

Today, AI's capabilities include speech recognition, superior performance at strategic games such as chess and Go, self-driving cars, and revealing patterns embedded in complex data.

These talents have hardly rendered humans irrelevant.

But AI is advancing. The most recent wave of AI euphoria was sparked in 2009 by much faster training of deep neural networks.

These networks consist of large collections of connected computational units called artificial neurons, loosely analogous to the neurons in our brains. To train a network to "think," scientists provide it with many solved examples of a given problem.

Suppose we have a collection of medical-tissue images, each coupled with a diagnosis of cancer or no cancer. We would pass each image through the network, asking the connected neurons to compute the probability of cancer.

We then compare the network's responses with the correct answers, adjusting the connections between neurons with each failed match. We repeat the process, fine-tuning all along, until most responses match the correct answers.

Eventually, this neural network will be ready to do what a pathologist normally does: examine images of tissue to predict cancer.
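The loop described above (compute a response, compare it with the correct answer, adjust the connections, repeat) can be sketched with a single artificial neuron trained on synthetic data. This is a minimal illustration with made-up features standing in for tissue images, not the deep, many-layered networks used for real diagnostics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for tissue images: 2 features per example,
# label 1 ("cancer") when their sum is positive
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# A single artificial neuron: weighted sum squashed to a probability
w = np.zeros(2)
b = 0.0

def predict(X):
    return 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid gives P(cancer)

for _ in range(200):                        # repeat, fine-tuning all along
    p = predict(X)
    error = p - y                           # compare responses to correct answers
    w -= 0.1 * X.T @ error / len(y)         # adjust connection weights
    b -= 0.1 * error.mean()

accuracy = ((predict(X) > 0.5) == (y == 1)).mean()
```

On this toy problem the neuron quickly learns the decision boundary; a real pathology network does the same thing with millions of weights and far messier data.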

This is not unlike how a child learns to play a musical instrument: she practices and repeats a tune until perfection. The knowledge is stored in the neural network, but it is not easy to explain the mechanics.

Networks with many layers of neurons (hence the name "deep" neural networks) only became practical when researchers started using many parallel processors on graphics chips for their training.

Another condition for the success of deep learning is the large sets of solved examples. Mining the internet, social networks and Wikipedia, researchers have created large collections of images and text, enabling machines to classify images, recognize speech, and translate language.

Already, deep neural networks are performing these tasks nearly as well as humans.

But their good performance is limited to certain tasks.

Scientists have seen no improvement in AI's understanding of what images and text actually mean. If we showed a Snoopy cartoon to a trained deep network, it could recognize the shapes and objects (a dog here, a boy there) but would not decipher its significance (or see the humor).

We also use neural networks to suggest better writing styles to children. Our tools suggest improvement in form, spelling, and grammar reasonably well, but are helpless when it comes to logical structure, reasoning, and the flow of ideas.

Current models do not even understand the simple compositions of 11-year-old schoolchildren.

AI's performance is also restricted by the amount of available data. In my own AI research, for example, I apply deep neural networks to medical diagnostics, which has sometimes resulted in slightly better diagnoses than in the past, but nothing dramatic.

In part, this is because we do not have large collections of patient data to feed the machine. But the data hospitals currently collect cannot capture the complex psychophysical interactions causing illnesses like coronary heart disease, migraines or cancer.

So, fear not, humans. Febrile predictions of AI singularity aside, we're in no immediate danger of becoming irrelevant.

AI's capabilities drive science fiction novels and movies and fuel interesting philosophical debates, but we have yet to build a single self-improving program capable of general artificial intelligence, and there's no indication that intelligence could be infinite.

Deep neural networks will, however, indubitably automate many jobs. AI will take our jobs, jeopardising the existence of manual labourers, medical diagnosticians and, perhaps someday, to my regret, computer science professors.

Robots are already conquering Wall Street. Research shows that artificial intelligence agents could lead some 230,000 finance jobs to disappear by 2025.

In the wrong hands, artificial intelligence can also cause serious danger. New computer viruses can detect undecided voters and bombard them with tailored news to swing elections.

Already, the United States, China, and Russia are investing in autonomous weapons using AI in drones, battle vehicles, and fighting robots, leading to a dangerous arms race.

Now thats something we should probably be nervous about.

Marko Robnik-Šikonja, Associate Professor of Computer Science and Informatics, University of Ljubljana

This article was originally published on The Conversation. Read the original article.

See the article here:

Is Artificial Intelligence A (Job) Killer? | HuffPost - HuffPost

A ‘Neurographer’ Puts the Art in Artificial Intelligence – WIRED

Posted: at 2:12 am

Claude Monet used brushes, Jackson Pollock liked a trowel, and Cartier-Bresson toted a Leica. Mario Klingemann makes art using artificial neural networks.

In the past few years this kind of software, loosely inspired by ideas from neuroscience, has enabled computers to rival humans at identifying objects in photos. Klingemann, who has worked part-time as an artist in residence at the Google Cultural Institute in Paris since early 2016, is a prominent member of a new school of artists who are turning this technology inside out. He builds art-generating software by feeding photos, video, and line drawings into code borrowed from the cutting edge of machine learning research. Klingemann curates what spews out into collections of hauntingly distorted faces, figures, and abstracts. You can follow his work on a compelling Twitter feed.

"A photographer goes out into the world and frames good spots. I go inside these neural networks, which are like their own multidimensional worlds, and say, 'Tell me how it looks at this coordinate; now how about over here?'" Klingemann says. With tongue in cheek, he describes himself as a "neurographer."

Klingemann's one big project for Google so far is an interactive online installation, launched in November, that uses image recognition to find visual connections between any two images in a giant collection covering thousands of years of art history, say, a Roman sculpture and a Frida Kahlo self-portrait. While working in secret on a sequel to that project at Google, Klingemann has been exploring the potential of neurography in public on his own time. Many of his recent creations were made with a technique trendy among machine learning researchers called generative adversarial networks, which, given the right source material, can teach themselves to fabricate strikingly realistic digital images and audio files.

Some computer science researchers are using the method to fill in missing details in patchy radio telescope images. Others are using it to generate synthetic health records, so that systems can be trained without exposing real patient data. Klingemann has harnessed it to generate images that combine the styles of 19th-century portraits and 21st-century selfies, and to fabricate impressively realistic footage like this clip of 1960s French chanteuse Françoise Hardy.

Klingemann's work is in turn inspiring other artists. In a Barcelona show called My Artificial Muse earlier this month, artist Albert Barqué-Duran spent three days painting a fresco of an image Klingemann's software had generated from a stick figure modelled on John Everett Millais's famous painting Ophelia.

All of which raises the perennial question: Is this art? Klingemann says he and others using neural networks this way will have to gradually earn their place in the art world, just as video and digital artists had to do over the last several decades. "These new forms always have a hard time being accepted by the establishment," he says.

Right now Klingemann has to work hard to find the training data that will cause his neural networks to produce interesting results. He's built himself a Tinder-style interface to quickly work through piles of newly generated neurographs and find the few that strike him as any good. "I produce a thousand images and maybe two or three are great, 50 are promising, and the rest are just ugly or repetitive," he says.

As you may have noticed, the images he does select typically come with more than a little of the uncanny about them; Klingemann has lost count of the times he's been told the faces and figures his code generates are reminiscent of Francis Bacon's famously grotesque and disturbing work.

The comparison is apt. It's also evidence of how far artificial neural networks are from really understanding images or art, not that computers have warped minds. "I'm doing creepy right now because I can't do non-creepy. I wish I could," Klingemann says. "In two or three years, the creepiness will go away, which might make it more creepy, because we won't be able to distinguish it from a photo or painted artwork." The uncanniest AI artist of all might be the one whose raw output doesn't look artificial.

Originally posted here:

A 'Neurographer' Puts the Art in Artificial Intelligence - WIRED
