Monthly Archives: July 2017

How to Build a Career, Not Just Find a Job – Entrepreneur

Posted: July 7, 2017 at 2:14 am


Headlines abound whenever Facebook or Google introduce a new feature or product. Recently, both rolled out similar services for job seekers, but don't expect these tools to take all the work out of landing your dream job.

Here's what the two Silicon Valley giants are offering. Google will aggregate listings from five major job sites to display in search results. On Facebook, companies can post jobs and contact and track applicants. The social media site will also push relevant jobs into users' news feeds.

Related: To Succeed You Must Make Yourself Indispensable

Both companies want to keep people on their websites longer and serve paying customers (i.e., advertisers and businesses). For the individual job seeker, these launches tout added convenience -- but to what purpose? Being able to blast out resumes to more companies from a single site may feel better quantitatively, but it's potentially worse from a qualitative standpoint.

If you want to build your career and not just find a job, developing your professional network will be far more valuable than uploading your resume to every listing site on the internet.

Just do it: Put yourself out there, don't dismiss anyone as unhelpful, and be gracious to everyone you meet. You never know who may connect you to a great opportunity. Rather than view your network as a bunch of people you may eventually be able to use, approach it as a chance to meet interesting, diverse people who will expand your world and introduce you to new experiences, whether they be jobs or not. Don't limit yourself to the short-term goal of finding a job; invest in relationships that you can carry with you for years to come.

Related: How to Be Ready When Opportunity Knocks

Certainly, networking can be daunting when you're early in your career and don't have a lot to show for yourself. And especially if you're shy, it may be even harder to initiate conversations with people you barely know who are older and more experienced. The truth, however, is that many of us genuinely enjoy using our successes to help someone else who shows promise and ambition. I encourage my peers to become mentors all the time, so they can see how rewarding it is to get a youthful perspective and use their experience to further someone else's career.

LinkedIn is a great place to connect with potential mentors as well as people who might be looking to hire. You can also visit the pages of companies that interest you and find names of people in the department where you'd like to work. But just as blindly sharing your resume won't guarantee results, you need to do more than send strangers invitations to connect online. Craft a personalized message to each person explaining your goals, why you consider this person a role model, and why you deserve a half-hour of their time.

You're also going to have to approach people in the real world. Step outside your comfort zone, attend industry functions and meetups, and request informational interviews with people in roles to which you aspire. The worst that can happen is they say "no, thanks" or don't respond. I'm in my college's alumni database and have indicated I'm open to hearing from recent grads seeking advice. Your school very likely has a similar network for finding established professionals in your target field.

Related: Get Maximum LinkedIn Leverage to Boost Your Career and Grow Your Business

Continuing education is another avenue for meeting others involved in your industry -- both teachers and fellow students. Ask where others have worked, how they found their jobs, and whether they'd be willing to make introductions for you. Connect online to see who else they know.

And, while you don't want to turn every fun activity into a professional networking session, keep your eyes and ears open when you're socializing too. There might be someone in your book club, church, or spin class who knows someone at your dream company. As long as you're respectful and not overbearing, it can't hurt to let people know you're looking for career help.

Above all, remember you are asking people to give you something: their time, their advice, their support. You're asking for a favor, so be gracious, patient, and receptive, whether they're in a position to offer you work or not.

Related: You Are Your Best Investment

Listen more than you talk. Be curious, open-minded, and flexible, rather than having a fixed agenda and set of expectations. If you've had a good first meeting but aren't sure where to go from there, ask if you can continue to check in with them occasionally and seek their guidance when you're prepping for important interviews. See if they'll keep you in mind for an internship or even a freelance project.

Walking away from a networking meeting or informational interview without a promise is not a failure. You're building relationships and your career, not job hunting. This is the beginning of a conversation that could last for years if it holds value for both of you.

Lisa Haugh has more than 15 years of experience leading legal and HR functions for a range of startups and mature companies. At Udemy, she heads up all legal and human resource functions, including all hiring, training and diversity efforts...


Volkswagen Debuts Virtual Reality App for Training, Collaboration … – Fortune

Posted: at 2:13 am

Volkswagen is buckling down on virtual reality technology.

The German auto giant said Wednesday that it built a virtual reality app that acts as a sort of digital meeting room where team members can interact with one another and discuss auto designs, among other things.

The new VR app gathers all of the company's previous VR apps, such as its virtual reality car showrooms, into one hub. Employees who work across Volkswagen's various brands, like Audi and Skoda Auto, will be able to access the app and work together on different projects.


"Going forward, we can be virtual participants in workshops taking place at other sites, or we can access virtual support from experts at another brand if we are working on an optimization," said Volkswagen (VLKAY) group logistics member Mathias Synowski in a statement. "That will make our daily teamwork much easier and save a great deal of time."

Volkswagen said that it would be using a version of the HTC Vive virtual reality headset for businesses as part of its rollout of the new app.

HTC debuted its HTC Vive Business Edition last June. The headset costs $1,200 and comes with a 12-month warranty, customer support, and other features intended to make it more attractive to business clients.


Newton couple creating virtual reality software in basement – Washington Times

Posted: at 2:13 am

NEWTON, Kan. (AP) - A Newton couple is bringing Silicon Valley to their Kansas-based lab - better known as their basement.

Corey and Michele Janssens, founders of ViewVerge, are enhancing the way people see media through a 2D-to-3D converter and a 3D-to-3D enhancer for augmented and virtual reality (AR/VR), The Wichita Eagle (http://bit.ly/2sNxVyt) reported.

"Our goal was to basically re-create a biological version of 3D - a more natural 3D - because of AR/VR," Corey Janssens said. "We perceive in 3D, so it just seemed kind of natural: Why have a 3D device and watch 2D content?"

The couple has struggled to attract investors who want to invest outside of Silicon Valley, but said they have no plans to leave the state.

"What we're doing is a Silicon Valley venture in Kansas," Michele Janssens said. "I knew that would be a challenge, and it is just as big a challenge as we thought it would be."

"But there are good things happening in Kansas. And everyone tells us there is a push right now to venture more into tech and bring jobs and money to the Wichita area."

While the Janssenses have sought and attracted mentors nationwide in 3D technology, marketing and branding, they said success will occur when they have licensing and investors to help make ViewVerge technology readily available through mobile applications, or for 2D-to-3D conversion in the medical and military fields.

Corey Janssens, a former Army unmanned aerial vehicle pilot and self-taught theoretical physicist and engineer, and Michele Janssens, a speech therapist, have what they call a marriage of science and communication.

"An interesting fact that is a very integral part of who we are as a couple and hopefully as a vital company: Corey is autistic, I am a speech therapist, and we're married," Michele Janssens said. "He is passionate about building things and physics and the science, and I am passionate about communication."

"It's really kind of a unique marriage."

Corey Janssens said he has had many jobs in his life that led him to developing this software.

It was while he spent five years as part of, and then leading, a confidential Microsoft think tank that Bill Gates called him "a modern-day Isaac Newton," according to a ViewVerge media release.

"That interaction and exposure led him to apply to get one of the first rounds of developer HoloLens they released," Michele Janssens said. "We waited about 10 years to do something like this."

The couple received their Microsoft HoloLens - the first self-contained holographic computer - in May 2016.

"When we got that HoloLens, he knew this was it," Michele Janssens said.

It took just three to four months for Corey Janssens to develop the foundation for the software, and after continual improvements they think they have the answer to natural, human-like 3D media.

"I don't believe you're going to have much 2D media in the future," he said. "It just makes more sense to have graphics that are put in the format of the way we naturally see things."

"If you build a system that is converting 2D to 3D, in a sense that is what the human brain does. We don't actually see 3D; you infer distance from having two eyes."

"So by mimicking the biological system well enough with some added algorithms, you have an early computer vision system that is much more human."
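The "distance from two eyes" idea Janssens describes maps onto the classic stereo-vision relation: a feature's horizontal shift between two offset views (its disparity) shrinks as it gets farther away. As a hedged illustration only (this is the textbook pinhole-stereo formula, not ViewVerge's actual algorithm, and the camera numbers are invented):

```python
# Toy stereo-depth sketch: two horizontally offset cameras ("eyes") see a
# feature at slightly different horizontal positions; that shift (disparity)
# is inversely proportional to distance:  depth = focal_length * baseline / disparity
# The focal length and baseline below are illustrative values, not real hardware.

def depth_from_disparity(disparity_px: float, focal_px: float = 800.0,
                         baseline_m: float = 0.064) -> float:
    """Estimate distance (meters) from the pixel disparity between two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

near = depth_from_disparity(80)  # large shift between eyes -> 0.64 m away
far = depth_from_disparity(8)    # small shift -> 6.4 m away
```

A 2D-to-3D converter runs this logic in reverse: it estimates per-pixel depth from a single image, then synthesizes the second eye's view by shifting pixels according to the same relation.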

The 3D software currently available has been gimmicky, Michele Janssens said, and that is not their goal.

"When (people) hear 3D, they think stuff popping out in the face, and that's not actually what 3D is," Michele Janssens said.

"Our goals are to make it natural and comfortable, just like when you're looking around."

___

Information from: The Wichita (Kan.) Eagle, http://www.kansas.com


EPL 2030: Sergio Aguero in Your Lounge – Future of Football and Virtual Reality – Bleacher Report

Posted: at 2:13 am

Kirsty Wigglesworth/Associated Press

From sipping champagne in a virtual luxury box at the Camp Nou to sitting pitchside at Old Trafford from a hotel in Melbourne, Australia, the way we consume football is being reimagined by broadcasters and technology companies.

As clubs look for new ways to build and engage their audiences, bold technical thinkers are plotting a virtual-reality revolution. Forget 3D television, which failed to take off and was hugely expensive; VR is the next frontier of football entertainment. Some have already arrived.

In August last year, Bayern Munich's opening Bundesliga game of the season against Werder Bremen was shown live in VR, the first time such an experiment had taken place, while Fox Sports used virtual reality images to enhance their broadcast of an Eredivisie match between PSV Eindhoven and Feyenoord in February of this year.

This is next-generation VR we're talking about. From the Oculus Rift to the Google Daydream, Samsung Gear or HTC Vive, new technologies are poised to transform football viewing as you know it.

But will virtual reality live up to its hype, or are those staking millions on it destined for an expensive reality check?

Miheer Walavalkar sits quietly in a Soho coffee shop. He takes his smartphone and slides it into a basic VR headset. While the rest of the world sips on lattes and flat whites, I am ushered into the world of virtual reality.

Walavalkar, born in India but residing in the U.S., is one of the brains behind LiveLike, whose introduction into the marketplace was one of the stories of sports VR last year. The company has raised $5 million in funding thanks to former NBA commissioner David Stern and a group of venture firms led by Evolution Media Partners and Elysian Park Ventures, created by the owners of the Los Angeles Dodgers baseball team.

LiveLike are powering a new app for Fox Sports, called Fox Sports VR, which has already been busy impressing customers, notably showing a college football game between Oklahoma and Ohio State in virtual reality. They also have a partnership to create VR content with Manchester City, and this week the company will team up with Fox to show the CONCACAF Gold Cup in VR, beginning with Bruce Arena's United States versus Panama on July 8 in LiveLike's "virtual suite".

The company has tested its VR capabilities with the Premier League and at the biggest club game in the world, El Clasico, which was watched in VR by an estimated 37,000 people, on less than a week's notice and behind a paywall.

Headset on, I'm reclining on a virtual sofa with a Premier League game happening in front of me. Sergio Aguero is nearly on my lap and David Silva is just further afield. Tilting the head one way changes the camera angle and allows the viewer to move behind the goal. Tilting it the other way allows them to watch from the stands.

There's a stats table you can flick through with just the smallest head movement, giving you the latest possession numbers, passing percentages and everything you could want as a football fan. Replays are freely available; you just need to move your head slightly to initiate them. You can watch the same incident from three or four different angles.
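The head-tilt navigation described above can be sketched as a simple mapping from head yaw to a broadcast camera feed. This is a hypothetical illustration of the interaction model, not LiveLike's actual code; the angle thresholds and feed names are invented:

```python
# Hypothetical head-orientation router for a VR football viewer: tilting the
# head past a threshold one way switches to the behind-goal camera, the other
# way to the stands view; otherwise the default pitchside view is shown.

def select_feed(yaw_degrees: float) -> str:
    """Map horizontal head rotation (degrees) to a camera feed name."""
    if yaw_degrees <= -30:
        return "behind-goal"
    if yaw_degrees >= 30:
        return "stands"
    return "pitchside"

# Small head movements stay on the default view; larger ones switch cameras.
views = [select_feed(a) for a in (-45, -10, 0, 15, 40)]
```

A real viewer would read the yaw continuously from the headset's pose tracking and cross-fade between feeds rather than cut instantly, but the routing logic is the same shape.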

It's intuitive, easy to use and, after the first five minutes, it's easy to forget that you are sat in a busy London coffee shop with a headset on. Occasionally, when the ball is swept away to the far side of the pitch, it is difficult to see the action, but a quick glance up at the large virtual screen keeps you abreast of what is happening.

For Walavalkar, who grew up watching the Premier League from India, the opportunity to share the beautiful game with millions back home remains a huge driving force.

"For us, live sports is the mecca," Walavalkar tells Bleacher Report. "We have done a few live events that have gone really well and got good feedback. It's all about the user experience and social features: the ability to teleport the user into an experience.

"We've been able to integrate statistics and replays, while making it a much more social experience. We want fans to feel like they're right in the heart of what's happening."

For LiveLike, the U.S. and Asia are two of the biggest target markets. As noted, the app has already won deals with Fox and Manchester City, but competition is fierce. Anyone who stands still for more than a minute faces being left behind.

Let's take a look at the VR sport market as it affects the major players.

The TV Companies

Few know the potential for success in weaving together television and VR like Fox Sports' Mike Davies, senior vice president of field and technical operations. Davies is the go-to man when it comes to combining live broadcast and the virtual-reality aspect of the channel's coverage, and it's paying off handsomely.

Showing Bayern Munich's league opener in VR was just the start for football. "We worked really closely with the Bundesliga on this, and they were great partners," Davies tells Bleacher Report. "We tried two big different things, as NextVR has a lot of experience with live soccer.

"One of the things we did was to add specialty commentators to the VR broadcast, so we didn't take commentary from the linear broadcast. That helped the viewer feel like they had someone co-piloting with them in the experience. The other thing we tried was showing replays at half-time in VR. I think that one of the big things we've been looking at with LiveLike is having the ability to go back and re-experience instances in VR."

Outside of football, Fox has trialled VR at the U.S. Open golf tournament, the French Open tennis tournament, Daytona 500 and a number of other events, including Monster Trucks.

"I've been playing around with VR for the past few years and have been actively involved in public-facing events," Davies said. "It has been a very quick evolution. With the advent of products that make live VR possible, utilising cell phones and Google Cardboard, it has very quickly become attainable technology in terms of being something everyone can consume, at least in theory."

The Bayern Munich VR broadcast went down well with fans and organisers alike, but the nature of football and its suitability for such coverage did raise some questions.

The length of the field was a challenge, with NextVR having to employ more cameras to cover the area. There were also resolution problems to consider, particularly when the ball was on the far side of the pitch.

"When the play was happening close to you, it was dynamite; it was like they were on your lap," Davies notes. "But when it was somewhat further away, because of the resolutions of the phone, it was very difficult to see the ball. I think that large playing field will require more resolution for people to see that."

Davies says he's keen on weaving in augmented reality elements, such as highlighting the ball or tracking player movements: "The way we're working around that is with additional cameras, tracking data and augmented reality to help you feel like you are part of the game.

"We can also integrate the linear broadcast into the VR with a Jumbotron, so if there is something that is particularly hard to see, then you can look at the screen, just as you would in the stadium."

The Clubs

For clubs like Bayern, with one of the largest and most engaged fanbases in football, the move into VR was a no-brainer. Stefan Mennerich, who heads up Bayern's digital media department, has been working on VR and 360-degree coverage with big success.

Mennerich sees VR as another avenue to bring fans together, particularly those who cannot get to matches at the Allianz Arena or live thousands of miles away in the U.S. or China, the two big target markets for the club.

In 2015, Mennerich began to see the benefits of VR after spending time at Facebook HQ and sampling the Oculus Rift, which is one of the market leaders in headsets.

"I thought that we would have to offer something like this because football lives off the possibility of fans taking part, and so I thought we have to do it," Mennerich tells Bleacher Report. "I think VR is a very good way to let the fans take part in the event and emotions.

"I spoke to [NBA teams] the Orlando Magic and Golden State Warriors, and what they are doing is very forward-thinking, and we want to establish the same experience for our fans. But I can't say what financial effect it will have in the future. It is the same as it was with social media in the beginning. You do it because it's fun, it has good content and the aim is to reach the fans."

Mennerich says the Bayern VR broadcast received encouraging feedback, though they may have been aided somewhat by the fact Bayern cantered to a 6-0 victory. While he remains cautious over the long-term viability of VR, he is optimistic for now.

"I think the first thing we have to do before making a decision is to wait until there is a big-enough audience to enjoy the content because not everyone has VR glasses or headsets," he says. "After that, once we bring in good content, we have to think about how we can monetize it."

The Experts

If Bayern need advice on monetizing VR, Brad Allen would be a good man to consult.

Spend five minutes talking to Allen, executive chairman of NextVR, and his passion for VR and sport is obvious. NextVR is one of the major players when it comes to live VR broadcasts; there aren't many sports it hasn't shown in VR. It produced a highlights package on each game of the recent NBA Finals between the Cleveland Cavaliers and Golden State Warriors, while those without headsets could view highlights via the company's app. For Allen, this is one of the most exciting times in the business.

He believes VR will provide the perfect complementary form of coverage to live television and that the experience can be a highly engaged, social one.

"What you might see in the future is the chance to build your own luxury box outfitted with your team's gear, and you could invite all your friends," he says. "You look over there, and there are all your friends, avatars of them, or maybe they want to look like someone else, but you are all sitting there in the luxury box even though you are all sitting at home in your VR glasses."

Allen sees an e-commerce aspect coming into play, with the virtual suite offering up a chance to buy team merchandise, such as shirts and personalisation options. He accepts the world has changed dramatically in recent years and believes VR is a great way to bring new fans into the game.

"You have an aging population in some places and where maybe the millennials don't care as much because the amount of entertainment on offer is unprecedented, especially with esports and video games," he says."How do you bring those fans along?

"You can do it with new technology that is unique and different and appealing to them. That's why everyone is interested in this and connecting directly with their fans to give them an experience like nothing before."

Allen likens the scene to that of when cell phones first entered the market with the big, brick-like models. The emergence of Google's Daydream, the Samsung Gear and Facebook's constant investment means the importance of VR is not being lost on anybody. The technology is only going to get better.

"Goggles will move to glasses; LG has already brought out its first version," Allen says. "I think they weigh 120 grams. They are tethered to your mobile device, but soon that will be Bluetooth, and all the power will be through the mobile device, and eventually the glasses will turn into something like Oakley wraparound glasses.

"They will have little ear buds that will come down so you get your audio, and eventually we'll have contact lenses. That will be bizarre because you are not going to know whether that person is watching something or talking to you. We've got the biggest companies in the world when you consider Google's Daydream. They're probably going to be the biggest mobile winner in the space."

Allen talks of "a hundred companies in China" that are making headsets and believes it is only a matter of time before Apple enters the VR space.

But for all the technology and millions being spent, can VR ever compete with the real thing, being at the game? How can it match the noise, the smells, the anticipation, the palpitations and the authentic matchday experience?

Can VR truly generate the stadium buzz so many football fans live for?

Perhaps not. But for those who live thousands of miles away from the stadium, it could be the ticket they have been waiting for: being able to watch Manchester United from Macau, Bayern from Brisbane or Tottenham from Tahiti.

"I don't think anything can beat being there in person just because of the energy," Allen says. "You're high-fiving somebody next to you whom you didn't even know because the team you're both fans of scored a goal. It really won't ever replace that. But what do you do about the 300 million fans who will never be able to get to the stadium? This is the closest thing they'll ever get to being there.

"We have a big strategy around Asia and China in particular; they are huge sports fans over there. People wake up at 3 a.m. to watch Premier League games because they are passionate fans like we all are."

Allen says a combination of geography and the difficulty of getting tickets to major games has driven demand for a more immersive TV viewing experience. "This is their answer. It's the virtual ticket to being there."

*All quotes and information obtained firsthand unless otherwise indicated.


Looking for Westworld? Head east – Quartz

Posted: at 2:13 am

Imagine living in a city where every inch of public space is a portal into a different world. Instead of a local park, you have a role-playing arena where citizens dress up as survivalists on the hunt for island boar. The town hall doubles as an e-sports gaming arena where people take video-game classes instead of summer school. A gamer would die happy in the real world if they could wake up here in this virtual one.

But you don't have to imagine: It's called Taihu Mermaid Small Town.

Located outside Shanghai in Wuxi, in Jiangsu province, the town is where local officials are planning to build a literal virtual-reality Westworld. Taihu will have five live-action role-play zones, a 48,000 square meter (517,000 square foot) stage area, a 71,200 sq m commercial plaza, and a digital-industry park for engineers, scientists, and R&D labs. Two more towns, Dong Hu and Beido Bay VR Village, have started similar projects, offering entrepreneurs incentives like rent-free offices, apartments, and startup capital. Taihu will cost upward of 20 billion yuan ($3 billion) and is part of a broader trend to take development outside of the already vibrant economic zones of Shenzhen and Shanghai and spread it further west.

In this way, China is future-proofing the country by dedicating entire towns to different emerging technologies, a move that's part marketing, part politics. New technologies such as artificial intelligence and virtual reality are developing by leaps and bounds, President Xi Jinping said in his 2016 B20 Summit keynote, and will be key to developing an innovative world society. Keeping to his word, Xi has increased funding opportunities in these areas, even surpassing the United States in funding AI research. If China can successfully corner the market on the defining technologies of our time, it can get a leg up on the rest of the world.

That sprint has already begun. Facebook's $3 billion acquisition of Oculus VR in 2014 set off a virtual international space race, with the US and China taking the early lead. China's new VR towns signal its commitment to charging ahead, but the question is whether they can pull it off. For example, the technology needed for the arena-sized location-based gaming promised at Taihu is not ready yet. And as one major hardware change can lead to an entire shift in the industry, it's difficult to commit to multimillion-dollar infrastructure projects.

It's not just money they need to make it work: It's people, too. Large projects like these need storytelling soft skills and a cocktail of interdisciplinary talent to brainstorm what these towns would look like. A fully functioning city will need an army of artists, researchers, designers, architects, writers, and a host of other specialties that probably haven't been invented yet. Disney Imagineers alone come from 140 different disciplines. Leading new location-based gaming companies like THE VOID, Spaces, and Nomadic have DreamWorks, Pixar, Google, and Industrial Light & Magic executives helming them.

And then there's the hardware. Although some of these VR towns have already launched, none have officially partnered with any leading headset manufacturers; HTC Vive is focusing on broader national objectives, while the Facebook-owned Oculus Rift is banned in China. According to Alvin Wang Graylin, HTC Vive's China president, HTC has partnered with China's National Tourism Board to promote VR in China but has no connection to these individual city-level projects. That's because Graylin is skeptical they will work: "The people who are involved in it are not necessarily VR experts and are using it to sell more real estate or get more business interest," Graylin says. "But if you haven't thought about how it flows into your daily lives, then it is probably not going to solve the issues."

The reality of these towns is currently far removed from what they promise. Right now, most of them are just empty rooms with headsets sprinkled around. "It's a lot more buzz than it is real right now," he says. "Trying to make every part of your life dedicated to VR technology is, again, a little too early. Maybe in 10 years or so it will make a little more sense."

China's VR cities aren't the first industry-specific towns of their kind. Similar projects have been conducted with drone cities, and China is also shifting further and further into high-tech research and development with Lingang New City, a $5.6 billion, 133 sq km satellite city near Shanghai.

Wade Shepard, author of Ghost Cities of China, has been researching China's development models for the past decade. He has noticed a new pattern in which the government invests in basic infrastructure and then invites in niche markets that specialize in developing one kind of industry. "A lot of these are the local governments' pet projects, and they want them to get attention, so they build them to be different, to be extreme," Shepard says.

This often means that local governments have to promise a lot up front to get the ball rolling, and then hope they attract the right people along the way. For example, this model was used to develop the Chinese Medical City, 30 sq km north of the Yangtze River between Shanghai and Nanjing. The area was considered a backwater in 2005, but thanks to policies that allow CMC-based pharmaceutical companies to leapfrog multiple bureaucratic levels, they were able to get their drugs directly in front of the CFDA, China's drug regulation body.

"State-level projects are not really allowed to fail," Shepard says. "These new areas kind of become self-fulfilling prophecies. Developers and investors know that the projects will be successful because the central government won't allow them to fail, so they invest and ultimately make them successful." Ten years later, this ghost city is slowly filling up with business.

It's still too early to know if Taihu Mermaid Small Town will gain the traction it needs to survive. But if they can introduce policies that attract and retain technical and creative talent, China can strengthen its foothold over an increasingly virtual world.

"They have a master plan," Shepard says. "Whether it works or not is kind of a big question."



EICC wins $780,000 grant for ag, water virtual reality – Quad-Cities Online

Posted: at 2:13 am

DAVENPORT - Eastern Iowa Community Colleges has officially received notice that it will receive a $748,218 grant from the National Science Foundation through June 2020.

Administered by EICC's Advanced Technology Environmental and Energy Center, the money is for a project titled "Water Intense: Interactive Technology Education."

"The main focus of the grant will be developing a virtual reality education curriculum for water, wastewater and agriculture technologies and conservation," said Ellen Kabat Lensch, EICC vice chancellor for workforce and economic development. "Once completed, we will share that curriculum with two-year colleges across the nation."

This is not the first time EICC has received grants for similar work. It recently completed an extensive curriculum development project in the advanced manufacturing field.

"Through our ATEEC program we have been developing curriculum for both colleges and high schools, in many different subject areas, for at least two decades," said Kabat Lensch. "We're very proud to have the National Science Foundation and others look to us for this work."

In this project, EICC will be working with its partner EON Reality, a virtual reality company. EON is an international leader in virtual and augmented reality, with a global presence in the U.S., Sweden, Singapore and England.

The college began offering a virtual reality training academy for students last year. The 11-month program provides the training students need to begin careers developing virtual reality training tools for manufacturing, health and other industries.

This project specifically focuses on water and wastewater technician jobs that are growing faster than average. It comes amid concerns about source water availability, aging infrastructure, water quality and workforce issues.

Technology training in the water/wastewater and agriculture fields requires equipment that is often prohibitively expensive, time-consuming to operate and constrained by safety concerns. That often makes it impossible for colleges to give students hands-on access to the equipment, leaving educators in these fields without practical, affordable methods for hands-on instruction.

The EICC project is designed to help make training more affordable by creating a curriculum for virtual reality-based training. With the proper equipment, students can practice the essential hands-on skills they need repeatedly without having to turn to potentially expensive, and sometimes hazardous, options out in the field.

Over time, as technologies change, the virtual reality programs can be adapted.

For more details, visit eicc.edu/ateec or eicc.edu/eon.


This startup is building AI to bet on soccer games – The Verge

Posted: at 2:13 am

Listen to Andreas Koukorinis, founder of UK sports betting company Stratagem, and you'd be forgiven for thinking that soccer games are some of the most predictable events on Earth. "They're short duration, repeatable, with fixed rules," Koukorinis tells The Verge. "So if you observe 100,000 games, there are patterns there you can take out."

The mission of Koukorinis' company is simple: find these patterns and make money off them. Stratagem does this either by selling the data it collects to professional gamblers and bookmakers, or by keeping it and making its own wagers. To fund these wagers, the firm is raising money for a £25 million ($32 million) sports betting fund that it's positioning as an investment alternative to traditional hedge funds. In other words, Stratagem hopes rich people will give it their money. The company will gamble with it using its proprietary data, and, if all goes to plan, everyone ends up just that little bit richer.

It's a familiar story, but Stratagem is adding a little something extra to sweeten the pot: artificial intelligence.

At the moment, the company uses teams of human analysts spread out around the globe to report back on the various sporting leagues it bets on. This information is combined with detailed data about the odds available from various bookmakers to give Stratagem an edge over the average punter. But, in the future, it wants computers to do the analysis for it. It already uses machine learning to analyze some of its data (working out the best time to place a bet, for example), but it's also developing AI tools that can analyze sporting events in real time, drawing out data that will help predict which team will win.

Stratagem is using deep neural networks to achieve this task, the same technology that's enchanted Silicon Valley's biggest firms. It's a good fit, since this is a tool that's well-suited for analyzing vast pots of data. As Koukorinis points out, when analyzing sports, there's a hell of a lot of data to learn from. The company's software is currently absorbing thousands of hours of sporting fixtures to teach it patterns of failure and success, and the end goal is to create an AI that can watch a half-dozen different sporting events simultaneously on live TV, extracting insights as it does.

Stratagem's AI identifies players to make a 2D map of the game

At the moment, though, Stratagem is starting small. It's focusing on just a few sports (soccer, basketball, and tennis) and a few metrics (like goal chances in soccer). At the company's London offices, home to around 30 employees including ex-bankers and programmers, we're shown the fledgling neural nets for soccer games in action. On-screen, the output is similar to what you might see from the live feed of a self-driving car. But instead of the computer highlighting stop signs and pedestrians as it scans the road ahead, it's drawing a box around Zlatan Ibrahimović as he charges at the goal, dragging defenders in his wake.

Stratagem's AI makes its calculations watching a standard broadcast feed of the match. (Pro: it's readily accessible. Con: it has to learn not to analyze the replays.) It tracks the ball and the players, identifying which team they're on based on the color of their kits. The lines of the pitch are also highlighted, and all this data is transformed into a 2D map of the whole game. From this viewpoint, the software studies matches like an armchair general: it identifies what it thinks are goal-scoring chances, or the moments where the configuration of players looks right for someone to take a shot and score.
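The geometry behind that last step can be sketched in a few lines. This is a toy illustration, not Stratagem's code: it assumes the detected pitch lines have already yielded a 3x3 projective transform (a homography), and the made-up `SCALE` matrix below stands in for a real calibration that maps broadcast pixels onto pitch coordinates.

```python
def apply_homography(H, x, y):
    """Map an image point (x, y) to pitch coordinates using a 3x3 matrix H."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # dividing by w is the perspective step

# sanity-check matrix: the identity leaves points unchanged
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# hypothetical calibration mapping a 1280x720 frame onto a 105m x 68m pitch
SCALE = [[105 / 1280, 0, 0], [0, 68 / 720, 0], [0, 0, 1]]
```

Real systems estimate the matrix from line intersections in each frame; the division by `w` is what makes the mapping perspective-aware rather than a flat rescale.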

"Football is such a low-scoring game that you need to focus on these sorts of metrics to make predictions," says Koukorinis. "If there's a shot on target from 30 yards with 11 people in front of the striker and that ends in a goal, yes, it looks spectacular on TV, but it's not exciting for us. Because if you repeat it 100 times the outcomes won't be the same. But if you have Lionel Messi running down the pitch and he's one-on-one with the goalie, the conversion rate on that is 80 percent. We look at what created that situation. We try to take the randomness out, and look at how good the teams are at what they're trying to do, which is generate goal-scoring opportunities."
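The arithmetic behind that argument is simple expected-goals bookkeeping. In this sketch the 80 percent one-on-one figure comes from the quote; the 3 percent figure for a crowded 30-yard shot is an assumed, illustrative number:

```python
# conversion rates per chance type: 0.80 is from the quote above,
# 0.03 for a crowded 30-yard shot is an assumed, illustrative figure
CONVERSION = {"one_on_one": 0.80, "long_shot_crowded": 0.03}

def expected_goals(chances):
    """Sum the conversion probability of each chance created in a match."""
    return sum(CONVERSION[c] for c in chances)
```

On these numbers, one clean one-on-one (0.80 expected goals) outweighs five spectacular long shots (0.15 combined), which is the sense in which the model "takes the randomness out."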

Whether or not counting goal-scoring opportunities is the best way to rank teams is difficult to say. Stratagem says it's a metric that's popular with professional gamblers, but they and the company weigh it with a lot of other factors before deciding how to bet. Stratagem also notes that the opportunities identified by its AI don't consistently line up with those spotted by humans. Right now, the computer gets it right about 50 percent of the time. Despite this, the company says its current betting models (which it develops for soccer, basketball, and tennis) are right more than enough times for it to make a steady return, though it won't share precise figures.

A team of 65 analysts collects data around the world

At the moment, Stratagem generates most of its data about goal-scoring opportunities and other metrics the old-fashioned way: using a team of 65 human analysts who write detailed match reports. The company's AI would automate some of this process and speed it up significantly. (Each match report takes about three hours to write.) Some forms of data-gathering would still rely on humans, however.

A key task for the company's agents is finding out a team's starting lineup before it's formally announced. (This is a major driver of pre-game betting odds, says Koukorinis, and knowing in advance helps you beat the market.) Acquiring this sort of information isn't easy. It means finding sources at a club, building up a relationship, and knowing the right people to call on match day. Chatbots just aren't up to the job yet.

Machine vision, though, is really just one element of Stratagem's AI business plan. It already applies machine learning to more mundane facets of betting, like working out the best time to place a bet in any particular market. In this regard, what the company is doing is no different from many other hedge funds, which for decades have been using machine learning to come up with new ways to trade. Most funds blend human analysis with computer expertise, but at least one is run completely by decisions generated by artificial intelligence.

However, simply adding more computers to the mix isn't always a recipe for success. There's data showing that if you want to make the most of your money, it's better to just invest in the top-performing stocks of the S&P 500 than to sign up for an AI hedge fund. That's not the best sign that Stratagem's sports-betting fund will offer good returns, especially when such funds are already controversial.

In 2012, a sports-betting fund set up by UK firm Centaur Holdings collapsed just two years after it launched, losing $2.5 million after promising investors returns of 15 to 20 percent. To critics, operations like this are just borrowing the trappings of traditional funds to make gambling look more like investing.

"I don't doubt it's great fun... but don't qualify it with the term investment."

David Stevenson, director of finance research company AltFi, told The Verge that there's nothing essentially wrong with these funds, but they need to be thought of as their own category. "I don't particularly doubt it's great fun [to invest in one] if you like sports and a bit of betting," said Stevenson. "But don't qualify it with the term investment, because investment, by its nature, has to be something you can predict over the long run."

Stevenson also notes that the AI hedge funds that are successful (those that torture the math within an inch of its life to eke out small but predictable profits) tend not to seek outside investment at all. They prefer keeping the money to themselves. "I treat most things that combine the acronym AI and the word investing with an enormous dessert spoon of salt," he said.

Whether or not Stratagem's AI can deliver insights that make sporting events as predictable as the tides remains to be seen, but the company's investment in artificial intelligence does have other uses. For starters, it can attract investors and customers looking for an edge in the world of gambling. It can also automate work that's currently done by the company's human employees and make it cheaper. As with other businesses that are using AI, it's these smaller gains that might prove to be most reliable. After all, small, reliable gains make for a good investment.


How AI detectives are cracking open the black box of deep learning – Science Magazine

Posted: at 2:13 am

By Paul Voosen | Jul. 6, 2017, 2:00 PM

Jason Yosinski sits in a small glass box at Uber's San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski's program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It's a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI's individual computational nodes (the neurons, so to speak) to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. "This responds to your face and my face," he says. "It responds to different size faces, different color faces."

No one trained this network to identify faces. Humans weren't labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski's probe had illuminated one small part of it, but overall, it remained opaque. "We build amazing models," he says. "But we don't quite understand them. And every year, this gap is going to get a bit larger."

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it's known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it "AI neuroscience."

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

GRAPHIC: G. GRULLÓN/SCIENCE

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counterfactual probes. The idea is to vary the inputs to the AI (be they text, images, or anything else) in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro's program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words (or parts of an image or molecular structure, or any other kind of data) most important in the AI's original judgment. The tests might reveal that the word "horrible" was vital to a panning or that "Daniel Day Lewis" led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network's overall insight.
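A stripped-down sketch of LIME's core move (not Ribeiro's actual implementation) looks like this: perturb the input, re-query the black box, and rank each word by how much its removal shifts the score. The keyword-based `black_box` below is a stand-in for a real sentiment model:

```python
def black_box(words):
    # stand-in classifier: sentiment score driven by a few keywords
    weights = {"brilliant": 2.0, "horrible": -3.0, "film": 0.1}
    return sum(weights.get(w, 0.0) for w in words)

def explain(words):
    """Score each word by how much the output drops when it is deleted."""
    base = black_box(words)
    return {
        w: base - black_box(words[:i] + words[i + 1:])
        for i, w in enumerate(words)
    }

review = ["a", "horrible", "but", "brilliant", "film"]
importance = explain(review)  # "horrible" dominates, negatively
```

Real LIME fits a small linear model over thousands of such perturbations rather than deleting one word at a time, but the explanation it returns has the same shape: a per-word weight on the black box's judgment.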

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, a computer scientist at Google, devised a probe that doesn't require testing the network a thousand times over: a boon if you're trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference (a black image or a zeroed-out array in place of text) and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in (outfitted with the standard medley of mugs, tables, chairs, and computers) as a Google conference room. "I can give a zillion reasons. But say you slowly dim the lights. When the lights become very dim, only the biggest reasons stand out." Those transitions from a blank reference allow Sundararajan to capture more of the network's decisions than Ribeiro's variations do. But deeper, unanswered questions are always there, Sundararajan says, a state of mind familiar to him as a parent. "I have a 4-year-old who continually reminds me of the infinite regress of 'Why?'"
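A toy version of that baseline-to-input probe can be written in a few lines. The `model` below is a made-up smooth scorer, not a real network, and the gradient is taken by finite differences; each feature's attribution is its displacement from the baseline times the average gradient along the path:

```python
def model(x):
    # stand-in "network": a smooth, nonlinear score of two features
    return x[0] * x[0] + 2.0 * x[1]

def grad(f, x, eps=1e-6):
    # finite-difference gradient: good enough for a sketch
    g = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        g.append((f(bumped) - f(x)) / eps)
    return g

def attribute(f, x, baseline, steps=100):
    """Average the gradient along the straight path baseline -> x,
    then scale by each feature's displacement from the baseline."""
    avg = [0.0] * len(x)
    for k in range(steps):
        t = (k + 0.5) / steps  # midpoint of each path segment
        point = [b + t * (xi - b) for b, xi in zip(baseline, x)]
        g = grad(f, point)
        avg = [a + gi / steps for a, gi in zip(avg, g)]
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg)]
```

A handy sanity check: the attributions sum to the model's output at the input minus its output at the blank reference, so every "jump in certainty" along the path is accounted for by some feature.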

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create explanations for their models' internal logic. The Defense Advanced Research Projects Agency, the U.S. military's blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn't the only thing on their minds, she says. "I'm not sure what it's doing," they told her. "I'm not sure I can trust it."

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. "Fear of a neural net is completely justified," he says. "What really terrifies me is, what else did the neural net learn that's equally wrong?"

Today's neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data (say, millions of pictures of dogs). Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections fire in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns, somehow, to make fine distinctions among breeds. "Using modern horsepower and chutzpah, you can get these things to really sing," Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
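The train-compare-reweight loop reads more concretely when shrunk to a single "neuron" with one weight. This is a classroom sketch of gradient descent, not a real deep net, but the mechanics (forward pass, compare with the label, push the error back into the weight) are the same in miniature:

```python
def train(pairs, lr=0.1, epochs=50):
    """Fit a single weight so that w * x approximates the labels."""
    w = 0.0  # start clueless
    for _ in range(epochs):
        for x, target in pairs:
            pred = w * x           # forward pass
            error = pred - target  # compare with the label
            w -= lr * error * x    # send the error backward, reweight
    return w

# labeled data drawn from the rule y = 2x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
learned_w = train(data)  # converges toward 2.0
```

A real network repeats exactly this update across millions of weights and examples, which is where both the power and the opacity come from.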

Gupta has a different tactic for coping with black boxes: She avoids them. Several years ago Gupta, who moonlights as a designer of intricate physical puzzles, began a project called GlassBox. Her goal is to tame neural networks by engineering predictability into them. Her guiding principle is monotonicity: a relationship between variables in which, all else being equal, increasing one variable directly increases another, as with the square footage of a house and its price.

Gupta embeds those monotonic relationships in sprawling databases called interpolated lookup tables. In essence, they're like the tables in the back of a high school trigonometry textbook where you'd look up the sine of 0.5. But rather than dozens of entries across one dimension, her tables have millions across multiple dimensions. She wires those tables into neural networks, effectively adding an extra, predictable layer of computation: baked-in knowledge that she says will ultimately make the network more controllable.
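A one-dimensional sketch shows the idea (her real tables are multi-dimensional and wired into networks). If the stored values never decrease, linear interpolation between them can never decrease either, so the monotonic constraint survives lookup. The square-footage-to-price numbers are invented for illustration:

```python
import bisect

KEYS = [500, 1000, 2000, 4000]         # square footage breakpoints
VALUES = [100.0, 180.0, 300.0, 520.0]  # prices in $1,000s, nondecreasing

def lookup(x):
    """Linearly interpolate between the bracketing table entries."""
    if x <= KEYS[0]:
        return VALUES[0]
    if x >= KEYS[-1]:
        return VALUES[-1]
    i = bisect.bisect_right(KEYS, x) - 1
    t = (x - KEYS[i]) / (KEYS[i + 1] - KEYS[i])
    return VALUES[i] + t * (VALUES[i + 1] - VALUES[i])
```

The predictability comes for free: to guarantee bigger houses never score cheaper, you only need to check that the handful of stored `VALUES` are sorted, instead of auditing every weight in a network.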

Caruana, meanwhile, has kept his pneumonia lesson in mind. To develop a model that would match deep learning in accuracy but avoid its opacity, he turned to a community that hasn't always gotten along with machine learning and its loosey-goosey ways: statisticians.

In the 1980s, statisticians pioneered a technique called a generalized additive model (GAM). It built on linear regression, a way to find a linear trend in a set of data. But GAMs can also handle trickier relationships by finding multiple operations that together can massage data to fit on a regression line: squaring a set of numbers while taking the logarithm for another group of variables, for example. Caruana has supercharged the process, using machine learning to discover those operations, which can then be used as a powerful pattern-detecting model. "To our great surprise, on many problems, this is very accurate," he says. And crucially, each operation's influence on the underlying data is transparent.
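A miniature GAM can make the transparency claim concrete. The sketch below (a classroom backfitting routine, not Caruana's tooling) fits one binned "shape" term per feature, so the prediction is just a sum of per-feature contributions that can be read off directly:

```python
def fit_gam(xs, ys, bins=4, passes=10):
    """Backfit one binned shape function per feature: y ~ sum_j f_j(x_j)."""
    n_feat = len(xs[0])
    # equal-width bin edges per feature, for simplicity
    edges = []
    for j in range(n_feat):
        col = [x[j] for x in xs]
        lo, hi = min(col), max(col)
        edges.append((lo, (hi - lo) / bins or 1.0))

    def bucket(j, v):
        lo, width = edges[j]
        return min(int((v - lo) / width), bins - 1)

    # shape functions, stored as one value per bin
    shapes = [[0.0] * bins for _ in range(n_feat)]

    def predict(x):
        return sum(shapes[j][bucket(j, x[j])] for j in range(n_feat))

    # backfitting: refit each shape to the residual left by the others
    for _ in range(passes):
        for j in range(n_feat):
            sums, counts = [0.0] * bins, [0] * bins
            for x, y in zip(xs, ys):
                b = bucket(j, x[j])
                residual = y - (predict(x) - shapes[j][b])
                sums[b] += residual
                counts[b] += 1
            shapes[j] = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return predict, shapes
```

Unlike a neural net, a suspicious pattern shows up as a readable number: each entry of `shapes` is one feature's standalone contribution, which is exactly how Caruana could spot the asthma anomaly in his pneumonia data.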

Caruana's GAMs are not as good as AIs at handling certain types of messy data, such as images or sounds, on which some neural nets thrive. But for any data that would fit in the rows and columns of a spreadsheet, such as hospital records, the model can work well. For example, Caruana returned to his original pneumonia records. Reanalyzing them with one of his GAMs, he could see why the AI would have learned the wrong lesson from the admission data. Hospitals routinely put asthmatics with pneumonia in intensive care, improving their outcomes. Seeing only their rapid improvement, the AI would have recommended the patients be sent home. (It would have made the same optimistic error for pneumonia patients who also had chest pain and heart disease.)

Caruana has started touting the GAM approach to California hospitals, including Children's Hospital Los Angeles, where about a dozen doctors reviewed his model's results. They spent much of that meeting discussing what it told them about pneumonia admissions, immediately understanding its decisions. "You don't know much about health care," one doctor said, "but your model really does."

Sometimes, you have to embrace the darkness. That's the theory of researchers pursuing a third route toward interpretability. Instead of probing neural nets, or avoiding them, they say, the way to explain deep learning is simply to do more deep learning.

If we can't ask why they do something and get a reasonable response back, people will just put it back on the shelf.

Like many AI coders, Mark Riedl, director of the Entertainment Intelligence Lab at the Georgia Institute of Technology in Atlanta, turns to 1980s video games to test his creations. One of his favorites is Frogger, in which the player navigates the eponymous amphibian through lanes of car traffic to an awaiting pond. Training a neural network to play expert Frogger is easy enough, but explaining what the AI is doing is even harder than usual.

Instead of probing that network, Riedl asked human subjects to play the game and to describe their tactics aloud in real time. Riedl recorded those comments alongside the frog's context in the game's code: "Oh, there's a car coming for me; I need to jump forward." Armed with those two languages (the player's and the code's), Riedl trained a second neural net to translate between the two, from code to English. He then wired that translation network into his original game-playing network, producing an overall AI that would say, as it waited in a lane, "I'm waiting for a hole to open up before I move." The AI could even sound frustrated when pinned on the side of the screen, cursing and complaining, "Jeez, this is hard."

Riedl calls his approach "rationalization," which he designed to help everyday users understand the robots that will soon be helping around the house and driving our cars. "If we can't ask a question about why they do something and get a reasonable response back, people will just put it back on the shelf," Riedl says. But those explanations, however soothing, prompt another question, he adds: How wrong can the rationalizations be before people lose trust?

Back at Uber, Yosinski has been kicked out of his glass box. Uber's meeting rooms, named after cities, are in high demand, and there is no surge pricing to thin the crowd. He's out of Doha and off to find Montreal, Canada, unconscious pattern recognition processes guiding him through the office maze, until he gets lost. His image classifier also remains a maze, and, like Riedl, he has enlisted a second AI to help him understand the first one.

Researchers have created neural networks that, in addition to filling gaps left in photos, can identify flaws in an artificial intelligence.

PHOTOS: ANH NGUYEN

First, Yosinski rejiggered the classifier to produce images instead of labeling them. Then, he and his colleagues fed it colored static and sent a signal back through it to request, for example, "more volcano." Eventually, they assumed, the network would shape that noise into its idea of a volcano. And to an extent, it did: That volcano, to human eyes, just happened to look like a gray, featureless mass. The AI and people saw differently.

Next, the team unleashed a generative adversarial network (GAN) on its images. Such AIs contain two neural networks. From a training set of images, the generator learns rules about image-making and can create synthetic images. A second, adversarial network tries to detect whether the resulting pictures are real or fake, prompting the generator to try again. That back-and-forth eventually results in crude images that contain features that humans can recognize.

Yosinski and Anh Nguyen, his former intern, connected the GAN to layers inside their original classifier network. This time, when told to create "more volcano," the GAN took the gray mush that the classifier learned and, with its own knowledge of picture structure, decoded it into a vast array of synthetic, realistic-looking volcanoes. Some dormant. Some erupting. Some at night. Some by day. And some, perhaps, with flaws, which would be clues to the classifier's knowledge gaps.

Their GAN can now be lashed to any network that uses images. Yosinski has already used it to identify problems in a network trained to write captions for random images. He reversed the network so that it can create synthetic images for any random caption input. After connecting it to the GAN, he found a startling omission. Prompted to imagine a bird sitting on a branch, the network, using instructions translated by the GAN, generated a bucolic facsimile of a tree and branch, but with no bird. Why? After feeding altered images into the original caption model, he realized that the caption writers who trained it never described trees and a branch without involving a bird. The AI had learned the wrong lessons about what makes a bird. "This hints at what will be an important direction in AI neuroscience," Yosinski says. It was a start, a bit of a blank map shaded in.

The day was winding down, but Yosinski's work seemed to be just beginning. Another knock on the door. Yosinski and his AI were kicked out of another glass box conference room, back into Uber's maze of cities, computers, and humans. He didn't get lost this time. He wove his way past the food bar, around the plush couches, and through the exit to the elevators. It was an easy pattern. He'd learn them all soon.


This Startup Is Lowering Companies' Healthcare Costs With AI – Entrepreneur

Posted: at 2:13 am

Healthcare costs are rising rapidly. Companies that provide health insurance for their employees have been hit with higher and higher premiums every year, with no end in sight.

One Chicago-based startup experiencing explosive growth has been tackling this very problem. This company leverages artificial intelligence and chatbot technology to help employees navigate their health insurance and use less costly services. As a result, both the employee and employer end up saving money.

Justin Holland, CEO and co-founder of HealthJoy, has a strong grasp on how chatbots are going to change healthcare and save companies money in the process. I spoke with Holland to get his take on what CEOs need to know about their health benefits and how to contain costs.

Related: Can Artificial Intelligence Identify Pictures Better than Humans?

What's the biggest problem with employer-sponsored health insurance? Why have costs gone up year after year, faster than the rate of inflation?

One of the biggest issues for companies is that health insurance is kind of like giving your employees a credit card to go to a restaurant that doesn't have any prices. They are going to order whatever the waiter suggests to them that sounds good. They'll order the steak and lobster, a bottle of wine and dessert. Employees have no connection to the actual cost of any of the medical services they are ordering. Several studies show that the majority of employees don't understand the basic insurance terms needed to navigate insurance correctly. And it's not their fault. The system is unnecessarily complex. Companies have finally started to realize that if they want to start lowering their healthcare costs, they need to start lowering their claims. The only way they are going to do that is by educating their employees and helping them navigate the healthcare system. They need to provide advocates and other services that are always available to help.

Related: The Growth of Artificial Intelligence in Ecommerce (Infographic)

I've had an advocacy service previously that was just a phone number, and I never used it. I actually forgot to use it all year and only remembered I had it when they changed my insurance plan and I saw the paperwork again. How is HealthJoy different? Is this where chatbots come in?

Phone-based advocacy services are great, but you've identified their biggest problem: no one uses them. They are cheap to provide, so a lot of companies will bundle them in with their employee benefits packages, but they have zero ROI or utilization. Our chatbot JOY is the hub for a lot of different employee benefits, including advocacy. JOY's main job is to route people to higher quality, less expensive care. She is fully supported by our concierge staff here in Chicago. They do things like call doctors' offices to book appointments, verify network participation and much more. Our app is extremely easy to use and has been refined over the last three years to get the maximum engagement and utilization for our members.

Related: Why Tech Companies Are Pumping Money Into Artificial Intelligence

I've played around with your app. You offer a lot more than just an advocacy service. I see that you can also speak with a doctor in the app.

Yes, advocacy through JOY and our concierge team really is just the glue that binds our cost saving strategies. We also integrate telemedicine within the app so an employee can speak with a doctor 24/7 for free. This is another way we save companies money. We avoid those cases where someone needs to speak with a doctor in the middle of the night for a non-emergency and ends up at the emergency room or urgent care. Avoiding one trip to the emergency room can save thousands of dollars. Telemedicine has been around for a few years but, like advocacy, getting employees to use it has always been the big issue. Since we are the first stop for employees' healthcare needs, we can redirect them to telemedicine when it fits. We actually get over 50 percent of our telemedicine consults from when a member is trying to do something else. For example, they might be trying to verify if a dermatologist is within their insurance plan. We'll ask them if they want to take a photo of an issue and have an instant consultation with one of our doctors. This is one of the reasons that employers are now seeing utilization rates that are sometimes 18X the industry standard. Redirecting all these consultations online is a huge savings to companies.
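The redirect Holland describes can be caricatured as simple intent routing. The sketch below uses hypothetical trigger words and rules of our own invention; a production chatbot like JOY would rely on trained intent classification rather than a keyword set:

```python
# hypothetical trigger words; a real system would classify intent with a model
TELEMEDICINE_HINTS = {"rash", "dermatologist", "fever", "cough"}

def route(message):
    """Send clinical-sounding requests to telemedicine, the rest to benefits."""
    words = set(message.lower().split())
    if words & TELEMEDICINE_HINTS:
        return "offer_telemedicine"
    return "benefits_lookup"
```

The business logic lives in that one branch: every request intercepted by the first return is a potential urgent-care or ER visit turned into a cheap online consult.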

Related: 4 Ways Artificial Intelligence Boosts Workforce Productivity

What other services do you provide within the app?

We actually offer a lot of services, and the list is constantly growing. Employers can integrate their existing offerings as well. Healthcare is best delivered as a conversation, and that's why our AI-powered chatbot is perfect for servicing such a wide variety of offerings. The great thing is that it's all delivered within an app that looks no more complex than Facebook Messenger or iMessage.

Right now we do medical bill reviews and prescription drug optimization. We'll find the lowest prices for a procedure, help people with their health savings accounts and push wellness information. Our platform is like an operating system for healthcare engagement. The more we can engage with a company's employees on their healthcare needs, the more we can save both the employer and the employees money.

Related: Artificial Intelligence - A Friend or Foe for Humans

It sounds like you're trying to build the Siri of healthcare, no?

In a way, yes. Basically, we are trying to help employers reduce their healthcare costs by providing their employees with an all-in-one mobile app that promotes smart healthcare decisions. JOY proactively engages employees, connects them with our benefits concierge team and redirects them to lower-cost care options like telemedicine. We integrate each client's benefits package and wellness programs to deliver a highly personalized experience that drives real ROI and improves workplace health.

So if a company wants to launch HealthJoy for its employees, does it just tell them to download your app?

We distribute HealthJoy to companies exclusively through benefits advisors, who are experts in developing plan designs and benefits strategies that work, both for employees and the bottom line. We always want HealthJoy to be integrated within a thoughtful strategy that leverages the expertise the benefits advisor provides, and we rely on them to upload current benefits and plan information.

Marsha is a growth marketing expert, business advisor and speaker who specializes in international marketing.

Read more from the original source:

This Startup Is Lowering Companies Healthcare Costs With AI - Entrepreneur


H2O.ai’s Driverless AI automates machine learning for businesses … – TechCrunch

Posted: at 2:13 am

Driverless AI is the latest product from H2O.ai aimed at lowering the barrier to making data science work in a corporate context. The tool assists non-technical employees with preparing data, calibrating parameters and determining the optimal algorithms for tackling specific business problems with machine learning.

At the research level, machine learning problems are complex and unpredictable: combining GANs and reinforcement learning in a never-before-seen use case takes finesse. But the reality is that many companies today use machine learning for relatively predictable problems, such as evaluating default rates with a support vector machine.

Even these relatively straightforward problems are tough for non-technical employees to wrap their heads around, and companies are increasingly working data science into non-traditional sales and HR processes, often attempting to train their way to innovation at considerable cost.

All of H2O.ai's products help make AI more accessible, but Driverless AI takes things a step further by automating many of the tough decisions that need to be made when preparing a model. In particular, Driverless AI automates feature engineering, the process by which key variables are selected and transformed to build a model.
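The core idea behind automated feature selection can be illustrated with a minimal sketch: fit a model, rank candidate features by its importance scores and keep only the strongest ones. This is a generic scikit-learn illustration on synthetic data, not H2O's actual implementation or API.

```python
# Illustrative sketch of automated feature selection -- not Driverless AI's
# method, just the general pattern of pruning weak features automatically.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic data standing in for a business dataset: 20 candidate
# features, only 5 of which actually carry signal.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Fit a forest and keep only features whose importance exceeds the mean.
selector = SelectFromModel(RandomForestClassifier(random_state=0)).fit(X, y)
X_reduced = selector.transform(X)

print("features kept:", X_reduced.shape[1])
```

A production system layers many more transformations (interactions, encodings, target statistics) on top of this kind of pruning, but the principle of letting a model's own signal drive the choice is the same.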

H2O built Driverless AI with popular use cases built in, but it can't solve every machine learning problem. Ideally, it can find and tune enough standard models to automate at least part of the long tail.
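"Finding and tuning standard models" boils down to a trial-and-error loop: cross-validate a handful of common estimators and keep the best scorer. The sketch below shows that pattern with scikit-learn on synthetic data; it is a generic illustration of the AutoML idea, not Driverless AI's implementation.

```python
# Minimal model-selection loop: score several standard estimators with
# cross-validation and pick the winner. A real AutoML system would also
# search hyperparameters and preprocessing pipelines.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Mean 5-fold cross-validation accuracy per candidate.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)

print("best model:", best)
```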

The company alluded to today's release back in January when it launched Deep Water, a platform allowing its customers to take advantage of deep learning and GPUs.

We're still in the very early days of machine learning automation. Google CEO Sundar Pichai generated a lot of buzz at this year's I/O conference when he detailed the company's efforts to create an AI tool that can automatically select the best model and features for a machine learning problem through trial, error and a ton of compute.

Driverless AI is an early step in the journey of democratizing and abstracting AI for non-technical users. You can download the tool and start experimenting here.

The rest is here:

H2O.ai's Driverless AI automates machine learning for businesses ... - TechCrunch
