A Clever AI-Powered Robot Learns to Get a Grip – WIRED

You remember claw machines, those confounded scams that bilked you out of your allowance. They were probably the closest thing you knew to an actual robot, really. They're not, of course, but they do have something very important in common with legit robots: They're terrible at handling objects with any measure of dexterity.

You probably take for granted how easy it is to, say, pick up a piece of paper off a table. Now imagine a robot pulling that off. The problem is that a lot of robots are taught to do individual tasks really well, with hyper-specialized algorithms. Obviously, you can't get a robot to handle everything it'll ever encounter by teaching it how to hold objects one by one.

Nope, that's an AI's job. Researchers at the University of California, Berkeley have loaded a robot with an artificial intelligence so it can figure out how to robustly grip objects it's never seen before, no hand-holding required. And that's a big deal if roboticists want to develop truly intelligent, dexterous robots that can master their environments.

The secret ingredient is a library of point clouds representing objects, data that the researchers fed into a neural network. "The way it's trained is on all those samples of point clouds, and then grasps," says roboticist Ken Goldberg, who developed the system along with postdoc Jeff Mahler. "So now when we show it a new point cloud, it says, 'This here is the grasp, and it's robust.'" Robust being the operative word. The team wasn't just looking for ways to grab objects, but the best ways.


Using this neural network and a Microsoft Kinect 3-D sensor, the robot can eyeball a new object and determine what would be a robust grasp. When it's confident it's worked that out, it can execute a good grip 99 times out of 100.

"It doesn't actually even know anything about what the object is," Goldberg says. "It just says it's a bunch of points in space, here's where I would grasp that bunch of points. So it doesn't matter if it's a crumpled-up ball of tissue or almost anything."
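The selection logic described above can be sketched in a few lines. This is purely illustrative and not the team's actual code: assume the trained network has already scored each candidate grasp, and the robot keeps only the most robust one if it clears a confidence bar.

```python
# Illustrative sketch only: the robustness scores would come from the
# trained grasp-quality network; here they are supplied directly.

def best_grasp(candidates, threshold=0.95):
    """candidates: (grasp_id, predicted_robustness) pairs for one point cloud.
    Return the top grasp if its robustness clears the threshold, else None."""
    if not candidates:
        return None
    grasp_id, score = max(candidates, key=lambda c: c[1])
    return grasp_id if score >= threshold else None

# Three candidate grasps on a new object: only g2 is confidently robust.
print(best_grasp([("g1", 0.62), ("g2", 0.97), ("g3", 0.88)]))  # g2
print(best_grasp([("g1", 0.40), ("g2", 0.55)]))                # None
```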

Imagine a day when robots infiltrate our homes to help with chores, not just vacuuming like Roombas but doing dishes and picking up clutter so the elderly don't fall and find themselves unable to get up. The machines are going to come across a whole lot of novel objects, and you, dear human, can't be bothered to teach them how to grasp the things. By teaching themselves, they can better adapt to their surroundings. And precision is pivotal here: If a robot is doing dishes but can only execute robust grasps 50 times out of 100, you'll end up with one embarrassed robot and 50 busted dishes.

Here's where the future gets really interesting. Robots won't be working and learning in isolation; they'll be hooked up to the cloud so they can share information. So say one robot learns a better way to fold a shirt. It can then distribute that knowledge to other robots like it, and even to entirely different kinds of robots. In this way, connected machines will operate not only as a global workforce, but as a global mind.

At the moment, though, robots are still getting used to our world. And while Goldberg's new system is big news, it ain't perfect. Remember that the robot is 99 percent precise when it's already confident it can manage a good grip. Sometimes it goes for the grasp even when it isn't confident, or it just gives up. "So one of the things we're doing now is modifying the system," Goldberg says, "and when it's not confident, rather than just giving up, it's going to push the object or poke it, move it some way, look again, and then grasp."
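The poke-and-retry behavior Goldberg describes amounts to a simple control loop. Here is a hedged sketch in which the sensing and scoring functions are stand-ins, not the team's actual system:

```python
def grasp_or_poke(evaluate, poke_and_sense, scene, max_pokes=3, threshold=0.95):
    """Grasp when confident; otherwise poke the object, look again, retry."""
    for _ in range(max_pokes + 1):
        grasp, confidence = evaluate(scene)
        if confidence >= threshold:
            return ("grasp", grasp)
        scene = poke_and_sense(scene)  # nudge the object and re-sense it
    return ("give_up", None)

# Toy demo: each poke yields a better view, so confidence rises.
scores = iter([0.70, 0.85, 0.97])
print(grasp_or_poke(lambda s: ("top-down", next(scores)),
                    lambda s: s, scene=None))  # ('grasp', 'top-down')
```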

Fascinating stuff. Now if only someone could do something about those confounded claw machines.


4 AI & Cybersecurity Stocks to Gain From the New Normal – Yahoo Finance

With technological advancements driving businesses and customers online, AI and cybersecurity seem to have become a necessity. Additionally, the remote working trend due to the COVID-19 pandemic has boosted demand for cybersecurity.

AI in particular promises to change cybersecurity in the coming years by possibly enhancing both cyber defense and crime.

Is AI Influencing Cybersecurity?

These days, almost everything is just a click away, be it clothes, groceries, investing in stocks, or just catching up with friends. However, as easy as it sounds, it comes with its own share of security threats. This has driven businesses to deploy cyber AI to protect not only themselves but also their customers.

In fact, cybersecurity has been a concern since the birth of the Internet. And as technological evolution continues, the risks keep rising. In 2014, Yahoo! encountered a cyberattack affecting 500 million user accounts, with 200 million usernames sold. This holds the record as the largest cyberattack on a single company to date.

Now, what role does AI play in cybersecurity? AI can identify and prevent cyberattacks. In fact, AI ensures minimum human involvement in cybersecurity affairs, reducing the scope for errors. AI-based websites can detect any sort of unauthorized entry, making it difficult for hackers to gain access.

However, just identifying the threat cannot save a website from cyber attackers. AI can now help prevent a cyberattack by thinking the way a hacker does to break a security code and gain entry to the target website. Hence, before the hacker identifies a weak point of attack, AI can assess the situation and make the necessary changes.
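As a toy illustration of the kind of detection involved (not any vendor's actual product), a system can flag traffic that deviates sharply from its own baseline. Real cyber-AI replaces this simple z-score rule with trained models:

```python
from statistics import mean, stdev

def flag_anomalies(requests_per_minute, z_cutoff=2.0):
    """Return indices of minutes whose request rate deviates sharply from
    the series' own baseline (a stand-in for a trained detector)."""
    mu, sigma = mean(requests_per_minute), stdev(requests_per_minute)
    return [i for i, r in enumerate(requests_per_minute)
            if sigma > 0 and abs(r - mu) / sigma > z_cutoff]

traffic = [52, 48, 50, 51, 49, 50, 400, 47]  # one burst: possible attack
print(flag_anomalies(traffic))  # [6]
```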

Additionally, AI doesn't require breaks because it is programmed to deal with high-risk tasks without any concern. Furthermore, AI has undergone rapid change and progress in recent years. It can now not only offer technical assistance to cybersecurity experts but also track down cybersecurity breaches and inform appointed personnel to take apt measures to rectify them in no time.

The growth of AI in cybersecurity has been commendable in recent years with massive digitalization across the globe. Further, the coronavirus pandemic has forced employers to initiate work-from-home practices, driving adoption of IoT and growth in the number of connected devices.

Hence, rising instances of cyberattacks, growing concerns about data transfer, and the vulnerability of wireless networks to attacks on data integrity have expanded the scope of AI in cybersecurity. Annual spending on vulnerability management activities increased to $1.4 million in 2019, an average increase of $282,750 over 2018.

4 Stocks to Watch Out For

Per a Capgemini Research Institute study, one in five cybersecurity firms was employing AI before 2019, but adoption is likely to skyrocket by the end of 2020. In fact, 63% of the firms are planning to deploy AI in their solutions.

Per MarketsandMarkets research, the global AI in cybersecurity market is projected to reach $38.2 billion by 2026 from $8.8 billion in 2019, at a CAGR of 23.3%. In fact, research suggests that the market will be valued at $12 billion by the end of 2020.
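The quoted figures are internally consistent: compounding $8.8 billion at 23.3% annually over the seven years from 2019 to 2026 lands near $38.2 billion.

```python
# Sanity check of the quoted CAGR over 2019 -> 2026 (seven years).
start_2019, cagr, years = 8.8, 0.233, 7  # $B, annual growth rate, years
projected_2026 = start_2019 * (1 + cagr) ** years
print(round(projected_2026, 1))  # 38.1, in line with the quoted $38.2 billion
```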

Given the immense scope, it is prudent to invest in AI-driven cybersecurity stocks. We have, thus, shortlisted four such stocks that are poised to grow.

CrowdStrike Holdings, Inc. CRWD provides cloud-native endpoint protection software. The Falcon platform automatically investigates threats and takes the guesswork out of threat analysis. The company has an expected earnings growth rate of 94.4% for the current quarter against the Zacks Internet - Software industry's projected decline of more than 100%.

The Zacks Consensus Estimate for its current-year earnings has climbed 50% over the past 60 days. CrowdStrike holds a Zacks Rank #2 (Buy). You can see the complete list of today's Zacks #1 (Strong Buy) Rank stocks here.

Fortinet, Inc. FTNT provides security solutions for all parts of IT infrastructure. The company's AI-based product, FortiWeb, is a web application firewall that uses machine learning and two layers of statistical probabilities to accurately detect threats.

The company has an expected earnings growth rate of 12.6% for the current year against the Zacks Security industry's estimated decline of 15.3%. The Zacks Consensus Estimate for its current-year earnings has moved up 6.1% over the past 60 days. Fortinet holds a Zacks Rank #2.


Palo Alto Networks, Inc. PANW offers firewalls and cloud security for threat detection and endpoint protection. The company, which belongs to the Zacks Security industry, has an expected earnings growth rate of 15.2% for the next quarter. The Zacks Consensus Estimate for its current-year earnings has moved up 6.5% over the past 60 days. Palo Alto Networks carries a Zacks Rank #3 (Hold).

Check Point Software Technologies Ltd. CHKP provides computer and network security solutions to governments and enterprises. Its IntelliStore provides customizable threat intelligence, letting companies and organizations choose real-time threat intelligence sources that fit their needs.

The company has an expected earnings growth rate of 3.4% for the current year against the Zacks Security industry's estimated decline of 15.3%. The Zacks Consensus Estimate for its next-year earnings has moved up 0.2% over the past 60 days. Check Point Software holds a Zacks Rank #3.

Breakout Biotech Stocks with Triple-Digit Profit Potential

The biotech sector is projected to surge beyond $775 billion by 2024 as scientists develop treatments for thousands of diseases. They're also finding ways to edit the human genome to literally erase our vulnerability to these diseases.

Zacks has just released "Century of Biology: 7 Biotech Stocks to Buy Right Now" to help investors profit from 7 stocks poised for outperformance. Our recent biotech recommendations have produced gains of +50%, +83% and +164% in as little as 2 months. The stocks in this report could perform even better.

See these 7 breakthrough stocks now>>

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report:

Check Point Software Technologies Ltd. (CHKP): Free Stock Analysis Report
Fortinet, Inc. (FTNT): Free Stock Analysis Report
Palo Alto Networks, Inc. (PANW): Free Stock Analysis Report
CrowdStrike Holdings Inc. (CRWD): Free Stock Analysis Report

To read this article on Zacks.com click here.

Zacks Investment Research


Wyze will try pay-what-you-want model for its AI-powered person detection – The Verge

Smart home company Wyze is experimenting with a rather unconventional method for providing customers with artificial intelligence-powered person detection for its smart security cameras: a pay-what-you-want business model. On Monday, the company said it would provide the feature for free as initially promised, after it had to disable it due to an abrupt end to its licensing deal with fellow Seattle-based company Xnor.ai, which was acquired by Apple in November of last year. But Wyze, taking a page out of the old Radiohead playbook, is hoping some customers might be willing to chip in to help it cover the costs.

AI-powered person detection uses machine learning models to train an algorithm to differentiate between the movement of an inanimate object or animal and that of a human being. It's now a staple in the smart security camera market, but it remains rather resource-intensive to provide, and expensive as a result. In fact, it's more expensive than Wyze first realized. That's a problem after the company promised last year that when its own version of the feature was fully baked, it would be available for free, without requiring a monthly subscription as many of its competitors do for similar AI-powered functions.
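In practice, a feature like this usually ends with post-filtering a detector's output: keep only clips in which a "person" was detected with high confidence, so pets and swaying branches don't trigger alerts. The labels and scores below are invented for illustration; this is not Wyze's code.

```python
def person_events(detections, min_score=0.8):
    """Keep only confident 'person' detections from a motion-event clip."""
    return [d for d in detections
            if d["label"] == "person" and d["score"] >= min_score]

clips = [
    {"label": "person", "score": 0.93},
    {"label": "cat",    "score": 0.88},   # motion, but not a person
    {"label": "person", "score": 0.41},   # likely a false positive
]
print(person_events(clips))  # only the 0.93 person detection survives
```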

Yet now Wyze says it's going to try a pay-what-you-want model in the hopes it can use customer generosity to offset the bill. Here's how the company broke the good (and bad) news in its email to the customers eligible for the promotion, which includes those who were enjoying person detection on Wyze cameras up until the Xnor.ai contract expired at the end of the year:

Over the last few months, we've had this service in beta testing, and we're happy to report that the testing is going really well. Person Detection is meeting our high expectations, and it's only going to keep improving over time. That's the good news.

The bad news is that it's very expensive to run, and the costs are recurring. We greatly under-forecasted the monthly cloud costs when we started working on this project last year (we've also since hired an actual finance guy). The reality is we will not be able to absorb these costs and stay in business.

Wyze says that while it would normally charge a subscription for a software service that involves recurring monthly costs, it told about 1.3 million of its customers that it would not charge for the feature when it did arrive, even if it required the company to pay for pricey cloud-based processing. "We are going to keep our promise to you. But we are also going to ask for your help," Wyze writes.

It sounds risky, and Wyze admits that the plan may not pan out:

When Person Detection for 12-second event videos officially launches, you will be able to name your price. You can select $0 and use it for free. Or you can make monthly contributions in whatever amount you think it's worth to help us cover our recurring cloud costs. We will reevaluate this method in a few months. If the model works, we may consider rolling it out to all users and maybe even extend it to other Wyze services.

If Wyze is able to recoup its costs by relying on the goodwill of customers, it could set the company up to try more experimental pricing models. After all, radical pricing strategies and good-enough quality is how Wyze became a bit of a trailblazer in the smart home camera industry, and it could work out for them again if customers feel like the feature works so well it warrants chipping in a few bucks a month.


Discover the Power of VUNO’s AI Solutions at ECR 2020 – WFMZ Allentown

SEOUL, South Korea, July 14, 2020 /PRNewswire/ -- VUNO Inc., a South Korean artificial intelligence (AI) developer, announced that it will attend the European Congress of Radiology 2020 (ECR 2020), to be held from July 15 to July 19, 2020, to showcase the power of its flagship AI radiology solutions that have recently received the CE mark. As part of its ambitious plan to go global, VUNO is set to seize this opportunity to further expand its network of sales prospects and business partners from all around the world.

ECR, one of the leading events in the field of radiology, will be joined by industry experts, medical and healthcare professionals, modality manufacturers, and solutions developers. In light of the COVID-19 pandemic, the congress organizers have decided to opt for an online-only event. With about 2,100 leading industry representatives onboard, the ECR 2020 exhibition can be accessed from 8:00 a.m. CEST on July 15 until 11:55 p.m. on July 21. Free registration and participation are available on the ECR 2020 Virtual Exhibition website (https://ecr2020.expo-ip.com/).

VUNO's exhibition at the event will include VUNO Med-LungCT AI, which detects, locates, and quantifies pulmonary nodules on CT images, and VUNO Med-Chest X-Ray, which assists in reading common thoracic abnormalities on chest radiographs. VUNO Med-DeepBrain is a diagnostic support tool for degenerative brain diseases through brain parcellation and quantification on brain MR images.

On top of the three solutions to be showcased at this event, VUNO has two other solutions that have gained CE certification recently: VUNO Med-BoneAge and VUNO Med-Fundus AI. All five products can now be marketed in countries where the CE marking is accepted.

VUNO Med solutions are designed to be device- and environment-agnostic, offering seamless integration with any PACS and/or EMR systems. They are offered via cloud servers, allowing users to analyze images anytime, anywhere with Internet access, and are also available through on-premise installations.

VUNO has the highest number of clients in its field, with more than 120 medical institutions in Korea alone. With successes under its belt, rooted in effectiveness and safety proven through clinical trials and practice, the company is embarking on a new endeavor to demonstrate its technical prowess in overseas markets by signing partnerships with global healthcare giants like M3, a Sony subsidiary and Japan's largest medical data platform company.

For more detailed information on VUNO, visit https://www.vuno.co/.


How to create real competitive advantage through business analytics and ethical AI – UNSW Newsroom

Some Australian organisations, which either feature large data science teams or are born digital with a data-driven culture, have advanced analytics capabilities (such as undertaking predictive and prescriptive analytics). For example, dedicated data science teams in marketing will build neural network models to predict customer attrition and the success of cross-selling and up-selling. However, most organisations that use data in their decision-making primarily rely on descriptive analytics.

While descriptive analytics may seem simplistic compared to creating predictions and running optimisation algorithms, it offers firms tremendous value by providing an accurate and up-to-date view of the business. For most organisations, analytics (which may even be labelled as advanced analytics) takes the form of dashboards; and, for many organisational tasks, understanding trends and the current state of the business is sufficient to make evidence-based decisions.

Moreover, dashboards provide a foundation for creating a more data-driven culture and are the first step for many organisations in their analytics journey. That said, by strictly relying on dashboards, organisations are missing opportunities for leveraging predictive analytics to create competitive advantages.

Despite the importance of analytics, firms are at different stages of their analytics journey. Some firms utilise suites of complex artificial intelligence technologies, while many others still use Microsoft Excel as their main platform for data analysis. Unfortunately, the process of obtaining organisational value from analytics is far from trivial, and the organisational benefits provided by analytics are almost equalled by the challenges of successful implementation.

My colleague Prof. Richard Vidgen recently undertook a Delphi study to reach a consensus on the importance of key challenges in creating value from big data and analytics. Managers overwhelmingly agreed that there were two significant sets of issues. The first is the wealth of issues related to data: assuring data quality, timeliness and accuracy, linking data to key decisions, finding appropriate data to support decisions, and issues pertaining to databases.

The second set of challenges pertains to people: building data skills in the organisation, upskilling current employees to utilise analytics, massive skill shortages across both analytics and the IT infrastructure supporting analytics, and building a corporate data culture (which includes integrating data into the organisation's strategy). While issues related to data quality are improving, the skill gap and lack of emphasis on data-driven decision making are systemic issues that will require radical changes in Australian education and Australian corporate culture.

Although there are many interesting trends in terms of the advancements of analytics like automated machine learning platforms (such as DataRobot and H2O), the greatest challenge with analytics and AI is going to be ensuring their ethical use.

Debate and governance around data usage are still in their infancy, and with time, analytics, black-box algorithms, and AI are going to come under increasing scrutiny. For example, Australia's recent guidelines on ethical AI, where AI can be thought of as a predictive outcome created by an algorithm or model, include:

Achieving these goals with standard approaches to analytics is a challenging enough endeavour for organisations, due to the black-box nature of analytics, algorithms and AI. However, decisions driven by algorithms and analytics are now increasingly interacting with other organisations' AI, which makes it even more difficult to predict the fairness and explainability of outcomes. For example, AI employed by e-commerce retailers to set prices can participate in collusion, driving up prices by mirroring and learning from competing AIs' behaviours, without human interference, knowledge, or explicit programming for collusion.

As predictive analytics and AI will fundamentally transform almost all industries, it is critical that organisations adapt ethically. Organisations should implement frameworks to guide the use of AI and analytics, which explicitly incorporate fairness, transparency, explainability, contestability, and accountability.

A significant aspect of undertaking ethical AI and ethical analytics is optimising and selecting models and algorithms that incorporate ethical objectives. Analytics professionals typically select models based on their ability to make successful predictions on validation and hold-out data (that is, data the model has never seen). However, rather than simply looking at prediction accuracy, analysts should incorporate issues related to transparency. For example, decision trees, which are collections of if-then rules that connect branches of a tree, have simple structures and interpretations. They are highly visual, which enables analysts to easily convey to stakeholders the underlying logic and features that drive predictions.

Moreover, business analytics professionals can carefully scrutinise the nodes of a decision tree to determine whether the criteria for the decision rules built into the model are ethical. Thus, rather than using advanced neural networks, which often provide higher accuracy than models like decision trees but are effectively black boxes, analysts should consider sacrificing slightly on performance in favour of the transparency offered by simpler models.
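The point about auditability can be made concrete: a decision tree is, in the end, nothing but nested if-then rules that a reviewer can read line by line. The features and thresholds below are invented for illustration, not drawn from any real model.

```python
def approve_loan(applicant):
    """A hand-written stand-in for a small decision tree. Every branch is
    a readable rule, so an analyst can check that no branch conditions on
    an ethically problematic attribute."""
    if applicant["income"] >= 50_000:
        if applicant["debt_ratio"] < 0.4:
            return "approve"
        return "review"
    if applicant["years_employed"] >= 5:
        return "review"
    return "decline"

print(approve_loan({"income": 60_000, "debt_ratio": 0.2, "years_employed": 1}))
# approve
```

A neural network making the same decision could well be more accurate, but there would be no equivalent of reading its branches to audit the rules it learned.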

AGSM Scholar and Senior Lecturer Sam Kirshner, UNSW Business School.

Sam Kirshner is a Senior Lecturer in the School of Information Systems and Technology Management at UNSW Business School and is a member of the multidisciplinary Behavioural Insights for Business and Policy and Digital Enablement Research Networks.

For the full story, visit UNSW Business School's BusinessThink.


A Groundbreaking New AI Taught Itself to Speak in Just a Few Hours – Futurism

Giving Machines a Voice

Last year, Google successfully gave a machine the ability to generate human-like speech through its voice synthesis program WaveNet. Powered by Google's DeepMind artificial intelligence (AI) deep neural network, WaveNet produced synthetic speech from given texts. Now, Chinese internet search company Baidu has developed the most advanced speech synthesis program yet, called Deep Voice.

Developed in Baidu's AI research lab in Silicon Valley, Deep Voice represents a big breakthrough in speech synthesis technology by largely doing away with the behind-the-scenes fine-tuning typically necessary for such programs. As such, Deep Voice can learn how to talk in a matter of hours, with virtually no help from humans.

Deep Voice uses a relatively simple method: through deep-learning techniques, it breaks text down into phonemes, the smallest perceptually distinct units of sound. A speech synthesis network then reproduces these sounds. The need for fine-tuning is greatly reduced because every stage of the process relies on deep-learning techniques; all the researchers needed to do was train the algorithm.
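The pipeline shape described above can be caricatured in a few lines. The pronunciation dictionary and "synthesis" step here are toy placeholders, not Baidu's models, which replace both stages with trained neural networks:

```python
# Toy two-stage pipeline: text -> phonemes -> per-phoneme audio.
G2P = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}

def text_to_phonemes(text):
    """Stand-in for the grapheme-to-phoneme model."""
    return [p for word in text.lower().split() for p in G2P.get(word, [])]

def synthesize(phonemes):
    """Stand-in for the neural audio-synthesis network."""
    return [f"<waveform:{p}>" for p in phonemes]

phonemes = text_to_phonemes("Hello world")
print(phonemes)  # ['HH', 'AH', 'L', 'OW', 'W', 'ER', 'L', 'D']
print(len(synthesize(phonemes)))  # 8
```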

"For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original," the Baidu researchers wrote in a study published online. "By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise."

Text-to-speech systems aren't entirely new. They're present in many of the world's modern gadgets and devices, from simpler ones like talking clocks and phone answering systems to more complex versions like those in navigation apps. These, however, have been built using large databases of speech recordings. As a result, the speech generated by traditional text-to-speech systems doesn't flow as seamlessly as actual human speech.

Baidu's work on Deep Voice is a step toward achieving human-like speech synthesis in real time, without using pre-recorded responses. Baidu's Deep Voice puts phonemes together in such a way that the result sounds like actual human speech. "We optimize inference to faster-than-real-time speeds, showing that these techniques can be applied to generate audio in real-time in a streaming fashion," the researchers said.

However, there are still certain variables that their new system cannot yet control: the stresses on phonemes and the duration and natural frequency of each sound. Once perfected, control of these variables would allow Baidu to change the voice of the speaker and, possibly, the emotions conveyed by a word.

At the very least, this would be computationally demanding, limiting how much Deep Voice can be used for real-time speech synthesis in the real world, as the Baidu researchers explained.

In the future, better synthesized speech systems could be used to improve the assistant features found in smartphones and smart home devices. At the very least, it would make talking to your devices feel more real.


AI for Quitting Tobacco Initiative – World Health Organization

Meet Florence, WHO's first virtual health worker, designed to help the world's 1.3 billion tobacco users quit. She uses artificial intelligence to dispel myths around COVID-19 and smoking and helps people develop a personalized plan to quit tobacco.

Users can rely on Florence as a trusted source of information to achieve their quit goals. She can also refer tobacco users to national toll-free quit lines or apps that can help with their quit journey. You can interact with her via video or text.

Around 60% of tobacco users worldwide say they want to quit, but only 30% of them have access to the tools they need, like counsellors, to take action.

Quitting smoking is more important than ever as evidence reveals that smokers are more vulnerable than non-smokers to developing a severe case of COVID-19.

Florence develops your quit plan using the 'STAR' method:

Set a quit date. It is important to set a quit date as soon as possible. Giving yourself a short period to quit will keep you focused and motivated to achieve your goal.

Tell your friends, family, and coworkers. It is important to share your goal to quit with those with whom you interact frequently.

Anticipate challenges to the upcoming quit attempt, particularly during the critical first few weeks, which are the hardest due to potential nicotine withdrawal symptoms as well as the obstacles presented by breaking any habit.

Remove tobacco products from your environment. It's best to rid yourself of such temptations by making your house smoke-free, avoiding smoking areas, and asking your peers not to smoke around you.

Florence was created with technology developed by Soul Machines, a digital people company based in San Francisco and New Zealand, with support from Amazon Web Services and Google Cloud.


The AI Revolution Is Here – A Podcast And Interview With Nate Yohannes – Forbes

Nate's perspective on AI being built for everybody on the planet springs from one of the most unique foundations possible. He's the offspring of a revolutionary who stepped on a landmine in 1978 fighting for democracy in Eritrea, on the Horn of Africa, one of the worst violators of human rights in the world. From that beginning, Nate went on to become a lawyer who was appointed by President Obama to serve on behalf of the White House. His father losing much of his vision in the landmine attack was the catalyst for Nate's passion for AI computer vision: computers reasoning over people, places, and things.

Nate Yohannes AI, Microsoft

His role at Microsoft AI merges the worlds of business and product strategy, and he works closely with Microsoft's AI Ethics & Society team. Nate believes that Microsoft's leadership decision to embed Ethics & Society into engineering teams is one of its most durable advantages: designing products with the filter of ethics up front is unique and valuable for everyone. AI is the catalyst for the fourth industrial revolution, the most significant technological advancement thus far, and AI has the potential to solve incredible challenges for all of humanity (climate, education, design, customer experiences, governance, food, etc.). The biggest concern could be the potential for unexpected and unintended consequences when building and deploying AI products, very similar to the unintended consequences we see today with social media companies and the misuse of privacy and data. AI will change the world; how it does so is our choice. It's critical to have appropriate representation at decision-making tables when building AI products, to mitigate potentially thousands or millions of unexpected consequences, from gender and race to financial, health, and even location-based data. Solving this challenge of unexpected consequences and incorporating inclusivity shouldn't hinder innovation or the ambition to maximize revenue; instead, it should enhance them, creating products with the most extensive consumer base possible: everyone. It's an inspiring conversation about how to make the possible a reality with a different mindset.

This should be a guiding light for how all companies develop AI for the highest good (not just the greater good). If every company and even the government will be a digital platform by 2030 (OK, 75% of us will be), then AI will sit at the center of these organizations.

Nate Yohannes Speaking to AI.

Doing it the right way is part of the puzzle. Thinking more about how it can be applied to the whole world is the tantalizing promise. Nate Yohannes is a Principal Program Manager for Mixed Reality & AI Engineering at Microsoft. He recently was a Director of Corporate Business Development & Strategy for AI, IoT & Intelligent Cloud. He's on the Executive Advisory Board of the Nasdaq Entrepreneurial Center and an Expert for MIT's Inclusive Innovation Challenge. From 2014 to 2017, he served in President Obama's administration as the Senior Advisor to the Head of Investments and Innovation, US Small Business Administration, and on the White House Broadband Opportunity Council.

Nate was selected for the inaugural White House Economic Leadership class. He started his career as the Assistant General Counsel at the Money Management Institute. He is a graduate of the State University of New York College at Geneseo and of the University at Buffalo School of Law, where he was a Barbara and Thomas Wolfe Human Rights Fellow. He's admitted to practice law in New York State.

Read more from the original source:

The AI Revolution Is Here - A Podcast And Interview With Nate Yohannes - Forbes

AI, machine learning to impact workplace practices in India: Adobe – YourStory.com

Over 60 percent of marketers in India believe new-age technologies are going to impact their workplace practices and consider it the next big disruptor in the industry, a new report said on Thursday.

According to a global report by software major Adobe that involved more than 5,000 creative and marketing professionals across the Asia Pacific (APAC) region, over 50 percent of respondents did not feel concerned by artificial intelligence (AI) or machine learning.

However, 27 percent in India said they were extremely concerned about the impact of these new technologies.

Creatives in India are concerned that new technologies will take over their jobs. But respondents suggested that as creatives embrace AI and machine learning, they will be able to increase their value through design thinking.

"While AI and machine learning provide an opportunity to automate processes and save creative professionals from day-to-day production, it is not a replacement to the role of creativity," said Kulmeet Bawa, Managing Director, Adobe South Asia.

"It provides more leeway for creatives to spend their time focusing on what they do best: being creative, scaling their ideas and allowing them time to focus on ideation and creativity," Bawa added.

A whopping 59 percent find it imperative to update their skills every six months to keep up with the industry developments.

The study also found that merging online and offline experiences was the biggest driver of change for the creative community, followed by the adoption of data and analytics, and the need for new skills.

It was revealed that customer experience is the number one investment by businesses across APAC.

Forty-two percent of creatives and marketers in India have recently implemented a customer experience programme, while 34 percent plan to develop one in the next year.

The study noted that social media and content were the key investment areas by APAC organisations, and had augmented the demand for content. However, they also presented challenges.

Budgets were identified as the biggest challenge, followed by conflicting views and internal processes. "Data and analytics become their primary tool to ensure that what they are creating is relevant, and delivering an amazing experience for customers," Bawa said.


How to make AI less racist – Bulletin of the Atomic Scientists

CaptionBot, an AI program that applies captions to images, mistakenly described the members of the hip hop group the Wu-Tang Clan as a group of baseball players. This type of mistake often occurs because of the way certain demographics are represented in the data used to train an AI system. Credit: Walter Scheirer/CaptionBot.

In 2006, a trio of artificial intelligence (AI) researchers published a useful resource for their community: a massive dataset consisting of images representing over 50,000 different noun categories that had been automatically downloaded from the internet. The dataset, dubbed Tiny Images, was an early example of the big data strategy in AI research, whereby an algorithm is shown as many examples as possible of what it is trying to learn in order for it to better understand a given task, like recognizing objects in a photo. By using small 32-by-32 pixel images, the Tiny Images researchers were relying on the ability of computers to exhibit the same remarkable tolerance as the human visual system and recognize even degraded images. They also, however, may have unintentionally succeeded in recreating another human characteristic in AI systems: racial and gender bias.

A pre-print academic paper revealed that Tiny Images included several image categories labeled with racial and misogynistic slurs. For instance, a derogatory term for sex workers was one category; a slur for women, another. There was also a category of images labeled with a racist term for Black people. Any AI system trained on the dataset might recreate the biased categories as it sorted and identified objects. Tiny Images was such a large dataset, and its contents so small, that it would have been a herculean, perhaps impossible, task to perform the quality control needed to remove the offensive category labels and images. The researchers, Antonio Torralba, Rob Fergus, and Bill Freeman, made waves in the artificial intelligence world when they announced earlier this summer that they would be pulling the whole thing from public use.

"Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community, precisely those that we are making efforts to include," Torralba, Fergus, and Freeman wrote. "It also contributes to harmful biases in AI systems trained on such data."

Developers used the Tiny Images data as raw material to train AI algorithms that computers use to solve visual recognition problems and recognize people, places, and things. Smartphone photo apps, for instance, use similar algorithms to automatically identify photos of skylines or beaches in your collection. The dataset was just one of many used in AI development.

While the pre-print's shocking findings about Tiny Images were troubling in their own right, the issues the paper highlighted are also indicative of a larger problem facing all AI researchers: the many ways in which bias can creep into the development cycle.

The big data era and bias in AI training data. The internet reached an inflection point in 2008 with the introduction of Apple's iPhone 3G. This was the first smartphone with viable processing power for general internet use, and, importantly, it included a digital camera that could be available to the user at a moment's notice. Many manufacturers followed Apple's lead with similar competing devices, bringing the entire world online for the first time. A software innovation that appeared with these smartphones was the ability of an app running on a phone to easily share photos from the camera to privately owned cloud storage.

This was the launch of the era of big data for AI development, and large technology companies had an additional motive beyond creating a good user experience: By concentrating a staggering amount of user-generated content within their own platforms, companies could exploit the vast repositories of data they collected to develop other products. The prevailing wisdom in AI product development has been that if enough data is collected, any problem involving intelligence can be solved. This idea has been extended to everything from face recognition to self-driving cars, and is the dominant strategy for attempting to replicate the competencies of the human brain in a computer.

But using big data for AI development has been problematic in practice.

The datasets used for AI product development now contain millions of images, and nobody knows what exactly is in them. They are too large to examine manually in an exhaustive manner. When it comes to the use of these sets, the data can be labeled or unlabeled. If the data is labeled (as was the case with Tiny Images), those labels can be tags that were taken from the original source, new labels assigned by volunteers or people who have been paid to provide them, or labels automatically generated by an algorithm trained to label data.

Dataset labels can be naturally bad, reflecting the biases and outright malice of the humans who annotated them, or artificially bad, if the mistakes are made by algorithms. A dataset could even be poisoned by malicious actors intending to create problems for any algorithms that make use of it. Additionally, some datasets contain unlabeled data, but these datasets, used in conjunction with algorithms that are designed to explore a problem on their own, aren't the antidote for poorly labeled data. As is also the case for labeled datasets, the information contained within unlabeled datasets can be a mismatch with the real world, for instance when certain demographics are under- or over-represented in the data.
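
A minimal sketch of the kind of label audit the passage implies, scanning a dataset's category labels against a blocklist of known offensive terms, might look like this (all names are illustrative, not drawn from any real dataset):

```python
def audit_labels(labels, blocklist):
    """Return the category labels that exactly match a blocklist of offensive terms."""
    blocked = {term.lower() for term in blocklist}
    return sorted({label for label in labels if label.lower() in blocked})

# Toy example: "bad-term" stands in for an offensive category name.
categories = ["baseball player", "skyline", "bad-term", "beach"]
flagged = audit_labels(categories, ["bad-term"])
print(flagged)  # ['bad-term']
```

An audit like this only catches exact matches; misspellings, slang, and compound labels still require human review, which is part of why exhaustive cleanup of a dataset the size of Tiny Images is so hard.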

In 2016, Microsoft released a web app called CaptionBot that automatically added captions to images. The app was meant to be a successful demonstration of the company's computer vision API for third-party developers, but even under normal use, it could make some dubious mistakes. In one notable instance, it mislabeled a photo of the hip hop group Wu-Tang Clan as a group of baseball players. This type of mistake can occur when a bias exists in the dataset for a specific demographic. For example, Black athletes are often overrepresented in datasets assembled from public content on the internet. The prominent Labeled Faces in the Wild dataset for face recognition research contains this form of bias.
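
Demographic skew of this kind can be made concrete with a quick representation check. This is a hypothetical sketch on invented group labels, not how any real dataset was actually audited:

```python
from collections import Counter

def representation_report(demographics, tolerance=0.5):
    """For each group, report its share of the data and whether that share
    deviates from a uniform split by more than `tolerance` (relative)."""
    counts = Counter(demographics)
    total = len(demographics)
    uniform = 1 / len(counts)
    report = {}
    for group, n in counts.items():
        share = n / total
        skewed = abs(share - uniform) / uniform > tolerance
        report[group] = (share, skewed)
    return report

# Invented sample: group "A" is heavily overrepresented.
sample = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
print(representation_report(sample))
```

Real audits use domain-appropriate reference distributions rather than a uniform split, but the underlying idea, comparing observed shares against an expected baseline, is the same.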

More problematic examples of this phenomenon have surfaced. Joy Buolamwini, a scholar at the MIT Media Lab, and Timnit Gebru, a researcher at Google, have shown that commonly used datasets for face recognition algorithm development are overwhelmingly composed of lighter skinned faces, causing face recognition algorithms to have noticeable disparities in matching accuracy between darker and lighter skin tones. Buolamwini, working with Deborah Raji, a student in her laboratory, has gone on to demonstrate this same problem in Amazon's Rekognition face analysis platform. These studies have prompted IBM to announce it would exit the facial recognition and analysis market and Amazon and Microsoft to halt sales of facial recognition products to law enforcement agencies, which use the technology to identify suspects.
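
Disparities like the ones these studies measured come down to computing accuracy separately per demographic group and comparing. A toy sketch on invented records (not the studies' actual data or methodology):

```python
def accuracy_by_group(records):
    """records: list of (group, predicted, actual). Returns accuracy per group."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == actual)
    return {g: correct[g] / totals[g] for g in totals}

# Invented evaluation records showing a skewed outcome.
data = [("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 0, 1),
        ("darker", 1, 0), ("darker", 0, 1), ("darker", 1, 1)]
acc = accuracy_by_group(data)
disparity = max(acc.values()) - min(acc.values())
print(acc, round(disparity, 3))
```

A single aggregate accuracy number would hide exactly this gap, which is why per-group evaluation is the standard recommendation in fairness audits.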

The manifestation of bias often begins with the choice of an application, and further crops up in the design and implementation of an algorithm well before any training data is provided to it. For instance, while it's not possible to develop an algorithm to predict criminality based on a person's face, algorithms can seemingly produce accurate results on this task. This is because the datasets they are trained on and evaluated with have obvious biases, such as mugshot photos that contain easily identifiable technical artifacts specific to how booking photos are taken. If a development team believes the impossible to be possible, then the damage is already done before any tainted data makes its way into the system.

There are a lot of questions AI developers should consider before launching a project. Gebru and Emily Denton, also a researcher at Google, astutely point out in a recent tutorial they presented at the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition that interdisciplinary collaboration is a good path forward for the AI development cycle. This includes engagement with marginalized groups in a manner that fosters social justice, and dialogue with experts in fields outside of computer science. Are there unintended consequences of the work being undertaken? Will the proposed application cause harm? Does a non-data driven perspective reveal that what is being attempted is impossible? These are some of the questions that need to be asked by development teams working on AI technologies.

So, was it a good thing for the research team responsible for Tiny Images to issue a complete retraction of the dataset? This is a difficult question. At the time of this writing, the published paper on the dataset has collected over 1,700 citations according to Google Scholar, and the dataset it describes is likely still being used by those who have it in their possession. The retraction of Tiny Images creates a secondary problem for researchers still working with algorithms developed with Tiny Images, many of which are not flawed and are worthy of further study. The ability of researchers to replicate a study is a scientific necessity, and by removing the necessary data, this becomes impossible. And an outright ban on technologies like face recognition may not be a good idea. While AI algorithms can be anti-social in contexts like predictive policing, they can be socially acceptable in others, such as human-computer interaction.

Perhaps one way to address this is to issue a revision of a dataset that removes any problematic information, notes what has been removed from the previous version, and provides an explanation for why the revision was necessary. This may not completely remove the bias within a dataset, especially if it is very large, but it is a way to address specific instances of bias as they are discovered.
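
The revision scheme proposed here, removing problematic entries while recording what was removed and why, could be as simple as keeping a changelog alongside the data. A hypothetical sketch (dataset and category names are invented):

```python
from datetime import date

def revise_dataset(categories, to_remove, changelog, reason):
    """Return the cleaned category list and append a changelog entry
    noting what was removed and why."""
    removal_set = set(to_remove)
    removed = sorted(set(categories) & removal_set)
    kept = [c for c in categories if c not in removal_set]
    changelog.append({
        "date": str(date.today()),
        "removed": removed,
        "reason": reason,
    })
    return kept

log = []
cats = ["beach", "skyline", "offensive-term"]
cleaned = revise_dataset(cats, ["offensive-term"], log,
                         "Derogatory label identified in audit.")
print(cleaned)  # ['beach', 'skyline']
```

Publishing the changelog alongside the revised dataset preserves replicability: researchers can see exactly how the new version differs from the one cited in earlier work.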

Bias in AI is not an easy problem to eliminate, and developers need to react to it in a constructive way. With effective mitigation strategies, there is certainly still a role for datasets in this type of work.


Andrew Ng announces Deeplearning.ai, his new venture after … – TechCrunch

Andrew Ng, the former chief scientist of Baidu, announced his next venture, Deeplearning.ai, with only a logo, a domain name and a footnote pointing to an August launch date. In an interesting twist, the Deeplearning.ai domain name appears to be registered to Baidu's Sunnyvale AI research campus, the same office Ng would have worked out of as an employee.

It's unclear whether Ng began his work on Deeplearning.ai while still an employee at Baidu. According to data pulled from the Wayback Machine, the domain was parked at Instra and picked up sometime between 2015 and 2017.

Registering that domain to Baidu accidentally would be an amateur mistake, and registering it intentionally just leaves me with more unanswered questions. I'm left wondering about the relationship between Baidu and Deeplearning.ai and its connection to Andrew Ng's departure. Of course, it's also possible that there was some sort of error that caused an untimely mistake.

UPDATE: Baidu provided us the following response.

"Baidu has no association with this project but we wish Andrew the best in his work."

Ng left the company in late March of this year, promising to continue his work of bringing the benefits of AI to everyone. Baidu is known for having unique technical expertise in natural language processing, and it's recently been putting resources into self-driving cars and other specific deep learning applications.

It makes sense that Ng would take advantage of his name recognition to raise a large round to maximize his impact on the machine intelligence ecosystem. I can't see a general name like Deeplearning.ai being used to sell a self-driving car company or a verticalized enterprise tool. It's more likely that Ng is building an enabling technology that aims to become critical infrastructure to support the adoption of AI technologies.

While this could technically encompass specialized hardware chips for deep learning, I'm more inclined to bet that it is a software solution, given Ng's expertise. Google CEO Sundar Pichai made a splash back at I/O last month when he discussed AutoML, the company's research work to automate the design process of neural networks. If I were going to come up with a name for a company that would build on, and ultimately commercialize, this technology, it would be Deeplearning.ai.

"This is super speculative, but I think it might be an AI tool to help generate AI training data sets or something else that will accelerate the development of AI models and products," Malika Cantor, partner at AI investment firm Comet Labs, told me. "I'm very excited about having more tools and platforms to support the AI ecosystem."

Prior to his time at Baidu, Ng was instrumental in building out the Google Brain Team, one of the company's core AI research groups. Ng is a highly respected researcher and evangelist in the AI space with connections spanning industries and geographic borders. If Ng truly believes that AI is the new electricity, he will surely try to position Deeplearning.ai to take advantage of the windfall.

We've reached out to both Baidu and Andrew Ng and will update this post if we receive additional information.


Hired Partners with Fiddler to Pioneer Responsible AI for Hiring – PRNewswire

PALO ALTO, Calif., July 30, 2020 /PRNewswire/ --Today Hired announced a partnership with Fiddler to increase transparency and mitigate bias in its tech talent marketplace, which uses its proprietary AI Intelligent Job Matching technology to match candidates with relevant jobs at the world's most innovative companies. All of Hired's AI was built in-house, and the company is layering Fiddler's Explainable AI Platform on top of its proprietary models to generate deeper insights into how its algorithms make decisions.

"At Hired, our mission is to match people with a job they love, and doing that at scale requires advanced technology like AI. Fiddler helps enhance our understanding of the AI algorithms at the heart of this candidate matching process by comparing these insights and explanations with our internally developed solutions to empower our data science and curation teams," said Mehul Patel, CEO of Hired. "With Explainability, our team can build trust with our algorithms and further our commitment to building a truly equitable future. I am excited about Fiddler making AI decisions more explainable across the industry."

One of the biggest promises of AI is its ability to make objective, data-driven decisions, but without visibility into how these algorithms work, businesses run the risk of using sub-par models that could actually increase bias. Fiddler's Explainable AI Platform monitors, explains, and analyzes model performance to provide businesses with rich explanations of exactly why a model generated a particular output, and immediately flag any potential bias.
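
Fiddler's actual platform is proprietary, but the kind of explanation it produces can be illustrated with a naive leave-one-feature-out attribution on a toy scoring model. All feature names, weights, and values below are invented for illustration:

```python
def attributions(score, features):
    """Leave-one-out attribution: how much each feature contributes to the
    model's score, measured by zeroing it out and re-scoring."""
    base = score(features)
    return {name: base - score({**features, name: 0.0}) for name in features}

# Toy linear "model": a weighted sum of candidate features.
weights = {"years_experience": 0.5, "skill_match": 1.0, "typo_count": -0.2}
def score(f):
    return sum(weights[k] * v for k, v in f.items())

candidate = {"years_experience": 4.0, "skill_match": 0.8, "typo_count": 2.0}
print(attributions(score, candidate))
```

Production explainability systems use more principled methods (for example, Shapley-value-based attributions) that handle feature interactions, but the output has the same shape: a per-feature contribution to a single prediction.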

Fiddler's technology integrates seamlessly with Hired's user interfaces to provide real-time reporting on how their AI models are working. It's specifically used for the following purposes:

"We're excited to partner with Hired to continue our mission of building trustworthy and reliable AI, which was a key driver in this partnership," said Krishna Gade, founder & CEO of Fiddler. "The ability for explainable AI and ML monitoring, and specifically Fiddler's solution to add value to Hired's team to build trust is something that our team is very passionate about. We will continue our efforts to ensure the Hired team is successful in their mission to responsibly and reliably match people to jobs they love."

As AI is more broadly adopted within the hiring and human resources industry, it's critical for every organization to focus on building fair, transparent, and accountable AI systems. Hired is leading the way for the rest of the industry.

About Fiddler
Founded in October 2018, Fiddler's mission is to enable businesses of all sizes to unlock the AI BlackBox and deliver trustworthy AI experiences to end-users. Fiddler's next-generation Explainable AI Platform enables data science, product, and business users to explain, monitor, and analyze their AI solutions, providing transparent and reliable experiences to customers. Fiddler works with pioneering Fortune 500 companies as well as emerging tech companies. For more information please visit www.fiddler.ai or follow us on Twitter @fiddlerlabs.

About Hired
Hired (hired.com) is a marketplace that matches tech talent with the world's most innovative companies. Hired combines intelligent job matching with unbiased career counseling to help people find a job they love. Through Hired, job candidates and companies have transparency into salary offers, competing opportunities and job details. This level of insight is unmatched, making the recruiting process quicker and more efficient than ever before. Hired was founded in 2012 and is headquartered in San Francisco, with offices in the United States, Canada, France, and the UK. The company is backed by Lumia Capital, Sierra Ventures and other leading investors.

Media Contact: [emailprotected]

SOURCE Fiddler Labs


Why Google, Ideo, And IBM Are Betting On AI To Make Us Better Storytellers – Fast Company

Sharing emotion-driven narratives that resonate with other people is something humans are quite good at. Weve been sitting around campfires telling stories for tens of thousands of years, and we still do it. One reason why is because it's an effective way to communicate: We remember stories.

But what makes for good storytelling? Mark Magellan, a writer and designer at Ideo U, puts it this way: "To tell a story that someone will remember, it helps to understand his or her needs. The art of storytelling requires creativity, critical-thinking skills, self-awareness, and empathy."

All those traits are fundamentally human, but as artificial intelligence (AI) becomes more commonplace, even experts whose jobs depend on them possessing those traitspeople like Magellanforesee it playing a bigger role in what they do.

Connecting with an audience has always been something of an art formit's part of the magic of a great storyteller. But AI is steadily converting it into a science. The AI-driven marketing platform Influential uses IBM's Watson to connect brands with audiences. It finds social media influencers who can help spread a brand's message to target demographics in a way that feels authentic and, well, human.

Ryan Detert, Influential's CEO and cofounder, says that the tool uses two of Watson's services, Personality Insights and AlchemyLanguage, to look at the content written by an influencer, analyze that text, and score it across 52 personality traits, like "adventurousness," "achievement striving," and "openness to change." To date, says Detert, Influential has gathered these insights on 10,000 social media influencers with over 4 billion followers altogether.

Once a brand comes to Influential with their marketing goals, the platform uses Watson to identify the traits most strongly expressed by that brand, then matches influencers whose personalities, social media posts, and followers best reflect it. If a brand narrative wants to project adventurousness, Influential will find influencers who score highly on that characteristic and whose followers respond well to it.
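
The matching step described above amounts to comparing a brand's trait profile with each influencer's. A hedged sketch using cosine similarity over invented trait scores (Watson's real services and Influential's actual matching logic are not shown here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length score vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented trait profiles on three of the 52 traits mentioned in the article.
brand = {"adventurousness": 0.9, "openness to change": 0.7,
         "achievement striving": 0.2}
influencers = {
    "alice": {"adventurousness": 0.8, "openness to change": 0.6,
              "achievement striving": 0.3},
    "bob": {"adventurousness": 0.1, "openness to change": 0.2,
            "achievement striving": 0.9},
}

traits = sorted(brand)
def vec(scores):
    return [scores[t] for t in traits]

ranked = sorted(influencers,
                key=lambda name: cosine(vec(brand), vec(influencers[name])),
                reverse=True)
print(ranked)  # ['alice', 'bob']
```

The influencer whose trait vector points in the same direction as the brand's ranks first, which matches the article's description: a brand projecting adventurousness gets paired with influencers who score highly on it.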

Influential worked with Kia on a 2016 Super Bowl ad featuring Christopher Walken, and Detert notes, "We saw a 30% higher level of engagement on FTC posts, which are branded posts [flagged] with [a hashtag like] #Ad or #Sponsored. The more the brand and influencers' voices are aligned," he says, "the greater the engagement, sentiment, ad recall, virality, and clicks." The influencers that the AI technology pinpointed, says Detert, "outperformed their regular organic content with these branded posts." In other words, the machine learned how to connect with the influencers' fans even better than the influencers themselves did.

Influential's Watson-powered AI tool figured out how to get this Kia ad to resonate with influencers' followers more powerfully than those influencers' own posts did.

Influential also uses Watson's AI to analyze social buzz and tell brands how they're being perceived. Sometimes, says Detert, that means telling brands, "You're not the brand you think you are," and going back to the drawing board to come up with a better story.

Somatic is a digital marketing company whose experiments with machine learning show the technology's potential in visually driven storytelling, too. One of its tools, called "Creative Storyteller," uses AI to scan photos and generate short text descriptions of what it sees, but not in generic prose.

The tool, says Somatic founder and CEO Jason Toy, can write about visual data in different styles or genres, even mimicking the prose styles of celebrities. As long as there's enough written content out there for Creative Storyteller to be trained on, Toy says it can do a pretty good impression.

Creative Storyteller has been used with major companies to turn an ordinary marketing campaign into an interactive one. In one case, says Toy, "We built an interactive ad where a user uploads a picture and a model talks to them in a style of someone else about that pic."

Such short-form stories work well, but longer text often fails because the AI lacks context, notes Toy. "These machines are able to learn the information you give them. It seems magical at first, but then cracks appear with longer text."

Google AI researcher Margaret Mitchell's work may eventually fill cracks like those. She hopes her research, which is geared toward "helping AI start to understand things about everyday human life," can start to push machines beyond just generating "literal content, like you get in image captioning," toward anticipating how those descriptions will make people feel.

Says Mitchell, "There is increasing interest in developing humanistic AI that can understand human behaviors and relations."

[Image: via Somatic]

Now for the inevitable question: Will this "humanistic AI" ever beat humans at their own game? Suzanne Gibbs Howard, a partner at Ideo and founder of Ideo U, believes collaboration between human storytellers and machines is more likely in the near term. Some of the questions she's considering include: "How might the world's storytellers leverage knowledge and insights via AI to make their stories even more powerful, faster? Might AI be a prototyping tool?"

Magellan, Gibbs Howard's colleague at Ideo U, believes the answer is yes; AI has already shown its ability to "explore unmet or latent needs" in an audience that a human storyteller might miss. That could prove helpful for planning and refining a story. "It's not hard to imagine AI crowdsourcing story plots from the internet and identifying people's needs from social media," he muses.

Jason Toy also sees collaboration with AI as the model to strive toward. "I see them as systems that work with humans. They'll always need the human as high-level architect. Storytellers need to think about how the story will be felt, told, and the medium."

"It's all about practicing empathy," stresses Magellan. And for all the strides in AI research that he's seen, empathy just doesn't appear to be a skill machines will pick up soon. "There's a level of emotional intelligence you must possess as a storyteller," he says. "Until robots gain that, we've got a leg up on them!"

In fact, storytelling may be one way to future-proof your job. Spend some more time around the campfire, but don't be afraid if a robot turns up to help.

Darren Menabney lives in Tokyo, where he leads global employee engagement at Ricoh, teaches MBA students at GLOBIS University, coaches online for Ideo U, and supports the Japanese startup scene. Follow him on Twitter at @darmenab.


Schools Are Installing Bathroom Surveillance Systems to Bust Vapers

AI Spies

The Food and Drug Administration has made a big push in recent months to crack down on the marketing of e-cigarettes to minors. It’s sent hundreds of warning letters to manufacturers, fined a handful, and even raided the headquarters of popular vape-maker Juul.

Now, schools across the U.S. and Canada are taking a different approach by using AI-powered surveillance devices to catch students vaping — where else? — in bathrooms.

Potty Mouth

According to a new story in IEEE Spectrum, more than 200 schools in the U.S. and Canada are currently using Fly Sense, an AI-powered vaping detection system. Since most underage school vaping takes place in bathrooms — where the schools aren’t legally allowed to install cameras — Fly Sense detects vaping using other types of sensors.

Fly Sense’s sensors detect the chemical signatures of vaping and send real-time text or email alerts to school officials. The school can program the vaping detection system to send these alerts to different staff members depending on the time of day.
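
Fly Sense's configuration interface is not public, but routing alerts to different staff by time of day can be sketched roughly like this (addresses, hours, and names are invented):

```python
from datetime import time

# Hypothetical schedule: (start, end, recipients) in local school time.
SCHEDULE = [
    (time(7, 0), time(15, 0), ["principal@example.edu"]),
    (time(15, 0), time(22, 0), ["coach@example.edu"]),
]
DEFAULT_RECIPIENTS = ["security@example.edu"]

def recipients_for(alert_time, schedule=SCHEDULE):
    """Pick the staff who should receive a detection alert at this time."""
    for start, end, staff in schedule:
        if start <= alert_time < end:
            return staff
    return DEFAULT_RECIPIENTS

print(recipients_for(time(10, 30)))  # ['principal@example.edu']
```

A lookup like this is all "program the system to send alerts to different staff members depending on the time of day" needs to mean; the real product presumably wraps something similar behind its admin interface.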

Flawed but Effective

Fly Sense maker Soter Technologies claims the system is between 70 and 80 percent accurate at detecting vaping; no word on how the people receiving the alerts feel about one in five of them being a false alarm.

According to Soter, even if it’s not 100 percent accurate, Fly Sense might scare students vape-free — locations with the system installed see an average decrease in vaping of 70 percent.

So, while installing any surveillance device in a bathroom isn’t exactly ideal, until we can find a way to keep e-cigarettes out of the hands of minors, Fly Sense can at least help us keep students from vaping when they’re supposed to be learning.

READ MORE: Schools Enlist AI to Detect Vaping and Bullies in Bathrooms [IEEE Spectrum]

More on vaping: The FDA Just Raided the Headquarters of E-Cigarette Maker Juul


Salesforce brings Einstein AI to Service Cloud – ZDNet

Einstein Case Management. Via Salesforce.

Salesforce is giving its Service Cloud customer service platform the Einstein AI treatment.

The CRM giant on Monday rolled out Service Cloud Einstein, a souped-up version of its customer service cloud designed to make work more intuitive for customer service agents and their managers. Einstein, of course, is Salesforce's artificial intelligence effort, released during the company's Dreamforce conference last year.

For the customer service agent, Service Cloud Einstein helps optimize how calls are routed via a new feature called Einstein Case Management. Using machine learning, cases are automatically escalated and classified as they come in, and relevant information for case resolution is automatically surfaced. It also ensures that high priority cases are pushed through more quickly and to the best equipped agent.
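
Salesforce has not published Einstein's internals, but the behavior described, high-priority cases jumping the queue while equal-priority cases stay in arrival order, can be sketched with a simple priority queue (case names and priority values are invented):

```python
import heapq

class CaseQueue:
    """Cases pop in priority order; equal priorities pop first-in, first-out."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving arrival order

    def add(self, priority, case):
        # Lower number = higher priority (0 is most urgent).
        heapq.heappush(self._heap, (priority, self._counter, case))
        self._counter += 1

    def next_case(self):
        return heapq.heappop(self._heap)[2]

q = CaseQueue()
q.add(2, "password reset")
q.add(0, "outage report")
q.add(1, "billing dispute")
print(q.next_case())  # 'outage report'
```

In the real product, a classifier would assign the priority as the case comes in; the queue then ensures the most urgent case reaches the best-equipped available agent first.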

The platform also works to automate much of the initial information gathering, mostly via chatbots, so agents are prepared with background information before they interact with the customer.

The second component of Service Cloud Einstein is for the customer service supervisor. Aptly named Einstein Supervisor, the platform uses a mix of AI-powered analytics (from the Analytics Cloud's Service Wave analytics app), real-time insights and smart data discovery to give managers a better sense of agent availability, queues and wait times.

"Customers today expect and demand great service experiences," said Adam Blitzer, EVP and GM for Service and Sales Clouds at Salesforce. "Service Cloud Einstein empowers companies to transform any customer service interaction into a smart conversation that drives brand loyalty and creates customers for life."

Einstein Supervisor is generally available today, but Einstein Case Management won't be available until a pilot period later this year.


Virtual Santa Uses Real AI to Talk to Children Around the World – GlobeNewswire

Consulting the Naughty or Nice List on AskSanta.com, the free, fully interactive virtual Santa powered by StoryFile AI. Available on any browser now.

Los Angeles, CA, Dec. 10, 2020 (GLOBE NEWSWIRE) -- The COVID-19 pandemic was going to prevent many children from speaking with Santa Claus this year. But AI startup StoryFile had other plans: Christmas would not be cancelled. Today StoryFile announced that AskSanta.com, the world's first artificially intelligent, virtual Santa Claus, has already been visited by children from over 170 countries. The site allows children of all ages to interact with Santa in real time, for free, and with no time limits. Heather Smith, Co-Founder & CEO of StoryFile, developed the idea to use her company's conversational video technology to usher in the holiday spirit. StoryFile created the AI-powered Ask Santa as a gift to children everywhere.

While there are many virtual Santa experiences this year, Ask Santa is the only one that is free, interactive, and answers questions in real time. While some custom video experiences can cost as much as $58 for 8 minutes, Ask Santa has no time limit, nor any limit on the number of sessions per child or family. The idea was to create an experience for people to speak with Santa from the comfort and safety of their homes, and have the opportunity to ask Santa (almost) anything! Ask Santa is even being visited by large groups using big-screen TVs or monitors. Ask Santa is a web app that works in any browser on smartphones, tablets, desktops, and more.

"By far the most frequent thing kids bring up with Ask Santa is their concern for his safety during the pandemic. We are seeing through the interactions just how concerned children are, and they are telling Santa they are hungry, or sharing that a relative passed away. It is heartbreaking at times, but we are comforted by the fact that Santa is offering an empathetic and comforting outlet," said StoryFile CEO Heather Smith. "Ask Santa answers these questions and shows the spirit of the holiday is about compassion and community more than ever, not just gifts. That said, many children still ask Christmas-specific questions, and 'Am I on the naughty or nice list?' is still a top question!"

What makes Ask Santa stand apart from the entire Santa business sector is that when children ask Santa a question, they receive direct and immediate answers. StoryFile's technology means that children can have the face-to-face conversations that are very much needed in today's socially and physically distanced world. In addition, children are writing letters to Santa and receiving responses from Santa and his team within 24 hours.
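StoryFile has not published how Ask Santa maps a child's question to one of Santa's prerecorded answers, but the "direct and immediate answers" behavior described above resembles basic intent matching. The sketch below is purely hypothetical: it scores each prerecorded clip's keyword set against the question and returns the best match. All clip names and keywords are invented.

```python
# Hypothetical library of prerecorded answer clips, keyed by topic keywords.
CLIPS = {
    "naughty_or_nice.mp4": {"naughty", "nice", "list"},
    "covid_safety.mp4": {"covid", "virus", "mask", "safe", "sick"},
    "reindeer.mp4": {"reindeer", "rudolph", "sleigh"},
}

def pick_clip(question: str) -> str:
    """Return the clip whose keyword set best overlaps the question."""
    words = set(question.lower().replace("?", "").split())
    # Score each clip by how many of its keywords appear in the question.
    scored = {clip: len(keywords & words) for clip, keywords in CLIPS.items()}
    best = max(scored, key=scored.get)
    # Fall back to a generic clip when nothing matches.
    return best if scored[best] > 0 else "ho_ho_ho_fallback.mp4"

print(pick_clip("Am I on the naughty or nice list?"))    # naughty_or_nice.mp4
print(pick_clip("Santa, are you safe from the virus?"))  # covid_safety.mp4
```

A production system would pair speech-to-text with a trained intent classifier rather than keyword overlap, but the fallback-clip pattern (a generic answer when nothing matches) is a common safeguard in conversational interfaces.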


"2020 has been really hard for so many people, but Christmas will still be awesome if we remember what it's all about," our Santa, also known as Santa Cortney, told ABC News. "That's why I'm so happy that AskSanta.com is free and that children can talk to me for as long as they want. Ho Ho Ho!"

In a partnership with the American Heart Association (AHA) and the Red Sled Foundation, Ask Santa is encouraging anyone who is able to make a holiday donation to help kids in need. 100% of donations made on the Ask Santa website will go directly to the AHA, inspiring kids and adults alike to be heart heroes this holiday season.

For more information or media/PR requests for AI Santa and StoryFile's Heather Smith, please contact alana@storyfile.com.

About StoryFile: StoryFile is rapidly evolving the media and storytelling landscape through its proprietary, innovative technology, creating new ways to interact and communicate with each other and the world through user-led, voice-activated interactive conversational video technology. StoryFile's mobile-native, cloud-based, AI-driven interactive conversational video platform creates an individualized and curated historical and living narrative. StoryFile leverages AI to enhance a natural conversation with video captured on any device. StoryFile offers in-studio and remote legacy capture experiences, a beta version of the StoryFile App on the app store, and [soon] a StoryFile Life version for users to create their personal full-length StoryFiles using their own devices. To learn more, visit http://www.StoryFile.com.

About the American Heart Association: Children are the future in a world where cardiovascular diseases claim more lives each year than all forms of cancer combined. The American Heart Association is a relentless force dedicated to saving and improving every child's life. Its vision is that all children, regardless of gender, race, location or economic status, should be able to grow to their full potential. The American Heart Association is laser-focused on enabling our children to build a world free from heart disease and stroke, working to improve the environments where kids live, learn and play, arming them with information, advocating for healthy environments, and encouraging healthy habits. To learn more, visit http://www.heart.org.

About Red Sled Santa Foundation: For many years, the Red Sled Santa Foundation has made the holidays special for children. The Foundation provides programs, services, fun and educational holiday gifts, and essential needs for low income, special needs, medically challenged and terminally ill children so they experience a memorable and meaningful loving holiday. The Foundation accomplishes this mission through fundraising efforts, gifts from generous donors, and support from local merchants for our annual Fill the Sleigh Toy Drive. To learn more, visit http://www.redsledfoundation.org.


Zencity raises $13.5 million to help cities aggregate community feedback with AI and big data – VentureBeat

Zencity, a platform that meshes AI with big data to give municipalities insights and aggregated feedback from local communities, has raised $13.5 million from a slew of notable backers, including lead investor TLV Partners, Microsoft's VC arm M12, and Salesforce Ventures. Founded in 2015, Israel-based Zencity had previously raised around $8 million, including a $6 million tranche nearly two years ago. With its latest cash injection, the company will build out new strategic partnerships and expand its market presence.

Gathering data through traditional means, such as surveys or town hall meetings, can be slow and fails to capture evolving sentiment. Zencity enables local governments and city planners to extract meaningful data from a range of unstructured sources, including social networks, news websites, and even telephone hotlines, to figure out what topics and concerns are on local residents' minds, all in real time.

Zencity uses AI to sort and classify data from across channels to identify key topics and trends, from opinions on proposed traffic measures to complaints about sidewalk maintenance or pretty much anything else that impacts a community.
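Zencity does not disclose its classification pipeline, but the topic-tagging step described above can be illustrated with a toy classifier: tag each resident post by keyword match, then tally topic volume, the kind of aggregate a civic dashboard might surface. All topic names, keywords, and posts below are hypothetical.

```python
import re
from collections import Counter

# Hypothetical mapping from civic topics to trigger keywords.
TOPIC_KEYWORDS = {
    "traffic": {"traffic", "congestion", "intersection", "speeding"},
    "sidewalks": {"sidewalk", "pavement", "crosswalk", "pothole"},
    "parks": {"park", "playground", "trail", "drone", "drones"},
}

def tag_topic(post: str) -> str:
    """Assign a post to the first topic whose keywords it mentions."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return topic
    return "other"

posts = [
    "The sidewalk on Main St is crumbling",
    "Why are drones flying over the park?",
    "Speeding has gotten worse near the school",
]
# Tally how much of the conversation each topic accounts for.
print(Counter(tag_topic(p) for p in posts))
```

A real system would use trained text classifiers and sentiment models across many channels and languages, but the shape of the output, per-topic volume and trend over time, is the same.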

Above: Zencity platform in use

Zencity said it has seen an increase in demand during the pandemic, with 90% of its clients engaging on a weekly basis, and even on weekends.

"Since COVID-19, not only have we seen an increase in usage but in demand as well," cofounder and CEO Eyal Feder-Levy told VentureBeat. "Zencity has signed over 40 new local governments, reaffirming our role in supporting local governments' crisis management and response efforts."

Among these new partnerships are government agencies in Austin, Texas; Long Beach, California; and Oak Ridge, Tennessee. A number of municipalities also launched COVID-19 programs using the Zencity platform, including the city of Meriden in Connecticut, which used Zencity data to optimize communications around social distancing in local parks. Officials discovered negative sentiment around the use of drones to monitor crowds in parks and noticed that communications from the mayor's official channels got the most engagement from residents.

Elsewhere, government officials in Fontana, California, used Zencity to assess locals' opinions on lockdown restrictions and regulations.

"Before the COVID-19 pandemic hit, providing real-time resident feedback for local governments was core to Zencity's AI-based solution," Feder-Levy continued. "And now, as local governments continue to battle the pandemic and undertake the task of economic recovery, Zencity's platform has proven pivotal in their crisis response and management efforts."


Artificial intelligence reduces the user experience, and that’s a good thing – ZDNet

When it comes to designing user experiences with our systems, the less, the better. We're overwhelmed, to put it mildly, with demands and stimuli. There are millions of apps, applications, and websites begging for our attention, and once we have a particular one open, we are still bombarded by links and choices. Every day, every hour, every minute, it's a firehose.

AI winnows a firehose of choices down to a gently flowing fountain

Artificial intelligence is offering relief on this front. User experience driven by AI may help winnow the firehose of choices and information down to a gently flowing fountain, and application and systems designers are sitting up and taking notice.

That's the word from Joël van Bodegraven, product designer at Adyen, who, along with other UX design experts, authored a series of ebooks that delve into how AI will impact UX design and how to design meaningful experiences in an era of AI-driven products and services. "Surrounded by misconceptions and questions regarding its purpose and power, apart from its known ethical and philosophical challenges, AI can be the catalyst for great user experiences," he observes.

In the first work of the series, Bodegraven, along with Chris Duffey, head of AI strategy and innovation at Adobe, introduces how AI affects design processes and the importance of data in delivering meaningful user experiences. For example, AI can "function as an assistant," helping with research, collecting data or more creative tasks. AI also serves as a curator, absorbing data "to determine the best personal experience per individual." AI can help design systems, as it is adept at "uncovering patterns and creating new ones. More and more companies are trusting AI to take care of their design systems to keep them more consistent for users."

With this in mind, the authors make the following recommendations for making the most of AI in designing and delivering a superior UX:

Design for minimal input, maximum outcome. "We get bombarded with notifications, stimuli, and expectations which we all need to manage somehow," Bodegraven and Duffey state. "AI can solve this problem by doing the legwork for us. Think of delimited tasks which can be easily outsourced. Challenge yourself to solve significant user problems with minimal input expected from them."

Design for trust. "It is important that we design for trust by being transparent about what we know about the user and how we're going to use it. If possible, users should be in control and able to modify their data if needed."

Humanize experiences. "Looking at recent findings from Google, which studied how people interacted with Google Home, one thing stood out: users were interacting with it as if it were human. Users said, for example, 'thanks' or 'sorry' after a voice command. People can relate more to devices if they have a character."

Design for less choice. That's right, reduce user choices. "The current high-performing and overly noisy world leaves very little room for users to be in the moment," Bodegraven and Duffey state. "Design for less choice by removing unnecessary decisions. This creates headspace for users and can even result in the appearance of things we hadn't thought of."

The quality of UX will make or break the success of an application or system, regardless of how many advanced features and functions are built within. Simplicity is the path to success when it comes to application design, and AI can bring about that simplicity.


Patra and expert.ai Announce Strategic Partnership to Improve Policy Review and Bring Advanced AI-based Natural Language Understanding Solutions to…

Patra and expert.ai are bringing a proven leader in AI to the insurance industry to solve real-world challenges.

Ensuring accurate language understanding at speed and scale, expert.ai enables global organizations to leverage its mature and proven AI-based natural language (NL) platform to automate the reading, understanding, and extraction of meaningful data from structured and unstructured text, augmenting and expanding insights for every process that involves language. By integrating expert.ai's cutting-edge AI capabilities, Patra improves quality, reduces friction, and drives out inefficiencies in the process of manually reviewing and cross-validating dozens to hundreds of pages of text for any given policy. These capabilities facilitate a deeper understanding of data, enabling insights previously out of reach due to the vast and complex nature of language semantics.

In working together, both companies are satisfying the insurance industry's growing demand to leverage advanced natural language and ML capabilities to address challenges in policy checking and risk exposure. With close to 80% of the information within the insurance industry being unstructured data, intelligent automation based on human-like understanding is a critical factor for competitive advantage, as it increases capacity while reducing inefficiencies and high-risk vulnerabilities. By applying the power of artificial intelligence to policy checking, Patra is providing agencies, wholesalers, MGAs, and carriers a better understanding of their book of business and helping them understand pricing behaviors and coverage dynamics by risk appetite. These capabilities will unleash a new generation of opportunities, including proactive notifications versus reactive discoveries.
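Neither company publishes its policy-checking internals, but the manual workflow being automated, extracting fields from policy text and cross-validating them against the values the agency expects, can be sketched in a few lines. The field names, regex patterns, and sample text below are all hypothetical.

```python
import re

# Hypothetical field patterns a policy checker might look for in policy text.
FIELD_PATTERNS = {
    "policy_number": r"Policy\s+No\.?\s*:?\s*([A-Z0-9-]+)",
    "limit": r"Limit\s+of\s+Liability\s*:?\s*\$([\d,]+)",
}

def check_policy(text: str, expected: dict) -> list:
    """Return (field, expected, found) tuples for every discrepancy."""
    issues = []
    for field, pattern in FIELD_PATTERNS.items():
        m = re.search(pattern, text)
        found = m.group(1) if m else None
        if found != expected.get(field):
            issues.append((field, expected.get(field), found))
    return issues

policy_text = "Policy No: ABC-12345 ... Limit of Liability: $1,000,000"
print(check_policy(policy_text, {"policy_number": "ABC-12345",
                                 "limit": "2,000,000"}))
# Flags the limit mismatch; the policy number checks out.
```

An NL platform like expert.ai's replaces the brittle regexes with semantic extraction that tolerates varied wording and layouts, but the cross-validation step, compare what the document says against what it should say, is the same.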

"With expert.ai, Patra is unlocking the ability for clients to be alerted of policy inaccuracies, reduce E&O exposures, drive cost savings, create additional value for our services, and push the limits of today's technology," said John Simpson, CEO and Founder of Patra. "Policy checking has been one of the insurance industry's biggest challenges for decades. Now, with expert.ai and the formation of the InsureConneXtions Alliance, Patra has brought to market a proven leader in artificial intelligence, in addition to partnering with innovators in the insurance industry to solve challenges that apply to every policy issued. Policy checking is just the first of many services we are addressing."

"We're honored to join forces with Patra, an innovation leader in insurance services, in delivering the next generation of AI technology for policy checking and review. And we see this as just the first step in working together to power language understanding in any application or process across the insurance value chain," said Walt Mayo, CEO of expert.ai. "The combination of expert.ai's long history of industry-best AI natural language understanding and Patra's deep process expertise and customer focus creates an incredibly strong foundation for addressing real-world challenges in the insurance industry."

About Patra: Patra is a leading provider of technology-enabled services to the insurance industry. Patra's global team of experts allows brokers, MGAs, wholesalers, and carriers to capture the Patra Advantage: profitable growth and organizational value. Patra powers insurance processes by optimizing the application of people and technology, supporting insurance organizations as they sell, deliver, and manage policies and customers. Patra is also a founding member of the InsurConneXtions Alliance, representing leaders across insurance technology, brokerage, wholesale, and specialty insurance, together representing over $50 billion in insurance premiums.

For more information, visit patracorp.com or follow us @Patracorp on Twitter and LinkedIn.

About expert.ai: Expert.ai is the premier artificial intelligence platform for language understanding. Its unique hybrid approach to NL combines symbolic, human-like comprehension and machine learning to transform language-intensive processes into practical knowledge, providing the insight required to improve decision making throughout organizations. By offering a full range of on-premise, private, and public cloud offerings, expert.ai augments business operations, accelerates and scales data science capabilities, and simplifies AI adoption across a vast range of industries, including Insurance, Banking & Finance, Publishing & Media, Defense & Intelligence, Life Science & Pharma, Oil, Gas & Energy, and more. The expert.ai brand is owned by Expert System (EXSY:MIL), which has cemented itself at the forefront of natural language solutions and serves global businesses such as AXA XL, Zurich Insurance Group, Generali, Bloomberg INDG, BNP Paribas, Rabobank, Dow Jones, Gannett, and EBSCO.

For more information, visit www.expert.ai and follow us on Twitter and LinkedIn.

SOURCE Patra Corporation; expert.ai

http://www.patracorp.com


Lockheed testing artificial intelligence to fight wildfires – FOX 31 Denver

JEFFERSON COUNTY, Colo. (KDVR) -- A global defense company shared details with FOX31 about cutting-edge technology, used on battlefields around the world, that could help fight wildfires.

"Recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action," said Dan Lordan, senior manager for AI integration at the Lockheed Martin Artificial Intelligence Center.

All those words describe artificial intelligence. Lockheed Martin, whose space division is based out of Jefferson County, wants to use AI to help gather critical details during a wildland fire.

Lordan says it starts with mapping out a wildfire. It can take hours to determine the size, shape, location and areas emitting the most heat.

"With AI, the promise is we can cut that down to minutes," said Lordan.

Lockheed has teamed up with tech company NVIDIA to help create maps and models. Together they use variables like wind, humidity, vegetation and topography to not only determine what the fire is doing but also what it will do next.

Currently, Lordan said, predicting a fire's rate of spread and direction can take up to a day.

"The promise is you can break that down to hours," said Lordan.
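Lockheed and NVIDIA have not published their model, so the following is only a toy illustration of how the variables the article names (wind, humidity, vegetation, topography) might feed a rate-of-spread estimate. The coefficients are invented, loosely in the spirit of simplified Rothermel-style spread models.

```python
def spread_rate(base_rate: float, wind_kph: float, humidity: float,
                fuel_load: float, slope_deg: float) -> float:
    """Toy fire rate-of-spread estimate in m/min (made-up coefficients)."""
    wind_factor = 1.0 + 0.05 * wind_kph              # wind pushes the fire
    moisture_factor = max(0.1, 1.0 - humidity)       # humid air slows it
    slope_factor = 1.0 + 0.02 * max(0.0, slope_deg)  # fire runs uphill
    return base_rate * fuel_load * wind_factor * moisture_factor * slope_factor

# Calm, humid flat ground vs. a windy, dry, fuel-heavy slope.
print(spread_rate(2.0, wind_kph=5, humidity=0.6, fuel_load=1.0, slope_deg=0))
print(spread_rate(2.0, wind_kph=40, humidity=0.1, fuel_load=1.5, slope_deg=20))
```

The AI systems described run physics-informed simulations over live sensor data rather than a closed-form product like this; the point of the sketch is only which inputs drive the prediction.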

This means command teams can get critical information and recommendations sooner, cutting response times and supporting better decisions about fire suppression, from digging trenches to performing back burns to mounting aerial suppression activity.

Similar time savings would apply to updating the data and maps of the areas most prone to fires, cutting the refresh cycle from years down to days.

The application of this technology to wildfires is already taking place.

"Currently, we are flight testing prototype software with the Colorado Division of Fire Prevention and Control," said Lordan. "We are very excited about the progress we are making there."

Lockheed is also working to build a wildfire research lab where private and government groups can collaborate on bringing new technologies to help prevent and fight wildfires.
