Google crams machine learning into smartwatches in AI push – CIO

Google is bringing artificial intelligence to a whole new set of devices, including Android Wear 2.0 smartwatches and the Raspberry Pi board, later this year.

Notably, these devices don't require powerful CPUs and GPUs to carry out machine learning tasks.

Google researchers are instead trying to lighten the hardware load needed to carry out basic AI tasks, as exhibited by last week's release of the Android Wear 2.0 operating system for wearables.


Google has added some basic AI features to smartwatches with Android Wear 2.0, and those features can work within the limited memory and CPU constraints of wearables.

Android Wear 2.0 has a "smart reply" feature, which provides basic responses to conversations. It works much like how predictive dictionaries work, but it can auto-reply to messages based on the context of the conversation.

Google uses a new way to analyze data on the fly without bogging down a smartwatch. In conventional machine-learning models, a lot of data needs to be classified and labeled to provide accurate answers. Instead, Android Wear 2.0 uses a "semi-supervised" learning technique to provide approximate answers.

"We're quite surprised and excited about how well it works even on Android wearable devices with very limited computation and memory resources," Sujith Ravi, staff research scientist at Google, said in a blog post.

For example, the slimmed-down machine-learning model can classify a few words -- based on sentiment and other clues -- and compose an answer. The model uses a streaming algorithm to process data, and it provides trained responses that also factor in previous interactions, word relationships, and vector analysis.

The process is faster because the data is analyzed and compared based on bit arrays, or in the form of 1s and 0s. That helps analyze data on the fly, which tremendously reduces the memory footprint. It doesn't go through the conventional process of referring to rich vocabulary models, which require a lot of hardware. The AI feature is not intended for sophisticated answers or analysis of a large set of complex words.
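The bit-array idea can be sketched with a toy hashing scheme: each word sets a bit in a fixed-size array, and candidate replies are ranked by bit overlap instead of by lookups in a large vocabulary model. This is a rough, invented illustration of the general approach, not Google's actual model; every function name and canned reply below is hypothetical.

```python
import hashlib

def text_to_bits(text, n_bits=256):
    """Hash each word of a message into a fixed-size bit array."""
    bits = [0] * n_bits
    for word in text.lower().split():
        word = word.strip("?!.,")
        # Derive a stable bit index per word from a hash digest.
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        bits[h % n_bits] = 1
    return bits

def similarity(a, b):
    """Jaccard overlap between two bit arrays (1.0 = identical word sets)."""
    union = sum(x | y for x, y in zip(a, b))
    return sum(x & y for x, y in zip(a, b)) / union if union else 0.0

# Hypothetical canned replies, keyed by a prototype incoming message.
replies = {
    "dinner tonight?": "Sounds good!",
    "running late": "No problem, see you soon.",
}
incoming = text_to_bits("are we doing dinner tonight")
best_key = max(replies, key=lambda k: similarity(incoming, text_to_bits(k)))
reply = replies[best_key]
```

Comparing small bit arrays like this is why the memory footprint stays tiny: no word embeddings or vocabulary tables ever need to be loaded.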

The feature can be used with third-party message apps, the researchers noted. It is loosely based on the same smart-reply technology in Google's messaging Allo app, which is built from the company's Expander set of semi-supervised learning tools.

The Android Wear team originally reached out to Google's researchers and expressed an interest in implementing the "smart reply" technology directly in smart devices, Ravi said.

AI is becoming pervasive in smartphones, PCs, and electronics like Amazon's Echo Dot, but it largely relies on machine learning taking place in the cloud. Machine-learning models in the cloud are trained -- a process called training -- to recognize images or speech. Conventional machine learning relies on algorithms, superfast hardware, and a huge amount of data to produce more accurate answers.

Google's technology differs from Qualcomm's rough implementation of machine learning in mobile devices, which hooks up algorithms with digital signal processors (DSPs) for image recognition or natural language processing. Qualcomm has tuned the DSPs in its upcoming Snapdragon 835 to process speech or images at higher speeds, so AI tasks are carried out faster.

Google has an ambitious plan to apply machine learning through its entire business. The Google Assistant -- which is also in Android Wear 2.0 -- is a visible AI across smartphones, TVs, and other consumer devices. The search company has TensorFlow, an open-source machine-learning framework, and has its own inferencing chip called Tensor Processing Unit.


Researchers have figured out how to fake news video with AI – Quartz

If you thought the rampant spread of text-based fake news was as bad as it could get, think again. Generating fake news videos that are indistinguishable from real ones is growing easier by the day.

A team of computer scientists at the University of Washington has used artificial intelligence to render visually convincing videos of Barack Obama saying things he's said before, but in a totally new context.

In a paper published this month, the researchers explained their methodology: Using a neural network trained on 17 hours of footage of the former US president's weekly addresses, they were able to generate mouth shapes from arbitrary audio clips of Obama's voice. The shapes were then textured to photorealistic quality and overlaid onto Obama's face in a different target video. Finally, the researchers retimed the target video to move Obama's body naturally to the rhythm of the new audio track.
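As a deliberately tiny stand-in for the audio-to-mouth-shape step, one can imagine fitting a mapping from a per-frame audio feature (here, loudness) to a mouth-openness value. The paper's actual system uses a recurrent neural network trained on those 17 hours of footage; the linear fit and every number below are synthetic, purely to make the idea concrete.

```python
# Toy stand-in for learning an audio-to-mouth-shape mapping:
# ordinary least squares for y = a*x + b on synthetic frame data.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

loudness = [0.1, 0.4, 0.7, 1.0]      # per-frame audio energy (synthetic)
openness = [0.05, 0.35, 0.65, 0.95]  # observed mouth openness (synthetic)
a, b = fit_linear(loudness, openness)
# Predict mouth openness for new audio frames.
predicted = [a * x + b for x in loudness]
```

The real pipeline then textures the predicted shapes and composites them into the target video; none of that is sketched here.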

This isn't the first study to demonstrate the modification of a talking head in a video. As Quartz's Dave Gershgorn previously reported, in June of last year, Stanford researchers published a similar methodology for altering a person's pre-recorded facial expressions in real time to mimic the expressions of another person making faces into a webcam. The new study, however, adds the ability to synthesize video directly from audio, effectively generating a higher dimension from a lower one.

In their paper, the researchers pointed to several practical applications of being able to generate high-quality video from audio, including helping hearing-impaired people lip-read audio during a phone call or creating realistic digital characters in the film and gaming industries. But the more disturbing consequence of such a technology is its potential to proliferate video-based fake news. Though the researchers used only real audio for the study, they were able to skip and reorder Obama's sentences seamlessly and even use audio from an Obama impersonator to achieve near-perfect results. The rapid advancement of voice-synthesis software also provides easy, off-the-shelf solutions for compelling, falsified audio.

There is some good news. Right now, the effectiveness of this video synthesis technique is limited by the amount and quality of footage available for a given person. Currently, the paper noted, the AI algorithms require at least several hours of footage and cannot handle certain edge cases, like facial profiles. The researchers chose Obama as their first case study because his weekly addresses provide an abundance of publicly available high-definition footage of him looking directly at the camera and adopting a consistent tone of voice. Synthesizing videos of other public figures who don't fulfill those conditions would be more challenging and require further technological advancement. This buys time for technologies that detect fake video to develop in parallel. As The Economist reported earlier this month, one solution could be to demand that recordings come with their metadata, which show when, where and how they were captured. Knowing such things makes it possible to eliminate a photograph as a fake on the basis, for example, of a mismatch with known local conditions at the time.

But as the doors to new forms of fake media continue to swing open, it will ultimately be left to consumers to tread carefully.


Exclusive: Eshoo On AI, Cybersecurity And Kicking America Off The China Drug Habit – Forbes

WASHINGTON, DC: Rep. Anna Eshoo (D-Calif.) chairs a House Energy and Commerce Subcommittee on Health hearing on protecting scientific integrity in response to the coronavirus outbreak, Thursday, May 14, 2020, in Washington, DC. (Photo by Greg Nash-Pool/Getty Images)

Congresswoman Anna Eshoo (CA-18) was first elected to Congress in 1992. She has served on the Energy and Commerce Committee since 1995 with a focus on health and technology. Last year she became the first woman ever to serve as Chair of the Health Subcommittee. She has authored 41 bills signed into law by four presidents. I was able to speak to Congresswoman Eshoo about her recent accomplishment and her agenda for AI, cybersecurity, and medical supply chains.

RL: Representative Eshoo, thank you for your bipartisan leadership to create a national strategy to end dependence on foreign manufacturing of lifesaving drugs. What is the status of this bill? What can your experience with pharmaceutical supply chain security teach us about other critical areas for supply chain security, such as information technology?

I've championed the need to address our nation's overreliance on the foreign production of critical drugs in Congress. Last September, I co-authored a Washington Post op-ed about our dangerous and troubling reliance on China for the manufacturing of drugs and their ingredients. Soon after, I held a hearing in my Health Subcommittee about the consequences and complications of our global drug supply chain. On May 1st I introduced bipartisan legislation, the Prescription for American Drug Independence Act, which requires the National Academies of Sciences, Engineering, and Medicine to convene a committee of experts to analyze the impact of U.S. dependence on the manufacturing of lifesaving drugs and make recommendations to Congress within 90 days to ensure the U.S. has a diverse drug supply chain to adequately protect our country from natural or hostile occurrences. The legislation was included in the House-passed Heroes Act, and I look forward to the Senate taking it up.

You are correct to note that an overreliance on China is not unique to the drug supply chain. For a decade I've raised concerns about how the vulnerabilities in our telecommunications infrastructure directly impact our national security. On November 2, 2010, I wrote to the FCC expressing grave concerns about Huawei and ZTE, which have opaque entanglements with the Chinese government. Sadly, in the intervening decade Huawei and ZTE equipment has proliferated across our country because it's cheap, due to the Chinese government subsidizing the companies. We've passed several important measures this Congress that I'm proud to support, including measures to create a mechanism for the federal government to exclude Huawei and ZTE equipment from our networks and to establish a program to "rip and replace" existing equipment made by the companies.

RL: It was great to see bipartisan and bicameral support for the National AI Research Resource Task Force Act under your leadership. These are much-needed policy measures. What else do we need to do on this front? What are your objectives in this area for the next Congress?

I'm very proud of the smart tech-related provisions in the House-passed H.R. 6395, the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, or the NDAA.

The Global AI Index indicates that the U.S. is ahead of China in the global AI race today, but experts predict China will overtake the U.S. in just five to 10 years. I'm pleased that the NDAA includes several important AI efforts, including my bipartisan and bicameral legislation, H.R. 7096, the National AI Research Resource Task Force Act, which establishes a task force to develop a roadmap for a national AI research cloud to make available the high-powered computing, large data sets, and educational resources necessary for AI research.

You ask what else is needed in addition to these provisions. In AI, the answer is federal R&D funding. Earlier this year, I wrote to the House Appropriations Committee urging them to allocate robust funding for nondefense AI R&D, and seventeen of my House colleagues joined my letter. This funding is an important investment in our country's future and must be a priority.

On cybersecurity, I'm pleased the NDAA included a number of recommendations from the Cyberspace Solarium Commission, which Congress established in last year's NDAA. Cybersecurity must be a top priority for every company and for government. It is a domain that works best when companies, researchers, and government work hand-in-hand. Unfortunately, cybersecurity efforts operate in silos across the private sector and within government. We need coordination. It's for this reason I cosponsored legislation to establish a centralized cybersecurity coordinator, the National Cyber Director, in the White House.

A gap I see is the cybersecurity of what I call small organizations: small businesses, nonprofits, and local governments that are too small to ever employ a cybersecurity professional and may never have the budget to pay for security services. While 50-page technical and legalistic government documents are critical for cybersecurity teams within large organizations, they are too dense for small business owners, executive directors of nonprofits, and city managers of small municipalities. I'm currently drafting legislation to address this issue that should be ready to introduce shortly.

I was also pleased the House adopted an amendment I cosponsored that is based on the CHIPS for America Act, which will restore American leadership in semiconductor manufacturing. In the House, I represent much of Silicon Valley, a region that gets its name from the material used to make semiconductors. While the technology sector has evolved to include much more than semiconductor manufacturing, it remains the foundation of one of the most vibrant parts of our economy. Our military's dependence on semiconductor manufacturing is why it's a national security priority, and I'm hopeful that the CHIPS for America Act will be enacted into law as soon as possible.

RL: In my own research, I have uncovered that the California state government itself has set up purchasing agreements with Chinese government-owned firms like Lenovo, Lexmark, and others. As you well know, the Chinese government asserts its right to collect any data on any Chinese-made device anywhere for any reason. China has been building a database on Americans since 2015. Having Chinese-owned equipment in state government is a risk, particularly around elections. In any event, it appears that these contracts have been set up by procurement officers who are not aware of the security risks. I attribute this to the lack of communication between the federal government and the states themselves. How could Congress engage constructively with states to help them improve their privacy and security practices in this regard?

You raise a number of highly important points. When it comes to evolving technologies, thinking about privacy and security is critical at every step of policymaking and at every level of government. Laws and regulations need to require privacy and security. Vendor selection should always consider privacy and cybersecurity, especially when issues intersect with national security. And governmental oversight needs to review privacy and security issues.

The federal government must share threat and vulnerability information more reliably. We can't expect every procurement manager in every municipal government to be aware of the national security concerns related to routers, modems, printers, and myriad other internet-connected devices and electronics. National security is the domain of the federal government. In addition to protecting individual Americans, the federal government's responsibility includes protecting our governmental (federal, state, and local) and our economic interests.

RL: Thank you, Congresswoman Eshoo.


AR, VR, Autonomy, Automation, Healthcare: What's Hot In AI Right Now – Forbes


AI is in the social network you chat on, the engine you search with, the word processor you write with, and the camera you take pictures with. But what's growing fastest in artificial intelligence?

One clue is where the Fortune 50 are placing their AI bets.

And one big tell is what they need training data for.

"Training data is really the basis for AI," Wendy Gonzalez, president and CEO of Samasource, told me in a recent TechFirst podcast. "At the end of the day, machines need to learn how to speak, see, and hear. And they do so much like a human learns how to speak, see, and hear."

Samasource creates training data (the labeled, structured data that teaches a machine or a computer how to do these things) for a quarter of the Fortune 50, including top global tech giants like Google and Microsoft. Walmart and GE are customers, as is Nvidia, which makes AI chips that power much of the world's artificial intelligence. So are automotive giants like Volkswagen and Ford.

That training ranges from as simple as "this shape is a car" to "this is a Louis Vuitton Deauville Mini handbag." But it's vital for multiple fields.

So what are the hottest areas that Samasource is getting training data requests for?

"We see a lot of growth in AR/VR," Gonzalez says. "And this could really include everything, it's everything from faces, shirts, shoes, you name it, furniture ... we're also seeing a lot of really interesting growth ... in e-commerce. So a lot of things I would describe as visual search: how do you actually look up something and detect whether it's a plaid shirt, as an example."

Delivery robots also need to know what a sidewalk is, what people and pets look like, how to navigate getting down off a sidewalk and onto a road, and what grass, trees, and bushes look like. Autonomous vehicles need to not just know what a road looks like, and what white or yellow painted lines mean, but also how to recognize a parking space, an upside-down car that might have been involved in an accident ... and all of it in various moderate to extreme weather conditions.

And they have to be able to recognize those objects both with visible light and LIDAR or radar.

The challenge for most AI training data is edge cases, Gonzalez says.

"Imagine if you had a representation of hundreds of thousands of vehicles, but only like 10 motorcycles," she told me. "Then you've got immediately kind of an inherent bias, and so you have to really worry about not just getting the quality data, but do you have the right and most comprehensive representative data."
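The representation bias Gonzalez describes can be caught with a very simple audit of label counts before training ever begins. The sketch below is a minimal, invented check; the 5 percent threshold is an arbitrary illustrative choice, not an industry standard.

```python
from collections import Counter

def underrepresented(labels, min_share=0.05):
    """Flag classes whose share of a labeled dataset falls below min_share."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(cls for cls, n in counts.items() if n / total < min_share)

# Synthetic label inventory echoing the motorcycle example above.
labels = ["car"] * 930 + ["truck"] * 60 + ["motorcycle"] * 10
rare = underrepresented(labels)  # motorcycles are only 1% of the data
```

A flagged class signals that more examples should be collected or upweighted before the model is trained on the set.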

Interestingly, Samasource isn't just working on the AI projects that you'd think. Self-driving cars and delivery robots are fairly obvious applications, after all.

But Gonzalez says AI is getting pervasive across many more domains.

"We've worked on everything from sustainable fishing, to reducing elephant poaching, to financial services classification," Gonzalez says. "We definitely see a lot in healthcare. There's an incredible amount that can be done in healthcare and life sciences."



More funding for AI cybersecurity: Darktrace raises $75M at an $825M valuation – TechCrunch

With cybercrime projected to cause some $6 trillion in damages by 2021, and businesses likely to invest around $1 trillion over the next five years to try to mitigate that, we're seeing a rise of startups that are building innovative ways to combat malicious hackers.

In the latest development, Darktrace, a cybersecurity firm that uses machine learning to detect and stop attacks, has raised $75 million, giving the startup a post-money valuation of $825 million, on the back of a strong business: the company said it has a total contract value of $200 million and 3,000 global customers, and has grown 140 percent in the last year.

The funding will be used to expand the company's business operations into more markets. Notably, Darktrace also separately announced today that it is now in a strategic partnership with Hong Kong-based CITIC Telecom CPC, a telecoms firm serving China and other parts of Asia, to bring "next-generation cyber defense" to businesses across Asia Pacific.

We have confirmed that CITIC, which owns the strategic partner, is not investing as part of this partnership. "CITIC CPC is not an investor," a spokesperson for Darktrace confirmed. "It was a Darktrace customer and, impressed by the fundamental power of the AI technology, decided to enter into a strategic partnership to expand its reach." Other telcos that work with Darktrace include BT in the UK and Australia's Telstra.

This latest round, a Series D, was led by Insight Venture Partners, with existing investors Summit Partners, KKR and TenEleven Ventures also participating. Darktrace, which is also backed by Autonomy's Mike Lynch, was founded in the UK and is now co-based in Cambridge and San Francisco. This round of funding brings the total raised by Darktrace to just under $180 million.

IT security has been around for as long as we have had a concept of IT, but a wave of new threats, such as polymorphic malware that changes profile as it attacks, plus the ubiquity of networked and cloud-based services, has rendered many legacy antivirus and other systems obsolete, simply unable to cope with what's being thrown at organisations and the individuals who are a part of them.

Darktrace is part of the new guard of firms that are built around the concept of using artificial intelligence both to help security specialists identify and stop malicious attacks, as well as act on their own to automatically detect and stop the threats.

Other security startups built on using AI include Hexadite, acquired by Microsoft for around $100 million last month, which, like Darktrace, works in the area of remediation by both identifying and relaying information about attacks to specialists, as well as stopping some itself; Crowdstrike, which raised a large round of funding in May at a billion-dollar valuation; Cylance, also valued at more than $1 billion; Harvest AI, which Amazon quietly acquired last year; and Illumio, a provider of segmented security solutions that raised $125 million earlier this year.

Darktrace's system is based on an appliance it calls the Enterprise Immune System that, as we have noted before, sits on a company's network and listens to what's going on. The name is a reference to the human immune system, which (when healthy) develops immunity to viruses by being exposed to them in small doses. Darktrace's system is designed to identify malicious activity on a network and alert IT managers when there is suspicious behavior. It is also designed to take immediate action to stop, or at least slow down, an attack until more help is at hand.
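The "learn what normal looks like, flag deviations" idea behind this kind of system can be sketched with a toy z-score detector over a single metric. Darktrace's actual product models far richer behavior across many signals; the function, threshold, and traffic numbers here are all illustrative inventions.

```python
def is_anomalous(history, current, threshold=3.0):
    """Flag a reading that sits more than `threshold` standard
    deviations from the baseline learned from `history`."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n
    std = var ** 0.5
    return std > 0 and abs(current - mean) / std > threshold

# Baseline: a host's nightly outbound traffic in MB (synthetic).
baseline = [10, 12, 11, 9, 10, 11, 10, 12]
alert = is_anomalous(baseline, 120)  # sudden exfiltration-like spike
quiet = is_anomalous(baseline, 11)   # an ordinary night
```

The appeal of this family of techniques is that nothing about the attack needs to be known in advance; only the organisation's own normal behavior is modeled.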

That has proven to be an attractive idea to investors, as seen by the hundreds of millions that have been ploughed into this area already.

"Insight Venture Partners has a proven record of partnering with tech-focused firms, and its backing of Darktrace is another strong validation of the fundamental and differentiated technology that the Enterprise Immune System represents," said Nicole Eagan, CEO at Darktrace, in a statement. "It marks another critical milestone for the company as we experience unprecedented growth in the U.S. market and are rapidly expanding across Latin America and Asia Pacific in particular, as organizations are increasingly turning to our AI approach to enhance their resilience to cyber-attackers."

"In just four years, Darktrace has established itself as a world leader in AI-powered security," said Jeff Horing, Managing Director at Insight Venture Partners. "Insight is proud to partner with Darktrace to continue to drive its strong growth and superior product-market fit."

It's interesting to see Darktrace moving into China: the country has been identified numerous times as one of the main origination points of cyberattacks on Western firms, but what doesn't get reported much is that enterprises in China are also subject to the same problems.

CITIC Telecom CPC said that Asia Pacific businesses are battling fierce attacks on a daily basis.

"As we have seen from the headlines, humans are consistently outpaced by increasingly automated threats, and organizations increasingly recognize that traditional defenses focused on past threats only provide the most essential protection," said Daniel Kwong, Senior Vice President, Information Technology and Security Services at CITIC Telecom CPC. "Companies in Asia Pacific need a new approach to remain resilient in the face of brazen, never-seen-before advanced attacks."

With Darktrace's machine learning approach, having a presence in China and working with a network provider in the region could give the company new kinds of insights into the larger global threat, subsequently passing that benefit on to other Darktrace users globally.

Updated with more investment detail from Darktrace.


2020 predictions for AI in business – TechTalks


Artificial intelligence is already capable of some pretty wondrous things. From powering driverless vehicles and automated manufacturing systems to delivering web search results via human-like voice assistants, it grows more prominent every day.

While it is almost impossible to predict what AI will look like 10 or even five years from now, we can certainly speculate about the coming year. What can we expect to see in 2020 with AI and business applications?

As businesses look to capitalize on the vast benefits that AI has to offer, adoption will rise considerably. Not everyone will have direct access to the necessary technology and processing power, however. That's where AI as a service comes into play. The tech world has yet to settle on a standard acronym for it, though some have gone with AIaaS.

IBM's Watson, Azure AI, and Google Cloud AI are just a few examples of how the technology applies in service-like settings. The servers carry the bulk of the processing power, allowing clients to tap into a remote solution. Quantum computing will further boost the remote service model's rise to power.

Because a third party handles everything, businesses gain access to a low-cost AI solution with almost no risk and no buy-in.

Only a handful of markets aren't suffering from a talent shortage. To make matters worse, as much as 85 percent of employees are either uninterested or actively disengaged at work.

Machine learning and other types of AI can help change that. Tools like Vibe or Keen allow managers to see how their employees are feeling, good or bad. Communication analysis tools powered by AI can help home in on morale problems and even suggest ways to improve the situation.
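A bare-bones version of such communication analysis is a lexicon-based sentiment scan over messages. Real tools like Vibe or Keen use far more sophisticated models; the word lists, scoring rule, and messages below are invented purely for illustration.

```python
# Toy morale scan: count hits against small positive/negative word lists.
POSITIVE = {"great", "thanks", "excited", "love", "win"}
NEGATIVE = {"frustrated", "blocked", "tired", "unhappy", "quit"}

def morale_score(messages):
    """Return (positive hits - negative hits) / total words, in [-1, 1]."""
    pos = neg = total = 0
    for msg in messages:
        for word in msg.lower().split():
            total += 1
            word = word.strip(".,!?")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    return (pos - neg) / total if total else 0.0

score_up = morale_score(["Great sprint, thanks all!"])
score_down = morale_score(["I feel blocked and tired."])
```

Aggregated over a team and tracked over time, even a crude score like this can surface a morale trend a manager might otherwise miss, which is the premise such tools refine with real NLP.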

In the field of marketing, businesses and media managers are always looking for new ways to push the envelope. It's not just about getting eyes on marketing materials and content. It's about sharing a message that resonates with audiences' interests and lifestyles.

With the help of AI, marketers can spice up their social media and email campaigns just enough to captivate audiences. Analysts expect AI to make considerable waves in the social media market, growing to 28.3 percent by 2023.

Imagine AI-developed ads and commercials designed from the ground up to target a specific audience or demographic. Big data and analytics solutions provide the necessary information to build such campaigns, while the AI solutions drive them forward.

AI can also power sales forecasting, customer insights, digital advertising, and even customer service. One of the more prominent uses of AI right now is as a customer support solution, answering queries and helping solve customer experience issues. Over the coming year, that will evolve to include many more facets of marketing.

One of the most lucrative uses of AI is the option to predict future events through data analysis.

With enough information flowing in, people can use machine learning algorithms to accurately predict or even pinpoint future changes coming down the pipeline. This approach allows businesses to adequately prepare for market and demand changes, supply shortages, growing competition and much more.

As more and more data amasses, and subsequently flows through AI-powered solutions, they will become smarter and more reliable, leading to frighteningly accurate prediction models. Complexity and sophistication will balloon, too, making AI the go-to for businesses that want to stay competitive and remain afloat.
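One of the simplest members of that prediction toolbox is exponential smoothing, which turns a history of observations into a one-step-ahead forecast. The sketch below is a minimal illustration; the smoothing factor and demand figures are invented, not drawn from any cited analysis.

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecast by simple exponential smoothing:
    each new observation is blended with the running level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

monthly_demand = [100, 104, 108, 112]  # units sold per month (synthetic)
forecast = exp_smooth_forecast(monthly_demand)
```

Production forecasting systems layer trend, seasonality, and machine-learned features on top of ideas like this, but the core move (weighting recent data more heavily) is the same.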

In general, risk assessment is a broad field. Cybersecurity, third-party and vendor risks and investment risk are all part of the broader spectrum. All of them, however, share one thing in common: To be of any use, they must apply as continual and sustained processes. Mitigation is possible, but risk elimination is not.

AI solutions can take over for human eyes, not just providing 24/7 active monitoring, but also a system that learns and improves over time. Machine learning and deep neural network solutions offer some of the best and most versatile support.

Whether you're talking about customer- or employee-oriented personalization, manual processes complicate things too much to make it work. Injecting AI and automation solutions streamlines the entire system.

Netflix, for instance, uses machine learning to deliver targeted media and content suggestions to its users. Sesame Workshop developed an AI-powered vocabulary learning app. Initially, it observes a child's reading and vocabulary level and then delivers content and experiences to match their needs. It adjusts as they grow to stay in pace with their development.

Ultimately, personalization allows businesses to give customers exactly what they want, when they want it, without soliciting them all the time. The AI solutions monitor their usage, habits, and other behavioral data to discern what kinds of experiences, products, recommendations, and even marketing content match their lifestyle.

AI is moving the entire world of business forward, from marketing and customer service to innovative employee experiences.

It's sure to become even more prevalent as the technology evolves and becomes smarter, more accurate and more responsive.


Diffbot attempts to create smarter AI that can discern between fact and misinformation – The Financial Express

The better part of the early 2000s was spent creating artificial intelligence (AI) systems that could beat the Turing Test; the test is designed to determine whether an AI can trick a human into believing that it is a human. Now, companies are in a race to create a smarter AI that is more knowledgeable and trustworthy. A few months ago, OpenAI showcased GPT-3, a much smarter version of its AI model, and now, as per a report in MIT Technology Review, Diffbot is working on a system that can surpass the capabilities of GPT-3.

Diffbot is expected to be a smarter system, as it works by reading a page as a human does. Using this technology, it can create knowledge graphs, which will contain verifiable facts. One of the problems that constant testing of GPT-3 reveals is that you still need a human to cross-verify the information it produces. Diffbot is trying to make the process more autonomous. The use of knowledge graphs is not unique to Diffbot; Google also uses them. The success of Diffbot will depend on how accurately it can differentiate between information and misinformation.

Given that it will apply natural language processing and image recognition to billions of web pages, the knowledge graph it builds will be galactic in scale. It will join Google and Microsoft in crawling nearly the entire web. Its non-stop crawling of the web means it rebuilds its knowledge graph periodically, incorporating new information. If it can sift through data to verify information, it will indeed be a victory for internet companies looking to make their platforms more reliable.
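A knowledge graph at its core is a store of (subject, predicate, object) triples that can be queried by pattern. The miniature sketch below shows that structure, with example facts drawn from this article; Diffbot's and Google's graphs are, of course, vastly larger and machine-extracted.

```python
class KnowledgeGraph:
    """Minimal triple store: facts are (subject, predicate, object)."""
    def __init__(self):
        self.triples = set()

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        """Return triples matching the given fields; None is a wildcard."""
        return {t for t in self.triples
                if (subj is None or t[0] == subj)
                and (pred is None or t[1] == pred)
                and (obj is None or t[2] == obj)}

kg = KnowledgeGraph()
kg.add("Diffbot", "builds", "knowledge graph")
kg.add("GPT-3", "created_by", "OpenAI")
kg.add("Diffbot", "crawls", "the web")
facts_about_diffbot = kg.query(subj="Diffbot")
```

Because each fact is an explicit, queryable record rather than a pattern latent in model weights, a triple can be traced back to its source page and checked, which is the verifiability advantage over purely generative systems like GPT-3.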


See the rest here:

Diffbot attempts to create smarter AI that can discern between fact and misinformation - The Financial Express

Scientists are working with AI to measure chronic pain – Axios

Scientists are working on a way to use AI to create quantitative measurements for chronic pain.

Why it matters: Chronic pain is an epidemic in the U.S., but doctors can't measure discomfort as they can other vital signs. Building methods that can objectively measure pain can help ensure that the millions in need of palliative care aren't left to suffer.

What's happening: Late last month, scientists from IBM and Boston Scientific presented new research outlining a framework that uses machine learning and activity monitoring devices to capture and analyze biometric data that can correspond to the perception of pain.

What they're saying: "We want to use all the tools of predictive analytics and get to the point where we can predict where people's pain is going to be in the future, with enough time to give doctors the chance to intervene," says Jeff Rogers, senior manager for digital health at IBM Research.

Background: According to one estimate, more than 100 million Americans struggle with chronic pain, at an annual cost of as much as $635 billion in painkillers and lost productivity.

What's next: Rogers hopes the research can lead to medical devices that could predict chronic pain signals ahead of suffering and adjust their response accordingly.

Read more from the original source:

Scientists are working with AI to measure chronic pain - Axios

AI will be smarter than humans within 5 years, says Elon Musk – Express Computer

Tesla and SpaceX CEO Elon Musk has claimed that Artificial Intelligence will be vastly smarter than any human and would overtake us by 2025.

"We are headed toward a situation where AI is vastly smarter than humans. I think that time frame is less than five years from now. But that doesn't mean that everything goes to hell in five years. It just means that things get unstable or weird," Musk said in an interview with The New York Times over the weekend.

This is not the first time that Musk has shown concern related to AI. Back in 2016, Musk said that humans risk being treated like house pets by AI unless technology is developed that can connect brains to computers.

He even described AI as an existential threat to humanity.

"I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it's probably that," he said.

Despite these concerns, Musk helped found the artificial intelligence research lab OpenAI in 2015 with the goal of developing artificial general intelligence (AGI) that can learn and master several disciplines.

Recently, OpenAI released its first commercial product, a paid interface to a text-generation tool that it once called "too dangerous."

The tool has the potential to spare people from writing long texts: once an application is built on top of the programme, all a user needs to supply is a prompt.

OpenAI had earlier desisted from revealing more about the software, fearing that bad actors might misuse it to produce misleading articles, impersonate others, or even automate phishing content.

IANS

If you have an interesting article / experience / case study to share, please get in touch with us at [emailprotected]

View post:

AI will be smarter than humans within 5 years, says Elon Musk - Express Computer

A Clever AI-Powered Robot Learns to Get a Grip – WIRED

You remember claw machines, those confounded scams that bilked you out of your allowance. They were probably the closest thing you knew to an actual robot, really. They're not, of course, but they do have something very important in common with legit robots: They're terrible at handling objects with any measure of dexterity.

You probably take for granted how easy it is to, say, pick up a piece of paper off a table. Now imagine a robot pulling that off. The problem is that a lot of robots are taught to do individual tasks really well, with hyper-specialized algorithms. Obviously, you can't get a robot to handle everything it'll ever encounter by teaching it how to hold objects one by one.

Nope, that's an AI's job. Researchers at the University of California, Berkeley have loaded a robot with an artificial intelligence so it can figure out how to robustly grip objects it's never seen before, no hand-holding required. And that's a big deal if roboticists want to develop truly intelligent, dexterous robots that can master their environments.

The secret ingredient is a library of point clouds representing objects, data that the researchers fed into a neural network. "The way it's trained is on all those samples of point clouds, and then grasps," says roboticist Ken Goldberg, who developed the system along with postdoc Jeff Mahler. "So now when we show it a new point cloud, it says, 'This here is the grasp, and it's robust.'" Robust being the operative word. The team wasn't just looking for ways to grab objects, but the best ways.

Using this neural network and a Microsoft Kinect 3-D sensor, the robot can eyeball a new object and determine what would be a robust grasp. When it's confident it has worked that out, it can execute a good grip 99 times out of 100.

"It doesn't actually even know anything about what the object is," Goldberg says. "It just says it's a bunch of points in space, and here's where I would grasp that bunch of points. So it doesn't matter if it's a crumpled-up ball of tissue or almost anything."
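A minimal caricature of "a bunch of points in space, here's where I would grasp it": reduce each cloud to a crude shape descriptor, look up the most similar known cloud, and abstain when nothing is close enough. The descriptor, grasp names, and data here are invented assumptions; the real system uses a trained deep network over a large grasp library, not nearest-neighbour lookup:

```python
import numpy as np

def cloud_descriptor(points):
    """Crude shape summary of a point cloud: per-axis extent and
    spread about the centroid (a stand-in for learned features)."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    return np.concatenate([np.ptp(centred, axis=0), centred.std(axis=0)])

class GraspLibrary:
    """Return the grasp recorded for the most similar known cloud;
    abstain (None) when nothing in the library is close enough."""

    def __init__(self, max_distance=1.0):
        self.max_distance = max_distance
        self.descriptors, self.grasps = [], []

    def add(self, points, grasp):
        self.descriptors.append(cloud_descriptor(points))
        self.grasps.append(grasp)

    def best_grasp(self, points):
        d = cloud_descriptor(points)
        dists = [float(np.linalg.norm(d - ref)) for ref in self.descriptors]
        i = int(np.argmin(dists))
        return self.grasps[i] if dists[i] <= self.max_distance else None

rng = np.random.default_rng(0)
box = rng.uniform([-1, -1, -0.2], [1, 1, 0.2], size=(200, 3))  # flat box
ball = rng.normal(0.0, 0.3, size=(200, 3))                     # round blob

lib = GraspLibrary()
lib.add(box, "pinch-long-edge")
lib.add(ball, "three-finger-wrap")

# A new, never-seen box-like cloud maps to the grasp learned for boxes.
new_box = rng.uniform([-1, -1, -0.2], [1, 1, 0.2], size=(200, 3))
print(lib.best_grasp(new_box))
```

The abstention threshold mirrors the behaviour described later in the article: when the system isn't confident about a match, it's better to give up (or gather more information) than to attempt a bad grasp.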

Imagine a day when robots infiltrate our homes to help with chores, not just vacuuming like Roombas but doing dishes and picking up clutter so the elderly don't fall and find themselves unable to get up. The machines are going to come across a whole lot of novel objects, and you, dear human, can't be bothered to teach them how to grasp each one. By teaching themselves, they can better adapt to their surroundings. And precision is pivotal here: If a robot is doing dishes but can only execute robust grasps 50 times out of 100, you'll end up with one embarrassed robot and 50 busted dishes.

Here's where the future gets really interesting. Robots won't be working and learning in isolation; they'll be hooked up to the cloud so they can share information. So say one robot learns a better way to fold a shirt. It can then distribute that knowledge to other robots like it, and even to entirely different kinds of robots. In this way, connected machines will operate not only as a global workforce, but as a global mind.

At the moment, though, robots are still getting used to our world. And while Goldberg's new system is big news, it ain't perfect. Remember that the robot is 99 percent precise only when it's already confident it can manage a good grip. Sometimes it goes for the grasp even when it isn't confident, or it just gives up. "So one of the things we're doing now is modifying the system," Goldberg says, "and when it's not confident, rather than just giving up, it's going to push the object or poke it, move it some way, look again, and then grasp."

Fascinating stuff. Now if only someone could do something about those confounded claw machines.

Follow this link:

A Clever AI-Powered Robot Learns to Get a Grip - WIRED

4 AI & Cybersecurity Stocks to Gain From the New Normal – Yahoo Finance

With technological advancements driving businesses and customers online, AI and cybersecurity seem to have become a necessity. Additionally, the remote working trend due to the COVID-19 pandemic has boosted demand for cybersecurity.

AI in particular promises to change cybersecurity in the coming years by possibly enhancing both cyber defense and crime.

Is AI Influencing Cybersecurity?

These days, almost everything is just a click away, be it clothes, groceries, investing in stocks, or just catching up with friends. However, as easy as it sounds, this convenience carries its own share of security threats. That has driven businesses to deploy cyber AI to protect not only themselves but also their customers.

In fact, cybersecurity has been a concern since the birth of the Internet, and as technology evolves, the risks keep rising. In 2014, Yahoo! suffered a cyberattack that affected 500 million user accounts, with some 200 million usernames sold. It holds the record as the largest cyberattack on a single company to date.

Now, what role does AI play in cybersecurity? AI can identify and prevent cyberattacks. In fact, AI ensures minimal human involvement in cybersecurity affairs, reducing the scope for errors. AI-based websites can detect unauthorized entry, making it difficult for hackers to gain access.

However, just identifying a threat cannot save a website from attackers. AI can now help prevent a cyberattack by thinking the way a hacker does to break a security code and gain entry to the target website. Hence, before the hacker identifies a weak point of attack, AI can assess the situation and make the necessary changes.

Additionally, AI doesn't require breaks, because it is programmed to handle high-risk tasks around the clock. Furthermore, AI has undergone rapid change and progress in recent years. It can now not only offer technical assistance to cybersecurity experts but also track down security breaches and promptly notify designated personnel to take appropriate corrective measures.
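In practice, detecting "unauthorized entry" often starts with simple statistical anomaly detection over activity logs. The sketch below flags hours whose failed-login counts sit far above the historical mean; the data and threshold are invented for illustration, not any vendor's product:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Return indices whose value lies more than `threshold` sample
    standard deviations above the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and (c - mu) / sigma > threshold]

# Hourly failed-login counts; hour 7 shows a brute-force-like burst.
history = [4, 5, 3, 6, 4, 5, 4, 60, 5, 3]
print(flag_anomalies(history))  # [7]
```

Real systems learn far richer baselines (per user, per endpoint, per time of day), but the principle is the same: model normal behaviour, then alert on significant deviations without a human watching every log line.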

The growth of AI in cybersecurity has been commendable in recent years amid massive digitalization across the globe. Further, the coronavirus pandemic has forced employers to initiate work-from-home practices, driving adoption of IoT and increasing the number of connected devices.

Hence, rising instances of cyberattacks, growing concerns about data transfer, and the vulnerability of wireless networks to attacks on data integrity have expanded the scope of AI in cybersecurity. Annual spending on vulnerability management activities rose to $1.4 million in 2019, an average increase of $282,750 over 2018.

4 Stocks to Watch Out For

Per a Capgemini Research Institute study, one in five cybersecurity firms was employing AI before 2019, but adoption is likely to skyrocket by the end of 2020. In fact, 63% of the firms plan to deploy AI in their solutions.

Per MarketsandMarkets research, the global AI-in-cybersecurity market is projected to grow from $8.8 billion in 2019 to $38.2 billion by 2026, at a CAGR of 23.3%. In fact, research suggests the market will be valued at $12 billion by the end of 2020.

Given the immense scope, it is prudent to invest in AI-driven cybersecurity stocks. We have thus shortlisted four such stocks that are poised to grow.

CrowdStrike Holdings, Inc. CRWD provides cloud-native endpoint protection software. Its Falcon platform automatically investigates threats and takes the guesswork out of threat analysis. The company has an expected earnings growth rate of 94.4% for the current quarter, against the Zacks Internet - Software industry's projected decline of more than 100%.

The Zacks Consensus Estimate for its current-year earnings has climbed 50% over the past 60 days. CrowdStrike holds a Zacks Rank #2 (Buy). You can see the complete list of today's Zacks #1 (Strong Buy) Rank stocks here.

Fortinet, Inc. FTNT provides security solutions for all parts of IT infrastructure. The company's AI-based product, FortiWeb, is a web application firewall that uses machine learning and two layers of statistical probabilities to accurately detect threats.

The company has an expected earnings growth rate of 12.6% for the current year, against the Zacks Security industry's estimated decline of 15.3%. The Zacks Consensus Estimate for its current-year earnings has moved up 6.1% over the past 60 days. Fortinet holds a Zacks Rank #2.

Palo Alto Networks, Inc. PANW offers firewalls and cloud security for threat detection and endpoint protection. The company, which belongs to the Zacks Security industry, has an expected earnings growth rate of 15.2% for the next quarter. The Zacks Consensus Estimate for its current-year earnings has moved up 6.5% over the past 60 days. Palo Alto Networks carries a Zacks Rank #3 (Hold).

Check Point Software Technologies Ltd. CHKP provides computer and network security solutions to governments and enterprises. Its IntelliStore provides customizable threat intelligence, letting companies and organizations choose real-time threat intelligence sources that fit their needs.

The company has an expected earnings growth rate of 3.4% for the current year, against the Zacks Security industry's estimated decline of 15.3%. The Zacks Consensus Estimate for its next-year earnings has moved up 0.2% over the past 60 days. Check Point Software holds a Zacks Rank #3.

Here is the original post:

4 AI & Cybersecurity Stocks to Gain From the New Normal - Yahoo Finance

Wyze will try pay-what-you-want model for its AI-powered person detection – The Verge

Smart home company Wyze is experimenting with a rather unconventional method for providing customers with artificial intelligence-powered person detection for its smart security cameras: a pay-what-you-want business model. On Monday, the company said it would provide the feature for free as initially promised, after it had to disable it due to an abrupt end to its licensing deal with fellow Seattle-based company Xnor.ai, which was acquired by Apple in November of last year. But Wyze, taking a page out of the old Radiohead playbook, is hoping some customers might be willing to chip in to help it cover the costs.

AI-powered person detection uses machine learning models to train an algorithm to differentiate between the movement of an inanimate object or animal and that of a human being. It's now a staple in the smart security camera market, but it remains resource-intensive, and therefore expensive, to provide. It is more expensive than Wyze at first realized, in fact. That's a problem, because the company promised last year that when its own version of the feature was fully baked, it would be available for free, without requiring a monthly subscription as many of its competitors do for similar AI-powered functions.
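Conceptually, such a model reduces each motion event to a handful of features and learns a boundary between "person" and "not person." The sketch below trains a tiny logistic regression on made-up blob features; the feature choices and data are illustrative assumptions, not Wyze's actual pipeline:

```python
import numpy as np

# Each motion event: [blob height/width ratio, blob area as a fraction of frame].
# Label 1 = person, 0 = pet or inanimate object. Synthetic stand-in data.
X = np.array([[2.5, 0.10], [3.0, 0.15], [2.2, 0.08],   # upright, large: people
              [0.6, 0.03], [0.5, 0.02], [0.8, 0.04]])  # low, small: pets etc.
y = np.array([1, 1, 1, 0, 0, 0])

def train_logreg(X, y, lr=0.5, steps=2000):
    """Plain gradient descent on the log loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                        # dLoss/dlogit for log loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

w, b = train_logreg(X, y)

def is_person(ratio, area):
    return 1 / (1 + np.exp(-(np.array([ratio, area]) @ w + b))) > 0.5

print(is_person(2.8, 0.12))  # tall, large blob
print(is_person(0.6, 0.02))  # low, small blob
```

Production systems run deep convolutional networks on the video frames themselves, which is exactly why the cloud compute bill is so large: the per-event cost scales with model size and video volume.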

Yet now Wyze says it's going to try a pay-what-you-want model in the hope it can use customer generosity to offset the bill. Here's how the company broke the good (and bad) news in its email to the customers eligible for the promotion, which includes those who were enjoying person detection on Wyze cameras up until the Xnor.ai contract expired at the end of the year:

Over the last few months, we've had this service in beta testing, and we're happy to report that the testing is going really well. Person Detection is meeting our high expectations, and it's only going to keep improving over time. That's the good news.

The bad news is that it's very expensive to run, and the costs are recurring. We greatly under-forecasted the monthly cloud costs when we started working on this project last year (we've also since hired an actual finance guy). The reality is we will not be able to absorb these costs and stay in business.

Wyze says that while it would normally charge a subscription for a software service that involves recurring monthly costs, it told about 1.3 million of its customers that it would not charge for the feature when it arrived, even if that required the company to pay for pricey cloud-based processing. "We are going to keep our promise to you. But we are also going to ask for your help," Wyze writes.

It sounds risky, and Wyze admits that the plan may not pan out:

When Person Detection for 12-second event videos officially launches, you will be able to name your price. You can select $0 and use it for free. Or you can make monthly contributions in whatever amount you think it's worth to help us cover our recurring cloud costs. We will reevaluate this method in a few months. If the model works, we may consider rolling it out to all users, and maybe even extend it to other Wyze services.

If Wyze is able to recoup its costs by relying on the goodwill of customers, it could set the company up to try more experimental pricing models. After all, radical pricing and good-enough quality are how Wyze became a bit of a trailblazer in the smart home camera industry, and the approach could work again if customers feel the feature works so well that it warrants chipping in a few bucks a month.

View post:

Wyze will try pay-what-you-want model for its AI-powered person detection - The Verge

Discover the Power of VUNO’s AI Solutions at ECR 2020 – WFMZ Allentown

SEOUL, South Korea, July 14, 2020 /PRNewswire/ -- VUNO Inc., a South Korean artificial intelligence (AI) developer, announced that it will attend the European Congress of Radiology 2020 (ECR 2020), to be held from July 15 to July 19, 2020, to showcase its flagship AI radiology solutions, which recently received the CE mark. As part of its ambitious plan to go global, VUNO is set to seize this opportunity to further expand its network of sales prospects and business partners from around the world.

ECR, one of the leading events in the field of radiology, brings together industry experts, medical and healthcare professionals, modality manufacturers, and solutions developers. In light of the COVID-19 pandemic, the congress organizers decided to opt for an online-only event. With about 2,100 leading industry representatives on board, the ECR 2020 exhibition can be accessed from 08:00 a.m. CEST on July 15 until 11:55 p.m. CEST on July 21. Free registration and participation are available on the ECR 2020 Virtual Exhibition website (https://ecr2020.expo-ip.com/).

VUNO's exhibition at the event will include VUNO Med-LungCT AI, which detects, locates, and quantifies pulmonary nodules on CT images, and VUNO Med-Chest X-ray, which assists in readings of common thoracic abnormalities on chest radiographs. VUNO Med-DeepBrain is a diagnostic support tool for degenerative brain diseases that performs brain parcellation and quantification on brain MR images.

On top of the three solutions to be showcased at this event, VUNO has two other solutions that recently gained CE certification: VUNO Med-BoneAge and VUNO Med-Fundus AI. All five products can now be marketed in countries where the CE mark is accepted.

VUNO Med solutions are designed to be agnostic to devices and environments, offering seamless integration with any PACS and/or EMR system. They are offered via cloud servers, allowing users to analyze images anytime, anywhere with Internet access, and are also available through on-premises installations.

VUNO has the largest client base in Korea, with more than 120 medical institutions in the country alone. With these successes under its belt, rooted in effectiveness and safety proven through clinical trials and practice, the company is embarking on a new endeavor to demonstrate its technical prowess in overseas markets by signing partnerships with global healthcare companies like M3, a SONY subsidiary and Japan's largest medical data platform company.

For more detailed information on VUNO, visit https://www.vuno.co/.

Read the original:

Discover the Power of VUNO's AI Solutions at ECR 2020 - WFMZ Allentown

How to create real competitive advantage through business analytics and ethical AI – UNSW Newsroom

Some Australian organisations, which either feature large data science teams or are born digital with a data-driven culture, have advanced analytics capabilities (such as undertaking predictive and prescriptive analytics). For example, dedicated data science teams in marketing will build neural network models to predict customer attrition and the success of cross-selling and up-selling. However, most organisations that use data in their decision-making primarily rely on descriptive analytics.

While descriptive analytics may seem simplistic compared to creating predictions and running optimisation algorithms, it offers firms tremendous value by providing an accurate and up-to-date view of the business. For most organisations, analytics (which may even be labelled as "advanced analytics") takes the form of dashboards; and, for many organisational tasks, understanding trends and the current state of the business is sufficient to make evidence-based decisions.

Moreover, dashboards provide a foundation for creating a more data-driven culture and are the first step for many organisations in their analytics journey. That said, by strictly relying on dashboards, organisations are missing opportunities for leveraging predictive analytics to create competitive advantages.

Despite the importance of analytics, firms are at different stages of their analytics journey. Some firms utilise suites of complex artificial intelligence technologies, while many others still use Microsoft Excel as their main platform for data analysis. Unfortunately, the process of obtaining organisational value from analytics is far from trivial, and the organisational benefits provided by analytics are almost equalled by the challenges of successful implementation.

My colleague Prof. Richard Vidgen recently undertook a Delphi study to reach a consensus on the importance of key challenges in creating value from big data and analytics. Managers overwhelmingly agreed that there were two significant sets of issues. The first is the wealth of issues related to data: assuring data quality, timeliness and accuracy, linking data to key decisions, finding appropriate data to support decisions, and issues pertaining to databases.

The second set of challenges pertains to people: building data skills in the organisation, upskilling current employees to utilise analytics, massive skill shortages across both analytics and the IT infrastructure supporting analytics, and building a corporate data culture (which includes integrating data into the organisation's strategy). While issues related to data quality are improving, the skill gap and the lack of emphasis on data-driven decision making are systemic issues that will require radical changes in Australian education and Australian corporate culture.

Although there are many interesting trends in terms of the advancements of analytics like automated machine learning platforms (such as DataRobot and H2O), the greatest challenge with analytics and AI is going to be ensuring their ethical use.

Debate and governance around data usage are still in their infancy, and with time, analytics, black-box algorithms, and AI are going to come under increasing scrutiny. Australia's recent guidelines on ethical AI, where AI can be thought of as a predictive outcome created by an algorithm or model, set out goals including fairness, transparency, explainability, contestability, and accountability.

Achieving these goals with standard approaches to analytics is challenging enough for organisations, due to the black-box nature of analytics, algorithms, and AI. However, decisions driven by algorithms and analytics now increasingly interact with other organisations' AI, which makes it even more difficult to predict the fairness and explainability of outcomes. For example, AI employed by e-commerce retailers to set prices can participate in collusion, driving up prices by mirroring and learning from competing AIs' behaviours, without human interference, knowledge, or explicit programming for collusion.

As predictive analytics and AI will fundamentally transform almost all industries, it is critical that organisations adapt ethically. Organisations should implement frameworks to guide the use of AI and analytics, which explicitly incorporate fairness, transparency, explainability, contestability, and accountability.

A significant aspect of undertaking ethical AI and ethical analytics is selecting and optimising models and algorithms that incorporate ethical objectives. Analytics professionals typically select models based on their ability to make successful predictions on validation and hold-out data (that is, data that the model has never seen). However, rather than simply looking at prediction accuracy, analysts should also weigh transparency. For example, decision trees, which are collections of if-then rules that connect branches of a tree, have simple structures and interpretations. They are highly visual, which enables analysts to easily convey to stakeholders the underlying logic and features that drive predictions.

Moreover, business analytics professionals can carefully scrutinise the nodes of a decision tree to determine whether the decision rules built into the model are ethical. Thus, rather than using advanced neural networks, which often provide higher accuracy than models like decision trees but are effectively black boxes, analysts should consider sacrificing slightly on performance in favour of the transparency offered by simpler models.
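As a concrete illustration of that transparency, even the simplest tree, a single-rule "stump" learned from data, can be printed and audited in full. The loan data and feature names below are invented for the example:

```python
def fit_stump(rows, labels):
    """Learn a single if-then rule (feature, threshold) minimising
    training errors. The result is fully inspectable, unlike the
    weight matrices of a neural network."""
    best = None
    n_features = len(rows[0])
    for f in range(n_features):
        for t in sorted({r[f] for r in rows}):
            preds = [1 if r[f] >= t else 0 for r in rows]
            errors = sum(p != y for p, y in zip(preds, labels))
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best[1], best[2]

# Toy loan-approval data: [income_k, years_employed]; label 1 = approve.
rows = [[20, 1], [30, 2], [80, 5], [95, 10], [25, 8], [90, 1]]
labels = [0, 0, 1, 1, 0, 1]

feature, threshold = fit_stump(rows, labels)
names = ["income_k", "years_employed"]
# The entire model is one auditable rule:
print(f"if {names[feature]} >= {threshold}: approve else: reject")
```

An analyst (or regulator) can read that rule directly and ask whether thresholding on that feature is ethical, which is exactly the kind of node-by-node scrutiny a deep network does not permit.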

AGSM Scholar and Senior Lecturer Sam Kirshner, UNSW Business School.

Sam Kirshner is a Senior Lecturer in the School of Information Systems and Technology Management at UNSW Business School and is a member of the multidisciplinary Behavioural Insights for Business and Policy and Digital Enablement Research Networks.

For the full story, visit UNSW Business School's BusinessThink.

Here is the original post:

How to create real competitive advantage through business analytics and ethical AI - UNSW Newsroom

A Groundbreaking New AI Taught Itself to Speak in Just a Few Hours – Futurism

Giving Machines a Voice

Last year, Google successfully gave a machine the ability to generate human-like speech through its voice synthesis program WaveNet. Powered by Google's DeepMind artificial intelligence (AI) deep neural network, WaveNet produced synthetic speech from given texts. Now, Chinese internet search company Baidu has developed the most advanced speech synthesis program yet, called Deep Voice.

Developed in Baidu's AI research lab in Silicon Valley, Deep Voice represents a big breakthrough in speech synthesis technology by largely doing away with the behind-the-scenes fine-tuning typically necessary for such programs. As a result, Deep Voice can learn how to talk in a matter of hours, with virtually no help from humans.

Deep Voice uses a relatively simple method: through deep-learning techniques, it breaks texts down into phonemes, the smallest perceptually distinct units of sound. A speech synthesis network then reproduces these sounds. The need for fine-tuning is greatly reduced because every stage of the process relies on deep-learning techniques; all the researchers needed to do was train the algorithm.

"For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original," the Baidu researchers wrote in a study published online. "By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise."
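The two-stage pipeline, text to phonemes and phonemes to audio, can be caricatured in a few lines. The lexicon and "audio unit" placeholders below are invented; Deep Voice learns both stages with neural networks rather than lookup tables:

```python
# Toy grapheme-to-phoneme lexicon; a real system learns this mapping.
LEXICON = {
    "deep":  ["D", "IY", "P"],
    "voice": ["V", "OY", "S"],
}

def text_to_phonemes(text):
    """Stage 1 of the pipeline: break text into phonemes, the
    smallest perceptually distinct units of sound."""
    phonemes = []
    for word in text.lower().split():
        phonemes.extend(LEXICON.get(word, ["<unk>"]))
    return phonemes

def synthesize(phonemes):
    """Stage 2 stand-in: a real vocoder network emits audio samples
    per phoneme; here we just emit placeholder unit names."""
    return [f"audio[{p}]" for p in phonemes]

print(text_to_phonemes("Deep Voice"))
# ['D', 'IY', 'P', 'V', 'OY', 'S']
```

Replacing each hand-engineered stage with a trained network is the key move the researchers describe: the interface between stages (phonemes) stays the same, but no stage needs laborious feature engineering.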

Text-to-speech systems aren't entirely new. They're present in many of the world's modern gadgets and devices, from simpler ones like talking clocks and phone answering systems to more complex versions, like those in navigation apps. These, however, have been built from large databases of speech recordings, so the speech they generate doesn't flow as seamlessly as actual human speech.

Baidu's work on Deep Voice is a step toward achieving human-like speech synthesis in real time, without using pre-recorded responses. Deep Voice puts phonemes together in such a way that the result sounds like actual human speech. "We optimize inference to faster-than-real-time speeds, showing that these techniques can be applied to generate audio in real-time in a streaming fashion," the researchers said.

However, there are still certain variables that their new system cannot yet control: the stresses on phonemes and the duration and natural frequency of each sound. Once perfected, control of these variables would allow Baidu to change the voice of the speaker and, possibly, the emotions conveyed by a word.

At the very least, this would be computationally demanding, limiting just how much Deep Voice can be used for real-time speech synthesis in the real world, as the Baidu researchers explained.

In the future, better synthesized speech systems could be used to improve the assistant features found in smartphones and smart home devices. At the very least, it would make talking to your devices feel more real.

See the original post:

A Groundbreaking New AI Taught Itself to Speak in Just a Few Hours - Futurism

AI for Quitting Tobacco Initiative – World Health Organization

Meet Florence, WHO's first virtual health worker, designed to help the world's 1.3 billion tobacco users quit. She uses artificial intelligence to dispel myths around COVID-19 and smoking and helps people develop a personalized plan to quit tobacco.

Users can rely on Florence as a trusted source of information to achieve their quit goals. She can also refer tobacco users to national toll-free quit lines or apps that can help with the quit journey. You can interact with her via video or text.

Around 60% of tobacco users worldwide say they want to quit, but only 30% of them have access to the tools they need, such as counsellors, to take action.

Quitting smoking is more important than ever as evidence reveals that smokers are more vulnerable than non-smokers to developing a severe case of COVID-19.

Florence develops your quit plan using the 'STAR' method:

Set a quit date. It is important to set a quit date as soon as possible. Giving yourself a short period to quit will keep you focused and motivated to achieve your goal.

Tell your friends, family, and coworkers. It is important to share your goal to quit with those with whom you interact frequently.

Anticipate challenges to the upcoming quit attempt. The first few weeks are critical and the hardest, due to potential nicotine withdrawal symptoms as well as the obstacles presented by breaking any habit.

Remove tobacco products from your environment. It's best to rid yourself of temptation by making your house smoke-free, avoiding smoking areas, and asking your peers not to smoke around you.

Florence was created with technology developed by Soul Machines, a "digital people" company based in San Francisco and New Zealand, with support from Amazon Web Services and Google Cloud.

Read the original:

AI for Quitting Tobacco Initiative - World Health Organization

The AI Revolution Is Here – A Podcast And Interview With Nate Yohannes – Forbes

Nate's perspective on AI being built for everybody on the planet springs from one of the most unique foundations possible. He went from being the son of a revolutionary who stepped on a landmine in 1978 while fighting for democracy in Eritrea, on the Horn of Africa and one of the worst violators of human rights in the world, to becoming a lawyer appointed by President Obama to serve on behalf of the White House. His father losing much of his vision in the landmine attack was the catalyst for Nate's passion for AI computer vision: computers reasoning over people, places, and things.

Nate Yohannes AI, Microsoft

His role at Microsoft AI merges business and product strategy, and he works closely with Microsoft's AI Ethics & Society team. Nate believes that Microsoft's leadership decision to embed Ethics & Society into engineering teams is one of its most durable advantages: designing products with an ethics filter up front is unique and valuable for everyone. AI is the catalyst for the fourth industrial revolution, the most significant technological advancement thus far, and it has the potential to solve incredible challenges for all of humanity (climate, education, design, customer experiences, governance, food, and more). The biggest concern is the potential for unexpected and unintended consequences when building and deploying AI products, much like the unintended consequences we see today with social media companies and the misuse of privacy and data. AI will change the world; how it does so is our choice. It is critical to have appropriate representation at decision-making tables when building AI products, to mitigate potentially thousands or millions of unexpected consequences, from gender and race to financial, health, and even location-based data. Solving this challenge of unexpected consequences and incorporating inclusivity shouldn't hinder innovation or the ambition to maximize revenue; instead, it should enhance them, by creating products with the most extensive consumer base possible: everyone. It's an inspiring conversation about how to make the possible a reality with a different mindset.

This should be a guiding light for how all companies develop AI for the highest good, not just the greater good. If every company, and even the government, will be a digital platform by 2030 (or at least 75% of them will be), then AI will sit at the center of these organizations.

Nate Yohannes Speaking to AI.

Doing it the right way is part of the puzzle. Thinking about how it can be applied to the whole world is the tantalizing promise. Nate Yohannes is a Principal Program Manager for Mixed Reality & AI Engineering at Microsoft. He was recently Director of Corporate Business Development & Strategy for AI, IoT & Intelligent Cloud. He is on the Executive Advisory Board of the Nasdaq Entrepreneurial Center and an Expert for MIT's Inclusive Innovation Challenge. From 2014 to 2017, he served in President Obama's administration as Senior Advisor to the Head of Investments and Innovation at the US Small Business Administration, and on the White House Broadband Opportunity Council.

Nate was selected for the inaugural White House Economic Leadership class. He started his career as the Assistant General Counsel at the Money Management Institute. He is a graduate of the State University of New York College at Geneseo and of the University of Buffalo School of Law, where he was a Barbara and Thomas Wolfe Human Rights Fellow. He is admitted to practice law in New York State.

Read more from the original source:

The AI Revolution Is Here - A Podcast And Interview With Nate Yohannes - Forbes

AI, machine learning to impact workplace practices in India: Adobe – YourStory.com

Over 60 percent of marketers in India believe new-age technologies will impact their workplace practices and consider them the industry's next big disruptor, a new report said on Thursday.

According to a global report by software major Adobe that surveyed more than 5,000 creative and marketing professionals across the Asia Pacific (APAC) region, over 50 percent of respondents did not feel concerned about artificial intelligence (AI) or machine learning.

However, 27 percent in India said they were extremely concerned about the impact of these new technologies.

Creatives in India are concerned that new technologies will take over their jobs. But the report suggested that as they embrace AI and machine learning, creatives will be able to increase their value through design thinking.

"While AI and machine learning provide an opportunity to automate processes and save creative professionals from day-to-day production, they are not a replacement for the role of creativity," said Kulmeet Bawa, Managing Director, Adobe South Asia.

"It provides more leeway for creatives to spend their time focusing on what they do best: being creative, scaling their ideas, and allowing them time to focus on ideation and creativity," Bawa added.

A whopping 59 percent find it imperative to update their skills every six months to keep up with the industry developments.

The study also found that merging online and offline experiences was the biggest driver of change for the creative community, followed by the adoption of data and analytics, and the need for new skills.

It was revealed that customer experience is the number one investment by businesses across APAC.

Forty-two percent of creatives and marketers in India have recently implemented a customer experience programme, while 34 percent plan to develop one in the next year.

The study noted that social media and content were the key investment areas by APAC organisations, and had augmented the demand for content. However, they also presented challenges.

Budgets were identified as the biggest challenge, followed by conflicting views and internal processes. "Data and analytics become their primary tool to ensure that what they are creating is relevant and delivers an amazing experience for customers," Bawa said.

More here:

AI, machine learning to impact workplace practices in India: Adobe - YourStory.com

How to make AI less racist – Bulletin of the Atomic Scientists

CaptionBot, an AI program that applies captions to images, mistakenly described the members of the hip hop group the Wu-Tang Clan as a group of baseball players. This type of mistake often occurs because of the way certain demographics are represented in the data used to train an AI system. Credit: Walter Scheirer/CaptionBot.

In 2006, a trio of artificial intelligence (AI) researchers published a useful resource for their community: a massive dataset consisting of images representing over 50,000 different noun categories that had been automatically downloaded from the internet. The dataset, dubbed Tiny Images, was an early example of the big data strategy in AI research, whereby an algorithm is shown as many examples as possible of what it is trying to learn, in order for it to better understand a given task, like recognizing objects in a photo. By uploading small 32-by-32 pixel images, the Tiny Images researchers were relying on the ability of computers to exhibit the same remarkable tolerance as the human visual system and recognize even degraded images. They also, however, may have unintentionally succeeded in recreating another human characteristic in AI systems: racial and gender bias.

A pre-print academic paper revealed that Tiny Images included several categories of images labeled with racial and misogynistic slurs. For instance, a derogatory term for sex workers was one category; a slur for women, another. There was also a category of images labeled with a racist term for Black people. Any AI system trained on the dataset might recreate the biased categories as it sorted and identified objects. Tiny Images was such a large dataset, and its contents so small, that it would have been a herculean, perhaps impossible, task to perform the quality control needed to remove the offensive category labels and images. The researchers, Antonio Torralba, Rob Fergus, and Bill Freeman, made waves in the artificial intelligence world when they announced earlier this summer that they would be pulling the whole dataset from public use.

"Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community, precisely those that we are making efforts to include," Torralba, Fergus, and Freeman wrote. "It also contributes to harmful biases in AI systems trained on such data."

Developers used the Tiny Images data as raw material to train AI algorithms that computers use to solve visual recognition problems and recognize people, places, and things. Smartphone photo apps, for instance, use similar algorithms to automatically identify photos of skylines or beaches in your collection. The dataset was just one of many used in AI development.

While the pre-print's shocking findings about Tiny Images were troubling in their own right, the issues the paper highlighted are also indicative of a larger problem facing all AI researchers: the many ways in which bias can creep into the development cycle.

The big data era and bias in AI training data. The internet reached an inflection point in 2008 with the introduction of Apple's iPhone 3G. This was the first smartphone with viable processing power for general internet use, and, importantly, it included a digital camera that could be available to the user at a moment's notice. Many manufacturers followed Apple's lead with similar competing devices, bringing the entire world online for the first time. A software innovation that appeared with these smartphones was the ability of an app running on a phone to easily share photos from the camera to privately owned cloud storage.

This was the launch of the era of big data for AI development, and large technology companies had an additional motive beyond creating a good user experience: By concentrating a staggering amount of user-generated content within their own platforms, companies could exploit the vast repositories of data they collected to develop other products. The prevailing wisdom in AI product development has been that if enough data is collected, any problem involving intelligence can be solved. This idea has been extended to everything from face recognition to self-driving cars, and is the dominant strategy for attempting to replicate the competencies of the human brain in a computer.

But using big data for AI development has been problematic in practice.

The datasets used for AI product development now contain millions of images, and nobody knows exactly what is in them. They are too large to examine manually in an exhaustive manner. When it comes to the use of these sets, the data can be labeled or unlabeled. If the data is labeled (as was the case with Tiny Images), those labels can be tags taken from the original source, new labels assigned by volunteers or paid annotators, or labels generated automatically by an algorithm trained to label data.

Dataset labels can be naturally bad, reflecting the biases and outright malice of the humans who annotated them, or artificially bad, if the mistakes are made by algorithms. A dataset could even be poisoned by malicious actors intending to create problems for any algorithms that make use of it. Additionally, some datasets contain unlabeled data, but these datasets, used in conjunction with algorithms that are designed to explore a problem on their own, aren't the antidote for poorly labeled data. As is also the case for labeled datasets, the information contained within unlabeled datasets can be a mismatch with the real world, for instance when certain demographics are under- or over-represented in the data.
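The representation mismatch described above can be made concrete with a short sketch. The code below is purely illustrative (the groups, shares, and function name are invented, not drawn from any real dataset): it compares a dataset's observed group distribution against reference population shares and reports an over/under-representation ratio per group.

```python
from collections import Counter

def representation_report(labels, population_shares):
    """Compare a dataset's group distribution against reference
    population shares. A ratio above 1.0 means the group is
    over-represented; below 1.0, under-represented."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        group: (counts.get(group, 0) / total) / expected
        for group, expected in population_shares.items()
    }

# Toy dataset: group "a" is 80% of the data but 50% of the population.
labels = ["a"] * 80 + ["b"] * 20
print(representation_report(labels, {"a": 0.5, "b": 0.5}))
# -> {'a': 1.6, 'b': 0.4}
```

A check like this only catches imbalances for attributes that are recorded at all; datasets whose demographic metadata was never collected cannot be audited this way.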

In 2016, Microsoft released a web app called CaptionBot that automatically added captions to images. The app was meant to be a successful demonstration of the company's computer vision API for third-party developers, but even under normal use, it could make some dubious mistakes. In one notable instance, it mislabeled a photo of the hip hop group Wu-Tang Clan as a group of baseball players. This type of mistake can occur when a bias exists in the dataset for a specific demographic. For example, Black athletes are often overrepresented in datasets assembled from public content on the internet. The prominent Labeled Faces in the Wild dataset for face recognition research contains this form of bias.

More problematic examples of this phenomenon have surfaced. Joy Buolamwini, a scholar at the MIT Media Lab, and Timnit Gebru, a researcher at Google, have shown that commonly used datasets for face recognition algorithm development are overwhelmingly composed of lighter skinned faces, causing face recognition algorithms to have noticeable disparities in matching accuracy between darker and lighter skin tones. Buolamwini, working with Deborah Raji, a student in her laboratory, has gone on to demonstrate this same problem in Amazon's Rekognition face analysis platform. These studies have prompted IBM to announce it would exit the facial recognition and analysis market, and Amazon and Microsoft to halt sales of facial recognition products to law enforcement agencies, which use the technology to identify suspects.
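An audit of that kind boils down to disaggregating accuracy by group. The sketch below is a minimal, hypothetical illustration (the group names and numbers are invented, and this is not the researchers' actual code): it computes match accuracy separately per group, so a disparity shows up as a gap between the scores.

```python
def accuracy_by_group(predictions, truths, groups):
    # Tally correct matches and totals per demographic group.
    stats = {}
    for pred, truth, group in zip(predictions, truths, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (pred == truth), total + 1)
    # Per-group accuracy; a gap between groups indicates disparity.
    return {g: c / t for g, (c, t) in stats.items()}

# Toy data: the model matches every "light" sample but only one "dark" one.
preds  = [1, 0, 0, 1, 0, 0]
truths = [1, 0, 0, 0, 1, 0]
groups = ["light", "light", "light", "dark", "dark", "dark"]
print(accuracy_by_group(preds, truths, groups))
```

Aggregate accuracy over all six samples here would look reasonable, which is exactly why the per-group breakdown matters.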

The manifestation of bias often begins with the choice of an application, and further crops up in the design and implementation of an algorithm well before any training data is provided to it. For instance, while it's not possible to develop an algorithm to predict criminality based on a person's face, algorithms can seemingly produce accurate results on this task. This is because the datasets they are trained on and evaluated with have obvious biases, such as mugshot photos that contain easily identifiable technical artifacts specific to how booking photos are taken. If a development team believes the impossible to be possible, then the damage is already done before any tainted data makes its way into the system.

There are a lot of questions AI developers should consider before launching a project. Gebru and Emily Denton, also a researcher at Google, astutely point out in a recent tutorial they presented at the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition that interdisciplinary collaboration is a good path forward for the AI development cycle. This includes engagement with marginalized groups in a manner that fosters social justice, and dialogue with experts in fields outside of computer science. Are there unintended consequences of the work being undertaken? Will the proposed application cause harm? Does a non-data driven perspective reveal that what is being attempted is impossible? These are some of the questions that need to be asked by development teams working on AI technologies.

So, was it a good thing for the research team responsible for Tiny Images to issue a complete retraction of the dataset? This is a difficult question. At the time of this writing, the published paper on the dataset has collected over 1,700 citations according to Google Scholar, and the dataset it describes is likely still being used by those who have it in their possession. The retraction of Tiny Images creates a secondary problem for researchers still working with algorithms developed on it, many of which are not flawed and are worthy of further study. The ability of researchers to replicate a study is a scientific necessity, and by removing the necessary data, replication becomes impossible. And an outright ban on technologies like face recognition may not be a good idea: while AI algorithms can be anti-social in contexts like predictive policing, they can be socially acceptable in others, such as human-computer interaction.

Perhaps one way to address this is to issue a revision of a dataset that removes any problematic information, notes what has been removed from the previous version, and provides an explanation for why the revision was necessary. This may not completely remove the bias within a dataset, especially if it is very large, but it is a way to address specific instances of bias as they are discovered.
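Such a revision process could be sketched as follows, assuming a simple list-of-dicts format (no real dataset uses exactly this shape, and the field names are invented): flagged entries are filtered out, and a JSON note records what was removed and why, so downstream users can audit the change.

```python
import json

def revise_dataset(entries, flagged_labels, note_path):
    """Drop entries whose label has been flagged as problematic and
    write an audit note explaining the revision."""
    kept = [e for e in entries if e["label"] not in flagged_labels]
    removed = [e for e in entries if e["label"] in flagged_labels]
    # Record what was removed and why, for later users of the dataset.
    with open(note_path, "w") as f:
        json.dump({
            "removed_count": len(removed),
            "removed_labels": sorted({e["label"] for e in removed}),
            "reason": "labels flagged as offensive or biased",
        }, f, indent=2)
    return kept

# Toy example: two entries survive; one is removed and logged.
entries = [{"label": "dog"}, {"label": "slur"}, {"label": "cat"}]
clean = revise_dataset(entries, {"slur"}, "revision_note.json")
```

The audit note is the key design choice: it preserves a record of the change without republishing the offensive material itself.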

Bias in AI is not an easy problem to eliminate, and developers need to react to it in a constructive way. With effective mitigation strategies, there is certainly still a role for datasets in this type of work.

See the original post:

How to make AI less racist - Bulletin of the Atomic Scientists

Andrew Ng announces Deeplearning.ai, his new venture after … – TechCrunch

Andrew Ng, the former chief scientist of Baidu, announced his next venture, Deeplearning.ai, with only a logo, a domain name, and a footnote pointing to an August launch date. In an interesting twist, the Deeplearning.ai domain name appears to be registered to Baidu's Sunnyvale AI research campus, the same office Ng would have worked out of as an employee.

It's unclear whether Ng began his work on Deeplearning.ai while still an employee at Baidu. According to data pulled from the Wayback Machine, the domain was parked at Instra and picked up sometime between 2015 and 2017.

Registering that domain to Baidu accidentally would be an amateur mistake, and registering it intentionally just leaves me with more unanswered questions. I'm left wondering about the relationship between Baidu and Deeplearning.ai, and its connection to Andrew Ng's departure. Of course, it's also possible that there was some sort of error that caused an untimely mistake.

UPDATE: Baidu provided us the following response.

"Baidu has no association with this project but we wish Andrew the best in his work."

Ng left the company in late March of this year, promising to continue his work of bringing the benefits of AI to everyone. Baidu is known for its unique technical expertise in natural language processing, and it has recently been putting resources into self-driving cars and other specific deep learning applications.

It makes sense that Ng would take advantage of his name recognition to raise a large round to maximize his impact on the machine intelligence ecosystem. I can't see a general name like Deeplearning.ai being used to sell a self-driving car company or a verticalized enterprise tool. It's more likely that Ng is building an enabling technology that aims to become critical infrastructure to support the adoption of AI technologies.

While this could technically encompass specialized hardware chips for deep learning, I'm more inclined to bet that it is a software solution, given Ng's expertise. Google CEO Sundar Pichai made a splash back at I/O last month when he discussed AutoML, the company's research work to automate the design process of neural networks. If I were going to come up with a name for a company that would build on, and ultimately commercialize, this technology, it would be Deeplearning.ai.

"This is super speculative, but I think it might be an AI tool to help generate AI training data sets, or something else that will accelerate the development of AI models and products," Malika Cantor, partner at AI investment firm Comet Labs, told me. "I'm very excited about having more tools and platforms to support the AI ecosystem."

Prior to his time at Baidu, Ng was instrumental in building out the Google Brain team, one of the company's core AI research groups. Ng is a highly respected researcher and evangelist in the AI space, with connections spanning industries and geographic borders. If Ng truly believes that AI is the new electricity, he will surely try to position Deeplearning.ai to take advantage of the windfall.

We've reached out to both Baidu and Andrew Ng and will update this post if we receive additional information.

View original post here:

Andrew Ng announces Deeplearning.ai, his new venture after ... - TechCrunch