Can Emotional AI Supersede Humans or Is It Another Urban Hype? – Analytics Insight

Humans have long sought someone who understands them, be it a companion, a pet, or even a machine. Man is, after all, a social animal. Yet the same cannot be said of a machine engineered by man. Machines equipped with AI can already beat us at sifting through scores of data, analyzing it, and producing logical solutions, but emotional intelligence is where man and machine part ways. Before you get excited or disheartened: AI is now racing to integrate the emotional dimension of intelligence into its systems. The question is, is it worth the hype?

We know that facial expressions need not match what a person feels inside; the two can diverge by a wide margin. Assuming that AI can recognize these cues simply by observing them and comparing them against existing training data grossly simplifies a process that is subjective, intricate, and resistant to quantification. A smile, for example, is not the same as a smirk.

A smile can signal genuine happiness, enthusiasm, a brave face put on despite hurt, or an assassin plotting his next murder. The same ambiguity extends to gestures. Fingers repeatedly curling toward the palm means "come here" in some places and "go away" in others. This highlights another major issue: cross-cultural and ethnic differences. An expression can carry a different meaning in different countries. The thumbs-up gesture typically signifies "well done," "good luck," or agreement. In Germany and Hungary, an upright thumb means the number one, while in Japan it represents the number five. In parts of the Middle East, a thumbs-up is as offensive as a thumbs-down. The "horns" gesture can mean "rock and roll" at an Elvis-themed or heavy metal concert, but in Spain it signifies el cornudo, meaning your spouse is cheating on you. And pop-culture symbols like the Vulcan salute from Star Trek may mean nothing to people who have never seen the series.
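
To make the problem concrete, here is a toy sketch, illustrative only, using the gestures and readings from the examples above. It shows why a recognition system cannot map a gesture to a single meaning without knowing the cultural context:

```python
# Illustrative only: a toy lookup showing why a single recognized gesture
# cannot be mapped to one meaning without cultural context.
GESTURE_MEANINGS = {
    "thumbs_up": {
        "US": "well done / agreement",
        "Germany": "the number one",
        "Japan": "the number five",
        "Middle East": "highly offensive insult",
    },
    "horns": {
        "US": "rock and roll",
        "Spain": "el cornudo (your spouse is cheating on you)",
    },
}

def interpret(gesture: str, region: str) -> str:
    """Return a culture-dependent reading of a recognized gesture."""
    meanings = GESTURE_MEANINGS.get(gesture, {})
    return meanings.get(region, "unknown: no culturally grounded interpretation")

print(interpret("thumbs_up", "Japan"))  # the number five
print(interpret("thumbs_up", "US"))     # well done / agreement
```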

On top of that, researchers have found that emotion-recognition AI tends to assign negative emotions to people of color even when they are smiling. This racial bias can have severe consequences in the workplace, hampering career progression. In recruiting, an AI trained mostly on male behavior patterns and features is prone to faulty decisions and flawed role allocation for female candidates. Furthermore, people display different emotional ranges as they age: a child may be far more emotionally expressive than a reserved adult. That can be a major glitch for self-driving cars or systems that specifically monitor driver drowsiness, since elderly or ill people may simply appear tired compared with a standardized healthy adult.

If we are to upgrade AI with emotional intelligence that can stand up to scrutiny, we must consider how narrow the focus groups used to train these systems are. AI has to genuinely understand emotion rather than mimic it superficially, and it has to adapt to individual users just as humans do. We need to account for the heterogeneous ways humans express their emotions, and in the office, to understand how emotionally engaged employees actually are. Whether the problem is the subjective nature of emotions or inconsistencies in how they are expressed, it is clear that detecting emotions is no easy task. Some technologies track certain emotions better than others, so combining them could help mitigate bias. Only then can emotional AI withstand its harshest critics.

Adobe Extends AI Offering – Which-50 (blog)

As brands increasingly build internal statistical models and algorithms to tailor experiences, marketing cloud vendors are responding by building AI capabilities into their offerings.

The latest in line is Adobe, which last week announced it is opening up its data science and algorithmic optimisation capabilities in Adobe Target, the personalisation engine of Adobe Marketing Cloud.

The company said brands will be able to insert their own data models and algorithms into Adobe Target to deliver the best experience to customers.

It will also add new capabilities in Adobe Target by integrating with Adobe Sensei, its AI and machine learning framework, to further enhance customer recommendations and targeting precision, optimise experiences and automate the delivery of personalised offers.

Among the key aspects of the announcement:

"Consumer expectations have sky-rocketed to the point that hyper-personalisation is no longer optional for brands, it's imperative," said Aseem Chandra, VP, Adobe Experience Manager and Adobe Target.

Progressive brands are already developing proprietary algorithms. By integrating them into Adobe Target, brands can combine their own expertise with the power of Adobe's AI and machine learning tools to predict what customers want and deliver it before they ask, driving strong business value and brand loyalty.

The ability to bring proprietary algorithms into a leading marketing platform is a first for the industry. Brands benefit from the ability to blend their industry expertise with Adobe Sensei's powerful machine learning and AI capabilities in Adobe Target to deliver individualised customer experiences at massive scale.

For example, a financial services company that created its own algorithm to predict which customers are most likely to respond to an offer can insert that algorithm into Adobe Target to test live traffic against the model to deliver the best possible offer to each customer.
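
Adobe has not published the integration interface here, so the following is only a rough sketch of what "testing live traffic against a brand's own propensity model" might look like; every name and number in it is hypothetical, not Adobe's API:

```python
import random

# Hypothetical sketch: route most live traffic through a brand-supplied
# propensity model and serve the offer it scores highest, keeping a small
# control group on the default offer so results can be compared.

def propensity(customer: dict, offer: str) -> float:
    """Stand-in for a brand's proprietary model: P(customer responds to offer)."""
    base = {"cashback": 0.12, "travel_points": 0.08, "fee_waiver": 0.05}[offer]
    return base + (0.05 if customer["segment"] == "frequent" else 0.0)

def choose_offer(customer: dict, offers: list[str], test_fraction: float = 0.9) -> str:
    if random.random() > test_fraction:
        return offers[0]  # control group receives the default offer
    return max(offers, key=lambda o: propensity(customer, o))

print(choose_offer({"segment": "frequent"},
                   ["cashback", "travel_points", "fee_waiver"]))
```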

Adobe Target, part of Adobe Marketing Cloud, has leveraged AI and machine learning algorithms for over a decade and is used by major brands worldwide like AT&T, Lenovo, Marriott and Sprint. Highly personalised experiences are leveraged across online channels, including web, mobile, email and more.

With Adobe Experience Manager and Adobe Campaign, marketers can seamlessly manage and deliver personalised content. Integration with Adobe Analytics Cloud and Adobe Advertising Cloud ensures that every interaction with customers is hyper-personalised.

Adobe was recently named the only leader in The Forrester Wave: Digital Intelligence Platforms, Q2 2017 report, and received the highest scores possible in nine criteria, including behavioral targeting and online testing.

Robots and AI are going to make social inequality even worse, says new report – The Verge

Most economists agree that advances in robotics and AI over the next few decades are likely to lead to significant job losses. But what's less often considered is how these changes could also impact social mobility. A new report from UK charity Sutton Trust explains the danger, noting that unless governments take action, the next wave of automation will dramatically increase inequality within societies, further entrenching the divide between rich and poor.

There are a number of reasons for this, say the report's authors, including the ability of richer individuals to re-train for new jobs; the rising importance of soft skills like communication and confidence; and the reduction in the number of jobs used as stepping stones into professional industries.

For example, the demand for paralegals and similar professions is likely to be reduced over the coming years as artificial intelligence is trained to handle more administrative tasks. In the UK more than 350,000 paralegals, payroll managers, and bookkeepers could lose their jobs if automated systems can do the same work.

"Traditionally, jobs like these have been a vehicle for social mobility," Sutton Trust research manager Carl Cullinane tells The Verge. Cullinane says that for individuals who weren't able to attend university or get particular qualifications, semi-administrative jobs are often a way into professional industries. But because they don't require more advanced skills, they're likely to be vulnerable to automation, he says.

Similarly, as automation reduces the need for administrative skills, other attributes will become more sought after in the workplace. These include so-called soft skills like confidence, motivation, communication, and resilience. "It's long established that private schools put a lot of effort into making sure their pupils have those sorts of skills," says Cullinane. And these will become even more important in a crowded labor market.

Re-training for new jobs will also become a crucial skill, and it's individuals from wealthier backgrounds who are more able to do so, says the report. This can already be seen in the disparity in post-graduate education, with individuals in the UK from working class or poorer backgrounds far less likely to re-train after university.

The report, which was carried out by the Boston Consulting Group and published this Wednesday, looks specifically at the UK, where it says some 15 million jobs are at risk of automation. But the Sutton Trust says its findings are also relevant to other developed nations, particularly the US, where social mobility is a major problem.

Social mobility is already a big problem in America

One study in 2016 found that America has become significantly less conducive to social mobility over the past few decades. "It is increasingly the case that no matter what your educational background is, where you start has become increasingly important for where you end," one of the study's authors, Michael D. Carr, told The Atlantic last year. Another report found that around half of 30-year-olds in the US earn less than their parents did at the same age, compared to the 1970s, when almost 90 percent earned more.

It's important to note, though, that there is disagreement about how bad the impact of automation on the job market will be. Some reports have suggested that up to 50 percent of jobs in developed countries are at risk, while others point out that only specific tasks will be automated rather than whole professions. Economists also note that new categories of jobs are likely to be created, although exactly what, and how many, is impossible to accurately predict.

The Sutton Trust report also says that there is some reason to be optimistic about the coming wave of automation, particularly if governments can encourage people to train for STEM professions (those involving science, technology, engineering, and mathematics).

"From a social mobility perspective there are two important things about the STEM sector," says Cullinane of the UK job market. "Firstly, there doesn't seem to be a substantial gap in the income background of people taking STEM related subjects, and secondly, there isn't a resulting pay gap for those who come from different backgrounds. If the STEM sector is going to be the main source of growth over the medium to long term, that's a real opportunity to leverage social mobility there."

Work-at-home AI surveillance is a move in the wrong direction – VentureBeat

While we have all been focused on facial recognition as the poster child for AI ethics, another concerning form of AI has quietly emerged and rapidly advanced during COVID-19: AI-enabled employee surveillance at home. Though we are justifiably worried about being watched while out in public, we are now increasingly being observed in our homes.

Surveillance of employees is hardly new. It started in earnest with the scientific management of workers led by Frederick Taylor near the beginning of the 20th century, with time and motion studies to determine the optimal way to perform a job. Through this, business management focused on maximizing control over how people performed work. Application of this theory extends to the current day. A 2019 report from the U.C. Berkeley Labor Center states that algorithmic management introduces "new forms of workplace control, where the technological regulation of workers' performance is granular, scalable, and relentless." There is no slacking off while you are being watched.

Implementation of such surveillance had existed primarily in factory or warehouse settings, such as at Amazon. Recently, the Chinese Academy of Sciences reported that AI is being used on construction sites. These AI-based systems can offer benefits to employees by using computer vision to check whether employees are wearing appropriate safety gear, such as goggles and gloves, before giving them access to a danger area. However, there is also a more nefarious use case. The report said the AI system with facial recognition was hooked up to CCTV cameras and able to tell whether an employee was doing their job or loitering, smoking or using a smartphone.
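
The benign gating use case is simple enough to sketch. In the illustration below, detect_objects is a hypothetical stand-in for any computer-vision detector returning labels with confidence scores; nothing here reflects the actual system described in the report:

```python
# Illustrative sketch of the benign use case: gate access to a danger area
# on detected safety gear from the examples in the text.

REQUIRED_GEAR = {"goggles", "gloves"}
CONFIDENCE_THRESHOLD = 0.8

def detect_objects(frame):
    """Hypothetical detector output: list of (label, confidence) pairs."""
    return [("goggles", 0.93), ("gloves", 0.62), ("phone", 0.55)]

def may_enter(frame) -> bool:
    seen = {label for label, conf in detect_objects(frame)
            if conf >= CONFIDENCE_THRESHOLD}
    missing = REQUIRED_GEAR - seen
    if missing:
        print(f"Access denied, missing gear: {sorted(missing)}")
        return False
    return True

may_enter(frame=None)  # denied: gloves detected below the confidence threshold
```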

Last year, Gartner surveyed 239 large corporations and found that more than 50% were using some type of nontraditional monitoring techniques on their workforce. These included analyzing the text of emails and social media messages, scrutinizing who is meeting with whom, and gathering biometric data. A subsequent Accenture survey of C-suite executives reported that 62% of their organizations were leveraging new tools to collect data on their employees. One monitoring software vendor has noted that every aspect of business is becoming more data-driven, including the people side. Perhaps it's true, as former Intel CEO Andy Grove famously stated, that "only the paranoid survive."

With the onset of COVID-19 and many people working remotely, some employers have turned to productivity management software to keep track of what employees are doing while they work from home. These systems have purportedly seen a sharp increase in adoption since the pandemic began.

A rising tide of employer worry appears to be lifting all the ships. InterGuard, a leader in employee monitoring software, claims three- to four-fold growth in the company's customer base since COVID-19's spread in the U.S. Similarly, Hubstaff and Time Doctor claim interest has tripled. Teramind said 40 percent of its current customers have added more user licenses to their plans. Another firm, aptly named Sneek, said sign-ups surged tenfold at the onset of the pandemic.

The software from these firms operates by tracking activities, whether it is time spent on the phone, the number of emails read and sent, or even the amount of time in front of the computer as determined by screenshot captures, webcam access, and the number of keystrokes. Some algorithmically produce a productivity score for each employee that is shared with management.
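
None of these vendors disclose their scoring formulas, but a toy version makes the mechanics, and the opacity concern, easy to see. The weights and caps below are invented, not any vendor's actual values:

```python
# A toy version of the kind of opaque "productivity score" such tools
# report: a weighted sum of tracked activity, normalized to 0-100. The
# weights are invented here; real vendors do not publish theirs, which is
# part of the ethical concern.
from dataclasses import dataclass

@dataclass
class DailyActivity:
    emails_handled: int
    minutes_on_phone: int
    keystrokes: int
    active_screen_minutes: int

WEIGHTS = {"emails_handled": 0.3, "minutes_on_phone": 0.2,
           "keystrokes": 0.2, "active_screen_minutes": 0.3}
CAPS = {"emails_handled": 60, "minutes_on_phone": 180,
        "keystrokes": 30_000, "active_screen_minutes": 420}

def productivity_score(day: DailyActivity) -> float:
    # Each metric is capped, scaled to [0, 1], weighted, then summed.
    score = sum(WEIGHTS[k] * min(getattr(day, k) / CAPS[k], 1.0) for k in WEIGHTS)
    return round(100 * score, 1)

print(productivity_score(DailyActivity(45, 90, 22_000, 380)))  # ~74.3
```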

Enaible claims its remote employee monitoring Trigger-Task-Time algorithm is "a breakthrough at the intersection of leadership science and artificial intelligence." In an op-ed, the vendor said its software empowers leaders to lead more effectively by providing them with necessary information. In this respect, it appears we have advanced from Taylorism mostly in the sophistication of the technology. A university research fellow shared a blunt assessment, saying these are "technologies of discipline and domination; they are ways of exerting power over employees."

While the ever-present push for productivity is understandable on one level (managers have a right to make reasonable requests of workers about their productivity and to minimize cyber-loafing), such intense observation opens yet another front in the AI-ethics conversation, especially concerns regarding the amount of information collected by monitoring software, how it might be used, and the potential for inherent bias in the algorithms that would influence results.

Monitoring of employees is legal in the U.S. down to the keystroke, based on the Electronic Communications Privacy Act of 1986. But we're now living in an age where monitoring employees means monitoring them at home, which is supposed to be a private environment.

In the 1921 dystopian Russian novel We, which may have influenced the later 1984, all of the citizens live in apartments made entirely of glass to enable perfect surveillance by the authorities. Today we already have AI-powered digital assistants such as Google Home and Amazon Alexa that can monitor what is said at home, though allegedly only after they hear the wake word. Nevertheless, there are numerous examples of these devices listening to and recording other conversations and images, prompting privacy concerns. With home monitoring of employees, we have effectively turned our work computers into another device with eyes and ears, one that does not even require a wake word, adding to home surveillance. These tools can track not only our work interactions but what we say and do on or near our devices. Our at-home lifestyles and non-work conversations could be observed and translated into data that risk managers such as insurers or credit issuers might find illuminating, should employers share this content.

Perhaps work-from-home surveillance is now a fait accompli, an intrinsic part of the modern Information Age that risks the right to privacy of employees within their homes, as well as the office. Already there are employee surveillance product reviews in mainstream media, normalizing the monitoring practice. Nevertheless, in a world where boundaries between work and home have already blurred, the ethics of using AI technologies to monitor employees every move in the guise of productivity enhancement could be a step too far and another topic for potential regulation. The constant AI-powered surveillance risks turning the human workforce into a robotic one.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.

The White House wants to spend hundreds of millions more on AI research – MIT Technology Review

The news: The White House is pumping hundreds of millions more dollars into artificial-intelligence research. In budget plans announced on Monday, the administration bumped funding for AI research at the Defense Advanced Research Projects Agency (DARPA) from $50 million to $249 million and at the National Science Foundation from $500 million to $850 million. Other departments, including the Department of Energy and the Department of Agriculture, are also getting a boost to their funding for AI.

Why it matters: Many believe that AI is crucial for national security. Worried that the US risks falling behind China in the race to build next-gen technologies, security experts have pushed the Trump administration to increase its funding.

Public spending: For now the money will mostly flow to DARPA and the NSF. But $50 million of the NSF's budget has been allocated to education and job training, especially in community colleges, historically black colleges and universities, and minority-serving institutions. The White House says it also plans to double funding of AI research for purposes other than defense by 2022.

Joe Rogan and Steve Jobs Have a 20-Minute Chat in AI-Powered Podcast – HYPEBEAST

Artificial intelligence has allowed us to simulate all kinds of situations through computer systems. Some of its main applications are language processing and speech recognition, and now, through play.ht and podcast.ai, we're actually able to see how far the technology has come by experiencing a conversation with someone who is not even on Earth anymore.

In an entirely AI-generated podcast, podcast.ai has created a full interview between Joe Rogan and Steve Jobs. While the first bit of the podcast is clunky, with weird pauses and awkward laughing, it does start to move into real conversation, touching on faith, tech companies, and drugs; at one point the AI-generated Jobs compares Adobe's services to a car where you have to buy all four wheels separately.

The crazy thing is that some parts begin to sound believable and actually keep you listening as you start to connect with what they are saying. This could be reinforced by the prevalence of Joe Rogan in the current podcast sphere, and the general curiosity of witnessing what Steve Jobs would have said if the two had ever met. Have a listen below to experience the AI podcast for yourself.

In other tech news, an unopened first-generation Apple iPhone from 2007 was auctioned for $39,000 USD.

The next phase of AI is generative – CIO Dive

Enterprises have long sought AI for its ability to supercharge a workforce, picking up slack through automated tasks and offering a cost-effective alternative to humans for repetitive labor.

The next act in enterprise AI sees the technology becoming a standalone maker: generating synthetic data to train its own models or identifying groundbreaking products as solutions mature and adoption widens, as showcased in Gartner's Hype Cycle for Emerging Technologies 2021 report, published Monday.

Called "generative AI," the technology is set to reach the plateau of productivity in the next two to five years. Commercial implementations of generative AI are already at play in the enterprise and, as the technology advances through the hype cycle, non-viable use cases will fade, according to Brian Burke, research VP at Gartner.

Generative AI works by using algorithms to create a "realistic, novel version of whatever they've been trained on," Burke said. Algorithms can identify new materials with specific properties and technologies that generate synthetic data to augment research, among other use cases.
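
The core idea can be sketched in a few lines. Real generative models (GANs, variational autoencoders, large language models) are far more sophisticated, but the pattern of learning from scarce real data and then sampling novel, realistic records is the same; the numbers below are invented:

```python
# Minimal sketch of the "synthetic data" idea: learn the statistics of
# scarce real data, then sample new, realistic-but-novel records to augment
# training. A Gaussian fit keeps the principle visible.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=[70.0, 120.0], scale=[8.0, 15.0], size=(50, 2))  # scarce real records

mu, sigma = real.mean(axis=0), real.std(axis=0)   # "train" the generator
synthetic = rng.normal(mu, sigma, size=(500, 2))  # generate novel samples

augmented = np.vstack([real, synthetic])          # train downstream models on this
print(augmented.shape)  # (550, 2)
```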

An early implementation of generative AI let companies identify marketing content with a higher success rate. Today, capabilities have evolved, and AI can produce its own data and generate results from it in critical spaces such as the pharmaceutical industry.

During the pandemic, researchers used AI to augment data and help identify antiviral compounds and therapeutic research for treating COVID-19. The technology helped generate more data to support algorithms, given the novelty of the disease and HIPAA regulations.

Using AI to create can be a big differentiator for companies, said Rodrigo Liang, co-founder and CEO of SambaNova Systems. Competition can leave organizations no choice but to catch up with markets and adopt generative AI.

Despite the evolution of AI, most organizations continue to struggle with adoption.

Whether it's in-house AI or a vendor-made solution, technologies that fail to be adopted by the whole organization amount to wasted resources. AI maturity levels vary in the enterprise, and just 20% of organizations are at the highest levels of AI adoption and deployment, according to Cognizant.

Pressure from competitors and potential financial upside are making companies double down on AI financially, too.

The number of companies with AI budgets ranging from $500,000 to $5 million rose 55% year over year, according to Appen's State of AI and Machine Learning report published in June.

AI use will shift for the enterprise as it moves away from static models to more dynamic technologies.

In the past, AI models trained on a specific outcome could learn to perform a task but not necessarily get better over time, Burke said. "What we've seen evolve in terms of AI is that models are becoming more dynamic, and the data that supports those models becoming more dynamic."

Executives also struggle to account for the ethical dimensions of AI. Businesses are more likely to check an algorithm for unexpected outcomes than their fairness or bias implications, according to the AI Adoption in the Enterprise report published by O'Reilly.

"Machine learning, data science, algorithmic approaches in general, and, yes, AI, have enormous potential to drive innovation," said Christian Beedgen, co-founder and CTO, Sumo Logic, in an email. "But like with all innovation, what really matters is how humans apply this potential."

Companies have turned to explainable AI as a way to contend with the decisions an algorithm makes, and the ethical implications of those decisions.

"As AI continues to seep into our everyday lives, it is up to humans to deeply consider the ethics behind every program they create and whether or not the ends justify the means," said Beedgen.

76% of tech leaders will increase hiring for AI, cognitive solutions, report says – TechRepublic

The growth of cognitive technologies such as artificial intelligence (AI) will lead more than 75% of tech leaders to increase hiring in IT to manage deployments, according to a new report from KPMG. In addition to hiring in IT, 64% of tech leaders said they'd increase hiring in middle management, 62% in customer service, 60% in sales, and 43% in senior management, the report said.

"The tech CEOs' commitment to hire across the board shows the strategic value they see in cognitive technologies, and they are building the organizational structure their company will require to execute their strategy," Tim Zanni, global and US chair of KPMG's Technology, Media and Telecommunications practice, said in a press release.

By the year 2021, spending on AI and cognitive technologies is expected to hit $46 billion, according to IDC data. These technologies are making waves in the form of mobile assistants and chat bots, but they are poised to further disrupt even more industries.

SEE: The Machine Learning and Artificial Intelligence Bundle (TechRepublic Academy)

According to 28% of respondents, AI will lead to better productivity and improved efficiency. Some 16% said it could lead to cost reductions, 14% said increased profitability, and 10% predicted faster innovation cycles. Better customer loyalty and faster time to market were also both predicted as outcomes of AI's growth.

Despite their impact on hiring, cognitive technologies were only the third most impactful tech trend noted in the report, cited by 10% of respondents. The Internet of Things (IoT) took first place with 20%, and robotics came in second with 11%.

A quarter of respondents felt that IoT would lead to improved business efficiencies and higher productivity. IoT would also lead to faster innovation cycles, 19% of respondents said, and 13% said it could bring cost reductions too.

"As we evolve as a networked society, IoT will transform the way we interact with technology. From an enterprise perspective this evolution will require a new framework to manage the opportunities and risk," Peter Mercieca, a management consulting leader at KPMG, said in the report.

Some 36% of respondents reported that robotics would lead to improved effectiveness, and the technology also ranked high in speeding innovation cycles. The report noted that robots have long been common in manufacturing but are becoming more collaborative.

These technologies are making a big splash in the enterprise, and their pace of disruption will likely quicken, the report said. It recommends that company leaders revisit their strategies for new tech investments, including investment and M&A strategies.

The Top 100 AI Startups Out There Now, and What They’re Working On – Singularity Hub

New drug therapies for a range of chronic diseases. Defenses against various cyber attacks. Technologies to make cities work smarter. Weather and wildfire forecasts that boost safety and reduce risk. And commercial efforts to monetize so-called deepfakes.

What do all these disparate efforts have in common? They're some of the solutions that the world's most promising artificial intelligence startups are pursuing.

Data research firm CB Insights released its much-anticipated fourth annual list of the top 100 AI startups earlier this month. The New York-based company has become one of the go-to sources for emerging technology trends, especially in the startup scene.

About 10 years ago, it developed its own algorithm to assess the health of private companies using publicly available information and non-traditional signals (think social media sentiment, for example), thanks to more than $1 million in grants from the National Science Foundation.

It uses that algorithm-generated data from what it calls a company's Mosaic score, pulling together information on market trends, money, and momentum, along with other details ranging from patent activity to the latest news analysis, to identify the best of the best.
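
CB Insights' actual Mosaic formula is proprietary, so the following is guesswork in structure only: normalize each class of signal across companies, then combine the three pillars named above into one composite score. All figures are invented:

```python
# Hypothetical composite-score sketch, not CB Insights' actual formula:
# min-max normalize heterogeneous signals across companies, then average
# the "market, money, momentum" pillars into one health score.
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

companies = ["A", "B", "C"]
momentum = normalize([120, 45, 80])    # e.g., news and social sentiment signals
money    = normalize([266, 350, 15])   # e.g., funding raised ($M)
market   = normalize([0.9, 0.6, 0.4])  # e.g., sector growth indicators

mosaic_like = {c: round((mo + mn + mk) / 3, 2)
               for c, mo, mn, mk in zip(companies, momentum, money, market)}
print(mosaic_like)  # {'A': 0.92, 'B': 0.47, 'C': 0.16}
```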

"Our final list of companies is a mix of startups at various stages of R&D and product commercialization," said Deepashri Varadharajan, a lead analyst at CB Insights, during a recent presentation on the most prominent trends among the 2020 AI 100 startups.

About 10 companies on the list are among the world's most valuable AI startups. For instance, there's San Francisco-based Faire, which has raised at least $266 million since it was founded just three years ago. The company offers a wholesale marketplace that uses machine learning to match local retailers with goods that are predicted to sell well in their specific location.

Another startup valued at more than $1 billion, referred to as a unicorn in venture capital speak, is Butterfly Network, a company on the East Coast that has figured out a way to turn a smartphone into an ultrasound machine. Backed by $350 million in private investments, Butterfly Network uses AI to power the platform's diagnostics. A more modestly funded San Francisco startup called Eko is doing something similar for stethoscopes.

In fact, there are more than a dozen AI healthcare startups on this year's AI 100 list, representing the most companies of any industry on the list. In total, investors poured about $4 billion into AI healthcare startups last year, according to CB Insights, out of a record $26.6 billion raised by all private AI companies in 2019. Since 2014, more than 4,300 AI startups in 80 countries have raised about $83 billion.

One of the most intensive areas remains drug discovery, where companies unleash algorithms to screen potential drug candidates at a speed and breadth that were impossible just a few years ago. It has led to the discovery of a new antibiotic to fight superbugs. There's even a chance AI could help fight the coronavirus pandemic.

There are several AI drug discovery startups among the AI 100: San Francisco-based Atomwise claims its deep convolutional neural network, AtomNet, screens more than 100 million compounds each day. Cyclica is an AI drug discovery company in Toronto that just announced it would apply its platform to identify and develop novel cannabinoid-inspired drugs for neuropsychiatric conditions such as bipolar disorder and anxiety.

And then there's OWKIN out of New York City, a startup that uses a type of machine learning called federated learning. Backed by Google, the company's AI platform helps train algorithms without the sensitive patient data ever being shared, while still yielding the sort of valuable insights researchers need for designing new drugs or even selecting the right populations for clinical trials.

Privacy and data security are the focus of a number of AI cybersecurity startups, as hackers attempt to leverage artificial intelligence to launch sophisticated attacks while also trying to fool the AI-powered systems rapidly coming online.

"I think this is an interesting field because it's a bit of a cat and mouse game," noted Varadharajan. "As your cyber defenses get smarter, your cyber attacks get even smarter, and so it's a constant game of who's going to match the other in terms of tech capabilities."

Few AI cybersecurity startups match Silicon Valley-based SentinelOne in terms of private capital. The company has raised more than $400 million, with a valuation of $1.1 billion following a $200 million Series E earlier this year. The company's platform automates what's called endpoint security, referring to laptops, phones, and other devices at the end of a centralized network.

Fellow AI 100 cybersecurity companies include Blue Hexagon, which protects the edge of the network against malware, and Abnormal Security, which stops targeted email attacks, both out of San Francisco. Just down the coast in Los Angeles is Obsidian Security, a startup offering cybersecurity for cloud services.

Deepfakes, videos and other types of AI-manipulated media where faces or voices are synthesized in order to fool viewers or listeners, have posed a different kind of ongoing cybersecurity risk. However, some firms are swapping malicious intent for benign marketing and entertainment purposes.

Now anyone can be a supermodel thanks to Superpersonal, a London-based AI startup that has figured out a way to seamlessly swap a user's face onto a fashionista modeling the latest threads on the catwalk. The most obvious use case is for shoppers to see how they will look in a particular outfit before taking the plunge on a plunging neckline.

Another British company, called Synthesia, helps users create videos where a talking head will deliver a customized speech or even talk in a different language. The startup's claim to fame was releasing a campaign video for the NGO Malaria Must Die showing soccer star David Beckham speaking in nine different languages.

There's also a Seattle-based company, Wellsaid Labs, which uses AI to produce voice-over narration where users can choose from a library of digital voices with human pitch, emphasis, and intonation. Because every narrator sounds just a little bit smarter with a British accent.

Speaking of smarter: A handful of AI 100 startups are helping create the smart city of the future, where a digital web of sensors, devices, and cloud-based analytics ensures that nobody is ever stuck in traffic again, or caught without an umbrella at the wrong time. At least that's the dream.

A couple of them are directly connected to Google subsidiary Sidewalk Labs, which focuses on tech solutions to improve urban design. A company called Replica was spun out just last year. It's a sort of SimCity for urban planning. The San Francisco startup uses location data from mobile phones to understand how people behave and travel throughout a typical day in the city. Those insights can then help city governments, for example, make better decisions about infrastructure development.

Denver-area startup AMP Robotics gets into the nitty gritty details of recycling by training robots on how to recycle trash, since humans have largely failed to do the job. The U.S. Environmental Protection Agency estimates that only about 30 percent of waste is recycled.

Some people might complain that weather forecasters don't even do that well when trying to predict the weather. An Israeli AI startup, ClimaCell, claims it can forecast rain block by block. While the company taps the usual satellite and ground-based sources to create weather models, it has developed algorithms to analyze how precipitation and other conditions affect signals in cellular networks. By analyzing changes in microwave signals between cellular towers, the platform can predict the type and intensity of precipitation down to street level.
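
The underlying physics is well documented: rain attenuates microwave links following an approximate power law (ITU-R P.838 tabulates coefficients by frequency and polarization), so observed attenuation can be inverted into a rain rate. The coefficients below are illustrative values roughly in the range used for links around 20 GHz, not ClimaCell's proprietary ones:

```python
# Invert the standard rain-attenuation power law, gamma = k * R**alpha,
# where gamma is specific attenuation (dB/km) and R is rain rate (mm/h).
# k and alpha depend on link frequency and polarization (ITU-R P.838);
# the defaults here are illustrative only.

def rain_rate_mm_per_h(attenuation_db: float, path_km: float,
                       k: float = 0.09, alpha: float = 1.05) -> float:
    gamma = attenuation_db / path_km     # specific attenuation, dB/km
    return (gamma / k) ** (1.0 / alpha)  # solve gamma = k * R**alpha for R

# A 3 km tower-to-tower link losing an extra 6 dB during a storm:
print(round(rain_rate_mm_per_h(6.0, 3.0), 1), "mm/h")  # ~19.1 mm/h, heavy rain
```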

And those are just some of the highlights of what some of the worlds most promising AI startups are doing.

"You have companies optimizing mining operations, warehouse logistics, insurance, workflows, and even working on bringing AI solutions to designing printed circuit boards," Varadharajan said. "So a lot of creative ways in which companies are applying AI to solve different issues in different industries."

AI Storm Brewing – SemiEngineering

The acceleration of artificial intelligence will have big social and business implications.

AI is coming. Now what?

The answer isn't clear, because after decades of research and development, AI is finally starting to become a force to reckon with. The proof is in the M&A activity underway right now. Big companies are willing to pay huge sums to get out in front of this shift.

Here is a list of just some of the AI acquisitions announced or completed over the past few years:

Microsoft: Maluuba (natural language processing/reinforcement learning) and Netbreeze (social media monitoring).
Google: DeepMind Technologies (famous for beating the world Go champion), Moodstocks (image recognition), Clever Sense (social recommendations), and Api.ai (natural language processing).
Facebook: Face.com (facial recognition).
Intel: Itseez (machine vision), Nervana Systems (machine learning), and Movidius (machine vision).
Apple: Turi and Tuplejump (both machine learning).
Twitter: Magic Pony, Whetlab, and Madbits (all machine learning).
Salesforce: MetaMind (natural language processing) and PredictionIO (machine learning).
GE: Bit Stew (analytics) and Wise.io (machine learning).

The list goes on and on. AI has turned into an arms race among big companies, which are pouring billions of dollars into this field after a lull that lasted nearly a quarter of a century. The last big explosion in AI research was in the 1980s and early 1990s, when most companies concluded they did not have the technology resources (compute power, memory, and throughput) to develop effective AI solutions.

IBM was the big holdout, quietly developing Watson as a for-lease compute platform and showcasing it on Jeopardy (it won) and at the University of North Carolina's UNC Lineberger cancer treatment center, where Watson proved its mettle with a team of trained oncologists. Others are racing to catch up.

Put in perspective, there are several trends emerging. First, while AI is not going to take over the world like HAL in the movie classic 2001: A Space Odyssey, it will be a disruptive force that can eliminate high-paying as well as low-paying jobs. The more specialized and higher-paid the job, the greater the ROI. And as eSilicon Chairman Seth Neiman points out in an interview with Semiconductor Engineering, this can happen with breathtaking speed.

Second, as companies begin understanding how AI can be used, it will become obvious there is no single AI machine or architecture. When the IoT term was first introduced (Kevin Ashton, co-founder of the Auto-ID Center, believes he first coined the term in a 1999 presentation, although it was Cisco that really made the term a household name), it was considered a single entity. It is now viewed as a general term that encompasses many different approaches and vertical market segments, each with its own set of architectures that may or may not interact with other market segments. AI will follow the same evolutionary path, splintering into architectures that are tailored for multiple markets.

And third, while the tech industry is still trying to wrap its arms around what this will mean, it's clear that AI is here to stay this time. The investments by both companies and governments in this field will keep this part of the market well-funded for years to come.

However, what's not clear yet is how this round of technology will mesh with society. In the past, most technology that was developed was viewed as helpful for a broad range of people. Rather than replacing people, it freed them from mundane tasks to do more creative tasks or to specialize further. Unlike previous technology booms, AI has the potential to displace people at all levels: truck drivers, business consultants, lawyers, accountants, medical specialists with a dozen years of schooling.

Rather than sitting back and waiting for standards, it's imperative that tech groups at every level get out in front of this shift and help develop policies that will guide future development. In the tech industry there is always a level of hype surrounding architectural changes, but this is hardly business as usual. Done right, AI can be a big opportunity for years to come, driving continued advances in both semiconductor technology and software. Done wrong, it can have a devastating impact on jobs, and on how people use and view technology, for years to come.

Are Talking Speakers the First AI Bubble? – Investopedia


Echo speaker, powered by its voice-activated virtual assistant Alexa, all of the tech heavy hitters are getting into the market, with Apple Inc.'s (AAPL) HomePod coming out this fall. While lots ...

Conversational AI and the road ahead – TechCrunch

Katherine Bailey is principal data scientist at Acquia and a Crunch Network contributor.

In recent years, we've seen an increasing number of so-called intelligent digital assistants being introduced on various devices. At the recent CES, both Hyundai and Toyota announced new in-car assistants. Although the technology behind these applications keeps getting better, there's still a tendency for people to be disappointed by their capabilities: the expectation of intelligence is not being met.

Consider a classic test of machine understanding, known as a Winograd schema: "The city councilmen refused the demonstrators a permit because they feared violence."

What does the word "they" refer to here: the councilmen or the demonstrators? What if instead of "feared" we wrote "advocated"? This changes what we understand by the word "they." Why? It is clear to us that councilmen are more likely to fear violence, whereas demonstrators are more likely to advocate it. This information, which is vital for disambiguating the pronoun "they," is not in the text itself, which makes these problems extremely difficult for AI programs.

The first ever Winograd Schema Challenge was held last July, and the winning algorithm achieved a score on the challenge that was a bit better than random.

There's a technique for representing the words of a language that's proving incredibly useful in many NLP tasks, such as sentiment analysis and machine translation. The representations are known as word embeddings: mathematical representations of words, trained from millions of examples of word usage, that capture meaning by capturing relationships between words. To use a classic example, a good set of representations would capture the relationship "king is to man as queen is to woman" by ensuring that a particular mathematical relationship holds between the respective vectors (specifically, king − man + woman = queen).
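
The arithmetic is easy to demonstrate with toy vectors. Real embeddings have hundreds of dimensions learned from huge corpora, but the algebra is identical; these four-dimensional vectors are hand-made for illustration:

```python
# Toy demonstration of embedding analogy arithmetic with hand-made vectors.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "man":   np.array([0.1, 0.8, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
    "queen": np.array([0.9, 0.1, 0.9, 0.3]),
    "apple": np.array([0.2, 0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land closest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(emb[w], target))
print(best)  # queen
```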

Such vectorized representations are at the heart of Google's new translation system, although they are representations of entire sentences, not just words. The new system reduces translation errors by 55 to 85 percent on several major language pairs and can perform zero-shot translation: translation between language pairs for which no training data exists.

Given all this, it may seem surprising to hear Oren Etzioni, a leading AI researcher with a particular focus on NLP, quip: "When AI can't determine what 'it' refers to in a sentence, it's hard to believe that it will take over the world."

So, AI can perform adequate translations between language pairs it was never trained on, but it can't determine what "it" refers to? How can this be?

When hearing about how vectorized representations of words and sentences work, it can be tempting to think they really are capturing meaning in the sense that there is some understanding happening. But this would be a mistake. The representations are derived from examples of language use. Our use of language is driven by meaning. Therefore, the derived representations naturally reflect that meaning. But the AI systems learning such representations have no direct access to actual meaning.

For the purposes of many NLP tasks, lack of access to actual meaning is not a serious problem.

Not understanding what "it" refers to in a sentence is not going to have an enormous effect on translation accuracy; it might mean il is used instead of elle when translating into French, but that's probably not a big deal.

However, problems arise when trying to create a conversational AI:

Screenshot from the sample bot you can create with IBM's conversation service following this tutorial.

Understanding the referents of pronouns is a pretty important skill for holding conversations. As stated above, the training data used to train AIs that perform NLP tasks does not include the necessary information for disambiguating these words. That information comes from knowledge about the world. Whether it's necessary to actually act as an embodied entity in the world, or simply to have vast amounts of common sense knowledge programmed in, to glean the necessary information is still an open question. Perhaps it's something in between.

Terry Winograd's early Natural Language Understanding program SHRDLU restricted itself to statements about a world made up of blocks. By Ksloniewski (Own work) CC BY-SA 4.0, via Wikimedia Commons

But there are ways of enhancing such conversational AI experiences even without solving natural language understanding (which may take decades, or longer). The image above, showing a bot not understanding "now turn them back on" when the immediately prior request was "turn off the windshield wipers," demonstrates how disappointing it is when a totally unambiguous pronoun cannot be understood. That is definitely solvable with today's technology.
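
A minimal sketch of that kind of fix: keep dialogue state and resolve an unambiguous pronoun to the most recently mentioned entity. The command parsing below is deliberately naive; production assistants use trained language understanding, but the state-tracking idea is the same:

```python
# Resolve "them"/"it" against the last-mentioned device using dialogue state.
class Assistant:
    def __init__(self):
        self.last_entity = None  # most recently mentioned device

    def handle(self, utterance: str) -> str:
        words = utterance.lower().replace("turn", "").split()
        if "them" in words or "it" in words:
            entity = self.last_entity  # pronoun: fall back to dialogue state
            if entity is None:
                return "Sorry, what should I act on?"
        else:
            entity = " ".join(w for w in words
                              if w not in {"off", "on", "the", "now", "back"})
            self.last_entity = entity  # remember for later pronouns
        action = "on" if "on" in words else "off"
        return f"Turning {action} the {entity}."

bot = Assistant()
print(bot.handle("turn off the windshield wipers"))  # Turning off the windshield wipers.
print(bot.handle("now turn them back on"))           # Turning on the windshield wipers.
```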

Google is helping fund AI news writers in the UK and Ireland – The Verge

Google is giving the Press Association news agency a grant of £706,000 ($806,000) to start writing stories with the help of artificial intelligence. The money is coming out of the tech giant's Digital News Initiative fund, which supports digital journalism in Europe. The PA supplies news stories to media outlets all over the UK and Ireland, and will be working with a startup named Urbs Media to produce 30,000 local stories a month with the help of AI.

The editor-in-chief of the Press Association, Peter Clifton, explained to The Guardian that the AI articles will be the product of collaboration with human journalists. Writers will create detailed story templates for topics like crime, health, and unemployment, and Urbs Media's Radar tool (it stands for Reporters And Data And Robots) will fill in the blanks and help localize each article. This sort of workflow has been used by media outlets for years, with the Los Angeles Times using AI to write news stories about earthquakes since 2014.
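
Stripped to its core, that workflow is a template a journalist writes once, filled per locality from a public dataset. The sketch below is illustrative; the template and figures are invented, not Urbs Media's:

```python
# Template-plus-data news generation: one human-written template, one fill
# per locality from a public dataset. All figures here are placeholders.
TEMPLATE = ("Unemployment in {area} {direction} to {rate}% last quarter, "
            "{comparison} the national average of {national}%.")

def localize(row: dict, national: float) -> str:
    return TEMPLATE.format(
        area=row["area"],
        direction="rose" if row["change"] > 0 else "fell",
        rate=row["rate"],
        comparison="above" if row["rate"] > national else "below",
        national=national,
    )

data = [{"area": "Leeds", "rate": 4.8, "change": 0.3},
        {"area": "Cardiff", "rate": 3.6, "change": -0.2}]
for row in data:
    print(localize(row, national=4.1))
```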

"Skilled human journalists will still be vital in the process," said Clifton, "but Radar allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually."

The money from Google will also be used to make tools for scraping information from public databases in the UK, like those generated by local councils and the National Health Service. The Radar software will also auto-generate graphics for stories, as well as add relevant videos and pictures. The software will start being used from the beginning of next year.

Some reporters in the UK, though, are skeptical about the new scheme. Tim Dawson, president of the National Union of Journalists, told The Guardian: "The real problem in the media is too little bona fide reporting. I don't believe that computer whizzbangery is going to replace that. What I'm worried about in my capacity as president of the NUJ is something that ends up with third-rate stories which look as if they are something exciting, but are computer-generated so [news organizations] can get rid of even more reporters."

The Dark Side of Big Techs Funding for AI Research – WIRED

Last week, prominent Google artificial intelligence researcher Timnit Gebru said she was fired by the company after managers asked her to retract or withdraw her name from a research paper, and she objected. Google maintains that she resigned, and Alphabet CEO Sundar Pichai said in a company memo on Wednesday that he would investigate what happened.

The episode is a pointed reminder of tech companies' influence and power over their field. AI underpins lucrative products like Google's search engine and Amazon's virtual assistant Alexa. Big companies pump out influential research papers, fund academic conferences, compete to hire top researchers, and own the data centers required for large-scale AI experiments. A recent study found that the majority of tenure-track faculty at four prominent universities who disclose funding sources had received backing from Big Tech.

Ben Recht, an associate professor at the University of California, Berkeley, who has spent time at Google as visiting faculty, says his fellow researchers sometimes forget that companies' interest doesn't stem only from a love of science. "Corporate research is amazing, and there have been amazing things that came out of Bell Labs and PARC and Google," he says. "But it's weird to pretend that academic research and corporate research are the same."

Ali Alkhatib, a research fellow at the University of San Francisco's Center for Applied Data Ethics, says the questions raised by Google's treatment of Gebru risk undermining all of the company's research. "It feels precarious to cite because there may be things behind the scenes, which they weren't able to talk about, that we learn about later," he says.

Alkhatib, who previously worked in Microsoft's research division, says he understands that corporate research comes with constraints. But he would like to see Google make visible changes to win back trust from researchers inside and outside the company, perhaps by insulating its research group from other parts of Google.

The paper that led to Gebru's exit from Google highlighted ethical questions raised by AI technology that works with language. Google's head of research, Jeff Dean, said in a statement last week that it "didn't meet our bar for publication."

Gebru has said managers may have seen the work as threatening to Google's business interests, or used it as an excuse to remove her for criticizing the lack of diversity in the company's AI group. Other Google researchers have said publicly that Google appears to have used its internal research review process to punish her. More than 2,300 Google employees, including many AI researchers, have signed an open letter demanding the company establish clear guidelines on how research will be handled.

Meredith Whittaker, faculty director at New York University's AI Now Institute, says what happened to Gebru is a reminder that, although companies like Google encourage researchers to consider themselves independent scholars, corporations prioritize the bottom line above academic norms. "It's easy to forget, but at any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest," she says.

Whittaker worked at Google for 13 years but left in 2019, saying the company had retaliated against her for organizing a walkout over sexual harassment and had sought to undermine her work raising ethical concerns about AI. She helped organize employee protests against an AI contract with the Pentagon that the company ultimately abandoned, although it has taken up other defense contracts.

Machine learning was an obscure dimension of academia until around 2012, when Google and other tech companies became intensely interested in breakthroughs that made computers much better at recognizing speech and images.

The search and ads company, quickly followed by rivals such as Facebook, hired and acquired leading academics, and urged them to keep publishing papers in between work on company systems. Even traditionally tight-lipped Apple pledged to become more open with its research, in a bid to lure AI talent. Papers with corporate authors and attendees with corporate badges flooded the conferences that are the field's main publication venues.

Podcast: Doctors have to think about sex; AI text generators spread ‘fake news’? Coffee can indeed make you poop – Genetic Literacy Project

An ER physician says doctors have to consider biological sex to properly care for their patients. Coffee can send some people to the bathroom, but probably not for the reason we've been led to believe. AI-powered text generators can write realistic news stories, fueling concerns that the technology will encourage the spread of misinformation online.

Join geneticist Kevin Folta and GLP editor Cameron English on this episode of Science Facts and Fallacies as they break down these latest news stories:

When physicians don't properly consider biological sex, patients are prescribed incorrect treatments and suffer entirely preventable consequences, says Alyson McGregor, an Associate Professor of Emergency Medicine at The Warren Alpert Medical School of Brown University.

The problem runs through our health care system, affecting millions of patients, and stems from the fact that doctors often prescribe multiple medications to female patients without recognizing female sex as an independent risk factor for serious drug interactions, McGregor notes. This occurs because women are more likely to have multiple physicians prescribing medications, each possibly unaware of all the relevant drugs unless the patient reports them.

Is there a way to correct the situation and prevent needless suffering?

Scientists and educators spend a considerable amount of time combating the spread of misinformation online, and their jobs may get much harder in the coming years as text generators powered by artificial intelligence become more widely used. These applications can perform word association, answer questions and, perhaps most importantly, comprehend related concepts.

The latest iteration of the technology developed by OpenAI was able to write 200-500 word sample news articles that were difficult to distinguish from news reports written by humans. There are some inherent risks in the technology, but AI-powered text generators are also poised to do a lot of good.

It's a common joke you've probably heard in your favorite movie or TV show: that first morning cup of coffee makes you poop. While it may not be as universal as implied by pop culture, this reaction to coffee is real. Caffeine might be one of the culprits. However, multiple (albeit small) studies show that coffee stimulates several physiological responses that can send you to the bathroom in short order.

Subscribe to the Science Facts and Fallacies Podcast on iTunes and Spotify.

Kevin M. Folta is a professor in the Horticultural Sciences Department at the University of Florida. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the GLP's managing editor. Follow him on Twitter @camjenglish

Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firm's Website – JD Supra

Artificial intelligence (AI) is powerful, and the use of it for content generation is on the rise. In fact, some experts estimate that as much as 90 percent of online content may be generated by AI algorithms by 2026.

Many of the popular AI content generators produce well-written, informative content. But is it the right choice for your firm? Before you decide, let's consider the pros and cons of using this unique sort of copy in your digital marketing.

This article explains how AI content generators work, the pros and cons of AI-generated content, and a few tips for utilizing AI content in your digital marketing workflow.

Consumer-facing artificial intelligence tools are pretty straightforward, as far as the consumer is concerned. You provide some inputs, and the machine provides some outputs.

Here's how it works with content writing. You generally provide the AI generator with a topic and keywords. You can usually select the format you'd like the output to take, such as a blog post or teaser copy. Then, it's as simple as clicking "Go."

The content generator will scrape the web and draft copy for your needs. Some tools can take existing content and rewrite it, which can make content marketing a lot easier.
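At the code level, that workflow reduces to something like the sketch below. The client function, endpoint URL, and parameters are hypothetical, invented purely for illustration; no specific vendor's API is implied:

```python
# Hypothetical sketch of the topic-and-keywords workflow described above.
# The endpoint URL and request shape are invented for illustration.
import requests

def generate_copy(topic: str, keywords: list[str], fmt: str = "blog_post") -> str:
    """Send a topic, keywords, and output format to a (hypothetical) generator."""
    response = requests.post(
        "https://api.example-content-tool.com/v1/generate",  # placeholder URL
        json={"topic": topic, "keywords": keywords, "format": fmt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]

draft = generate_copy(
    topic="What to do after a car accident",
    keywords=["personal injury", "insurance claim", "statute of limitations"],
)
print(draft[:200])
```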

Not all AI content generators cost money, but you'll need to pay something to access the better tools, or to produce a lot of content.

If you're excited about the possibilities, great! There are some significant benefits to AI content generators.

In short, AI content tools can quickly produce natural-sounding copy at a fraction of the cost of paying a real copywriter.

There are several important drawbacks to consider with AI-generated content. Speed and cost aren't everything when it comes to content generation.

As for the cons: AI tools can be hit-or-miss when it comes to empathy and accuracy, so law firms should be very careful when publishing this type of content. There are also serious SEO concerns with using AI content.

Overall, it's clear that AI-generated content can provide value. The question is how best to incorporate AI content into your digital marketing efforts.

Here are a few best practices if you choose to use AI-generated content.

All AI-generated content should be reviewed by a real human being prior to publication. We recommend hiring a legal professional to review and edit AI copy. A copywriter can help smooth the rough edges, too. Because the content is already written, the number of hours you'll pay these professionals for should be minimal.

Don't use AI-generated content on your website. This type of tool should be a last resort. If you do use machine-generated copy on your website, make sure to block it from being crawled to avoid search engine penalties. Your website developer can advise on the best way to do this.

Do not hire an agency that brags about AI content as a core strategy. SEO and web development companies should be very aware of the risks that come with using AI content. If they suggest AI-generated content, ask them how they plan to protect your firm against search engine penalties, and don't work with them if they don't have a good answer.

Our current position is that AI-generated content can be helpful for short blurbs, such as newsletters to clients. AI content should be deployed only with human oversight.

We recommend against using AI-generated content for website copy. If it must be used, it's important to work with a developer or agency that understands how to communicate with search engines so you aren't penalized for using AI tools.
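On the crawl-blocking point above: one standard mechanism is serving machine-generated pages with a noindex robots directive. A minimal sketch using Flask follows; the app, route, and renderer are placeholders for illustration, not a definitive implementation:

```python
# Minimal sketch: serve AI-generated pages with a `noindex` robots
# directive so compliant crawlers skip them. The route and renderer
# here are placeholders for illustration.
from flask import Flask, make_response

app = Flask(__name__)

def render_draft(slug: str) -> str:
    # Placeholder: in practice this would load your machine-generated copy.
    return f"<html><body><h1>Draft: {slug}</h1></body></html>"

@app.route("/ai-drafts/<slug>")
def ai_draft(slug: str):
    response = make_response(render_draft(slug))
    # X-Robots-Tag asks search engines not to index or follow this page.
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```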


Read the original post:

Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firm's Website - JD Supra

Tinder wants AI to set you up on a date – BBC News



View original post here:

Tinder wants AI to set you up on a date - BBC News

The Android Of Self-Driving Cars Built A 100,000X Cheaper Way To Train AI For Multiple Trillion-Dollar Markets – Forbes

Level 5 self-driving means autonomous cars can drive themselves anywhere, at any time, in any conditions.

How do you beat Tesla, Google, Uber and the entire multi-trillion dollar automotive industry with massive brands like Toyota, General Motors, and Volkswagen to a full self-driving car? Just maybe, by finding a way to train your AI systems that is 100,000 times cheaper.

It's called Deep Teaching.

Perhaps not surprisingly, it works by taking human effort out of the equation.

And Helm.ai says it's the key to unlocking autonomous driving, including cars driving themselves on roads they've never seen ... using just one camera.

"Our Deep Teaching technology trains without human annotation or simulation," Helm.ai CEO Vladislav Voroninski told me recently on the TechFirst podcast. "And it's on a similar level of effectiveness as supervised learning, which allows us to actually achieve higher levels of accuracy as well as generalization ... than the traditional methods."

Artificial intelligence runs on data the way an army marches on its stomach. Most self-driving car projects use annotated data, Voroninski says.

That means thousands upon thousands of images and videos that a human has viewed and labeled, perhaps identifying things like "lane" or "human" or "truck." Labeling costs on the order of dollars per image, which means the cost of annotation becomes the bottleneck.

"The cost of annotation is about a hundred thousand X more than the cost of simply processing an image through a GPU," Voroninski says.

And that means that even with budgets of tens of billions of dollars, you're going to be challenged to drive enough training data through your AI to make it smart enough to approach level five autonomy: full capability to drive anywhere at any time in any conditions.
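To make that ratio concrete, here is a rough back-of-envelope sketch. Both unit costs are assumed ballpark figures for illustration, not numbers from Helm.ai:

```python
# Back-of-envelope comparison of human annotation vs. raw GPU processing.
# Both unit costs below are assumptions for illustration, not Helm.ai figures.
images = 100_000_000                 # frames in a hypothetical training corpus
cost_per_human_label = 1.00          # assume ~$1 per human-annotated image
cost_per_gpu_pass = 0.00001          # assume ~$0.00001 of GPU time per image

print(f"human annotation: ${images * cost_per_human_label:,.0f}")
print(f"GPU passes only:  ${images * cost_per_gpu_pass:,.2f}")
print(f"ratio: {cost_per_human_label / cost_per_gpu_pass:,.0f}x")  # 100,000x
```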

The other problem with level five?

You pretty much have to invent general artificial intelligence to make it happen.

"If you mean level five like literally going anywhere, in the sense of being able to go off-roading in a jungle or driving on the moon ... then I think that an AI system that can do that would be on par with a human in many ways," Voroninski told me. "And potentially could be AI complete, meaning that it could be as hard as solving general intelligence."

Fortunately, a high-functioning level four self-driving system is pretty much all we need: the ability to drive most places at most times in most conditions.

That will unlock our ability to get driven: to reclaim thousands of hours spent in cars for leisure and work. That will also unlock fractional car ownership and much more cost-effective ride-sharing, plus a host of other applications.

And multiple other trillion dollar markets, including autonomous robots, delivery robots, and more.

So how does deep teaching work?

Deep Teaching uses compressive sensing and sophisticated priors to scale limited information into deep insights. It's essentially a shortcut to a form of intelligence. Similar techniques helped massively drop the cost of mapping the human genome, discover the structure of DNA, and speed up MRI (magnetic resonance imaging) by a factor of ten.

"Science is full of these kinds of reconstruction problems, where you observe indirect information about some object of interest and you want to recover the structure of that object from that indirect information," Voroninski says. "Compressive sensing is an area of research which solves these reconstruction problems with a lot less data than people previously thought possible, by incorporating certain structural assumptions about the object of interest into the reconstruction process."
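As a toy illustration of that idea: the textbook compressive-sensing setup assumes the object of interest is sparse, and recovers it from far fewer measurements than unknowns. Below is a minimal sketch using orthogonal matching pursuit, a standard recovery algorithm; this is generic textbook material, not Helm.ai's method:

```python
# Toy compressive-sensing demo: recover a k-sparse signal x from y = A @ x
# with far fewer measurements than dimensions, using sparsity as the prior.
# Generic textbook material, not Helm.ai's method.
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse signal."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the columns selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                  # 256 unknowns, only 80 measurements
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x, k)
print("recovered:", np.allclose(x_hat, x, atol=1e-8))
```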

Those structural assumptions include priors: a priori assumptions that a system can take for granted about the nature of reality.

One example: object permanence.

A car doesn't just stop existing when it passes behind a truck, but a self-driving AI system without knowledge of this particular prior (one that human babies learn in infancy) wouldn't necessarily know that. Supplying these priors speeds up training, and that makes autonomous driving systems smarter.
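As a toy sketch of what encoding such a prior can look like in code: a tracker that coasts an occluded vehicle forward on its last estimated velocity instead of deleting it. This is purely illustrative, not Helm.ai's implementation:

```python
# Toy object-permanence prior: when a tracked car is occluded, keep
# predicting its position from its last velocity rather than dropping it.
# Purely illustrative; not Helm.ai's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float            # position along the road (meters)
    v: float            # estimated velocity (m/s)
    occluded: bool = False

    def step(self, measurement: Optional[float], dt: float = 0.1) -> None:
        predicted = self.x + self.v * dt
        if measurement is None:
            # No detection (e.g. the car is behind a truck): trust the prior.
            self.x, self.occluded = predicted, True
        else:
            # Detection available: update velocity and snap to the measurement.
            self.v = (measurement - self.x) / dt
            self.x, self.occluded = measurement, False

car = Track(x=0.0, v=15.0)
for z in [1.5, 3.0, None, None, 7.6]:   # two frames of occlusion in the middle
    car.step(z)
    print(f"x={car.x:.1f} m, occluded={car.occluded}")
```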

"There are about 20 similar concepts that our brains use to infer the state of the world according to our eyes," Voroninski says. Supplying enough of these repeatedly useful concepts is critical to Deep Teaching.

Tesla is about five years ahead of its competition, according to automotive industry consultant Katrin Zimmermann.

That's enabled Helm.ai's system to drive Page Mill Road, near Skyline Boulevard in the Bay Area, with just one camera and one GPU. It's a curvy, steep mountain road that the system wasn't trained on (it received no data or images from that route, Voroninski says) but was able to navigate with ease and at reasonable speed.

And frankly, that's mostly what we need.

We don't need a system that can off-road or work in the worst blizzard-and-ice conditions. For effective and useful self-driving, we need a system that can handle 99% of roads and conditions, which probably covers a much higher percentage of our overall driving, especially when commuting.

In that sense, making a system that's safer than humans "is not insanely difficult," Voroninski says. After all, AI doesn't drink and drive.

But the autonomous bar is actually higher than that.

"Simply achieving a system that has safety levels on par with a human is actually fairly tractable, in part because human failure modes are somewhat preventable, you know, things like inattention or aggressive driving, etc.," Voroninski told me. "But in truth, even achieving that level of safety is not sufficient to launch a scalable fleet. Really what you need is something that's much safer than a human."

After all, lawyers exist.

And liability for robotic autonomous systems is going to be an issue.

Waymo is Google's self-driving platform.

"We currently still lack the legal and regulatory frameworks to deploy L5 technologies at scale, both nationally and internationally," says Katrin Zimmermann, a managing director at automotive consulting group TLGG Consulting. "Technology might enable you to drive in theory, but policy will allow you to drive in practice."

Once that's solved, however, there are multiple trillion-dollar industries to address. Helm.ai is building technology for self-driving cars, naturally, but the technology is not only for personal vehicles or self-driving taxis. It's also for shipping. Delivery robots for last-mile service. Service vehicles like street cleaners. Industrial machines that can navigate autonomously.

Solving safe and reliable autonomy unlocks a Pandora's box of capability, and none too soon. We need autonomous systems for environmental reclamation on a global scale, safer manufacturing at lower cost, and a hundred other applications.

Pandora's box, of course, is a mixed blessing. Unlocking autonomy puts hundreds of millions of jobs at risk. Engineering a solution for that will require politicians as well as scientists.

For now, Helm.ai is focused on self-driving and focused on shipping its technology to any car brand that wants it.

"What we're looking to do is really to solve the critical AI piece of the puzzle for self-driving cars and license the resulting software to auto manufacturers and fleets," Voroninski says. "So you can sort of think about what we're doing as kind of an Android model for self-driving cars."

Read the full transcript of our conversation.

Read the rest here:

The Android Of Self-Driving Cars Built A 100,000X Cheaper Way To Train AI For Multiple Trillion-Dollar Markets - Forbes

Graphcore’s AI chips now backed by Atomico, DeepMind’s Hassabis – TechCrunch

Is AI chipmaker Graphcore out to eat Nvidia's lunch? Co-founder and CEO Nigel Toon laughs at that interview opener, perhaps because he sold his previous company to the chipmaker back in 2011.

"I'm sure Nvidia will be successful as well," he ventures. "They're already being very successful in this market ... And being a viable competitor and standing alongside them, I think that would be a worthy aim for ourselves."

Toon also flags what he couches as an interesting absence in the competitive landscape vis-a-vis other major players you'd expect to be there, e.g. Intel. (Though clearly Intel is spending to plug the gap.)

A recent report by analyst firm Gartner suggests AI technologies will be in almost every software product by 2020. The race for more powerful hardware engines to underpin the machine-learning software tsunami is, very clearly, on.

"We started on this journey rather earlier than many other companies," says Toon. "We're probably two years ahead, so we've definitely got an opportunity to be one of the first people out with a solution that is really designed for this application. And because we're ahead we've been able to get the excitement and interest from some of these key innovators who are giving us the right feedback."

Bristol, UK-based Graphcore has just closed a $30 million Series B round, led by Atomico, fast-following a $32M Series A in October 2016. It's building dedicated processing hardware plus a software framework for machine learning developers to accelerate building their own AI applications, with the stated aim of becoming the leader in the market for machine intelligence processors.

In a supporting statement, Atomico Partner Siraj Khaliq, who is joining the Graphcore board, talks up its potential to "accelerate the pace of innovation itself." "Graphcore's first IPU delivers one to two orders of magnitude more performance over the latest industry offerings, making it possible to develop new models with far less time waiting around for algorithms to finish running," he adds.

Toon says the company saw a lot of investor interest after uncloaking at the time of its Series A last October, hence it decided to do an earlier-than-planned Series B. "That would allow us to scale the company more quickly, support more customers, and just grow more quickly," he tells TechCrunch. "And it still gives us the option to raise more money next year to then really accelerate that ramp after we've got our product out."

The new funding brings on board some new high-profile angel investors, including DeepMind co-founder Demis Hassabis and Uber chief scientist Zoubin Ghahramani. So you can hazard a pretty educated guess as to which tech giants Graphcore might be working closely with during the development phase of its AI processing system (albeit Toon is quick to emphasize that angels such as Hassabis are investing in a personal capacity).

"We can't really make any statements about what Google might be doing," he adds. "We haven't announced any customers yet, but we're obviously working with a number of leading players here, and we've got the support from these individuals, from which you can infer there's quite a lot of interest in what we're doing."

Other angels joining the Series B include OpenAI's Greg Brockman, Ilya Sutskever, Pieter Abbeel and Scott Gray. Existing Graphcore investors Amadeus Capital Partners, Robert Bosch Venture Capital, C4 Ventures, Dell Technologies Capital, Draper Esprit, Foundation Capital, Pitango and Samsung Catalyst Fund also participated in the round.

Commenting in a statement, Uber's Ghahramani argues that current processing hardware is holding back the development of alternative machine learning approaches that he suggests could contribute to radical leaps forward in machine intelligence.

"Deep neural networks have allowed us to make massive progress over the last few years, but there are also many other machine learning approaches," he says. "A new type of hardware that can support and combine alternative techniques, together with deep neural networks, will have a massive impact."

Graphcore has raised around $60M to date. Toon says its now 60-strong team has been working in earnest on the business for a full three years, though the company's origins stretch back as far as 2013.

Co-founders Nigel Toon (CEO, left) and Simon Knowles (CTO, right)

In 2011 the co-founders sold their previous company, Icera, which did baseband processing for 2G, 3G and 4G cellular technology, to Nvidia. "After selling that company we started thinking about this problem and this opportunity. We started talking to some of the leading innovators in the space and started to put a team together around about 2013," he explains.

Graphcore is building what it calls an IPU, aka an intelligence processing unit: dedicated processing hardware designed for machine learning tasks, vs. the serendipity of repurposed GPUs that have been helping to drive the AI boom thus far, or indeed the vast clusters of CPUs otherwise needed (but not well suited) for such intensive processing.

It's also building graph-framework software for interfacing with the hardware, called Poplar, designed to mesh with different machine learning frameworks so developers can easily tap into a system that Graphcore claims will increase the performance of both machine learning training and inference by 10x to 100x vs. the fastest systems today.

Toon says it's hoping to get the IPU in the hands of early-access customers by the end of the year. "That will be in a system form," he adds.

"Although at the heart of what we're doing is we're building a processor, we're building our own chip, leading-edge process, 16 nanometer, we're actually going to deliver that as a system solution. So we'll deliver PCI Express cards, and we'll actually put that into a chassis so that you can put clusters of these IPUs all working together, to make it easy for people to use."

"Through next year we'll be rolling out to a broader number of customers. And hoping to get our technology into some of the larger cloud environments as well, so it's available to a broad number of developers."

Discussing the difference between the design of its IPU and the GPUs also being used to power machine learning, he sums it up thus: "GPUs are kind of rigid, locked together, everything doing the same thing all at the same time, whereas we have thousands of processors all doing separate things, all working together across the machine learning task."

"The challenge that [processing via IPUs] throws up is to actually get those processors to work together, to be able to share the information that they need to share between them, to schedule the exchange of information between the processors, and also to create a software environment that's easy for people to program. That's really where the complexity lies, and that's really what we have set out to solve."

"I think we've got some fairly elegant solutions to those problems," he adds. "And that's really what's causing the interest around what we're doing."
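To picture the compute-then-exchange pattern Toon describes, here is a toy simulation in which many workers each run their own local step and then swap messages at a scheduled synchronization point. This is an illustrative sketch of the general idea only, not Graphcore's Poplar API or actual scheduling:

```python
# Toy simulation of the compute-then-exchange pattern described above:
# many workers each run their own (different) local step, then swap
# messages at a scheduled synchronization point. Illustrative only;
# this is not Graphcore's Poplar API or actual scheduling.
import random

NUM_WORKERS = 8

def compute(worker_id: int, state: float) -> tuple[float, dict[int, float]]:
    """Each worker does its own local work, then queues messages to peers."""
    new_state = state + random.random() * (worker_id + 1)
    outbox = {(worker_id + 1) % NUM_WORKERS: new_state / 2}  # message to one peer
    return new_state, outbox

states = [0.0] * NUM_WORKERS
for step in range(3):
    # Compute phase: every worker runs independently.
    results = [compute(i, states[i]) for i in range(NUM_WORKERS)]
    # Exchange phase: deliver all queued messages at the sync point.
    inboxes = {i: [] for i in range(NUM_WORKERS)}
    for _, outbox in results:
        for dest, msg in outbox.items():
            inboxes[dest].append(msg)
    # Merge received messages into each worker's state before the next step.
    states = [new + sum(inboxes[i]) for i, (new, _) in enumerate(results)]

print(states)
```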

He says Graphcore's team is aiming for a completely seamless interface between its hardware, via its graph framework, and widely used high-level machine learning frameworks including TensorFlow, Caffe2, MxNet and PyTorch.

"You use the same environments, you write exactly the same model, and you feed it through what we call Poplar [a C++ framework]," he notes. "In most cases that will be completely seamless."

Although he confirms that developers working further outside the current AI mainstream, say by trying to create new neural network structures or working with other machine learning techniques such as decision trees or Markov fields, may need to make some manual modifications to make use of its IPUs.

"In those cases there might be some primitives or some library elements that they need to modify," he notes. "The libraries we provide are all open, so they can just modify something, change it for their own purposes."

The apparently insatiable demand for machine learning within the tech industry is being driven, at least in part, by a major shift in the type of data that needs to be understood: from text to pictures and video, says Toon. Which means there are increasing numbers of companies that really need machine learning. "It's the only way they can get their head around and understand what this sort of unstructured data is that's sitting on their website," he argues.

Beyond that, he points to various emerging technologies and complex scientific challenges its hoped could also benefit from accelerated development of AI from autonomous cars to drug discovery with better medical outcomes.

"A lot of cancer drugs are very invasive and have terrible side effects, so there's all kinds of areas where this technology can have a real impact," he suggests. "People look at this and think it's going to take 20 years [for AI-powered technologies to work], but if you've got the right hardware available [development could be sped up]."

"Look at how quickly Google Translate has got better using machine learning ... and that same acceleration I think can apply to some of these very interesting and important areas as well."

In a supporting statement, DeepMind's Hassabis goes so far as to suggest that dedicated AI processing hardware might offer a leg up toward the sci-fi holy-grail goal of developing artificial general intelligence (vs. the more narrow AIs that comprise the current cutting edge).

"Building systems capable of general artificial intelligence means developing algorithms that can learn from raw data and generalize this learning across a wide range of tasks. This requires a lot of processing power, and the innovative architecture underpinning Graphcore's processors holds a huge amount of promise," he adds.

Read more:

Graphcore's AI chips now backed by Atomico, DeepMind's Hassabis - TechCrunch

AI paired with data from drones rapidly forecasts flood damage : The Asahi Shimbun – Asahi Shimbun

A startup developed a system to quickly predict how flooding could affect surrounding areas, pairing artificial intelligence (AI) with drone technology.

The brainchild of Arithmer Inc., an entrepreneurial spinoff from the University of Tokyo, shows the possible flow of floodwaters from rivers and streams on a 3-D map, using measurement data from drones.

As the simulation can be completed within a few hours, compared with several months to some years under conventional methods, the invention is drawing attention from local governments nationwide.

In June, the coastal town of Hirono in Fukushima Prefecture became the first municipality to sign an agreement with Arithmer to introduce the technology. It is looking to utilize the system not only for forecasting damage from floods and tsunami but also for issuing disaster victim certificates faster.

According to Arithmer, which utilizes mathematical theories to develop AI programs, inquiries are pouring in from around the country.

Yoshihiro Ota, 48, president of Arithmer and a mathematician himself, said the AI-based technology will prove helpful for both municipalities and businesses.

"Flooding estimates can be made by combining all available elements such as rainfall and where river embankments collapse, so our system will allow evacuation centers and factories to be set up at much safer locations," he said.

In the torrential rains of recent years, inundation damage has been reported around small and midsize rivers and other locations that local governments had not identified as dangerous.

That is, in part, because geographical data collected for estimates through aerial laser measuring and by other means typically do not cover an entire region. Another problem is that processing the vast amounts of data on rainfall and water flows takes a long time, making it difficult to locate all areas that would likely be inundated.

The technology developed by Arithmer and its partners can create computerized reproductions of streets and rivers scanned by drones at a precision of 1 centimeter by 1 centimeter.

It can also complete the otherwise time-consuming flood prediction promptly, because the AI system learns the characteristics of each area based on estimated rainfall, water levels in rivers, the locations of dams and other factors.

Around 100 scenarios can be simulated, and the worst possible case can be identified from among them.
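In outline, that scenario sweep amounts to something like the sketch below. The simulate_flood function and every parameter in it are hypothetical stand-ins invented for illustration; Arithmer's actual model is not public:

```python
# Hypothetical sketch of the scenario sweep described above: simulate many
# rainfall/breach combinations and keep the worst case. simulate_flood and
# all parameters are invented stand-ins, not Arithmer's model.
import itertools
import random

def simulate_flood(rainfall_mm_per_hr: float, breach_site: str) -> float:
    """Stand-in for the trained model: returns peak inundation depth (meters)."""
    random.seed(hash((rainfall_mm_per_hr, breach_site)) % 2**32)
    return rainfall_mm_per_hr / 40.0 + random.random()

rainfalls = [50, 80, 120, 160]                    # assumed design storms (mm/hr)
breaches = ["left_bank_km3", "right_bank_km5"]    # assumed embankment breach sites

scenarios = list(itertools.product(rainfalls, breaches))  # ~100 in practice
worst = max(scenarios, key=lambda s: simulate_flood(*s))
print("worst case:", worst, f"-> depth {simulate_flood(*worst):.2f} m")
```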

See more here:

AI paired with data from drones rapidly forecasts flood damage : The Asahi Shimbun - Asahi Shimbun