Daily Archives: November 9, 2019

These American workers are the most afraid of A.I. taking their jobs – CNBC

Posted: November 9, 2019 at 8:42 am

The Terminator movie franchise is back, and the idea that robots and artificial intelligence are coming for us (specifically, our jobs) is a big part of the premise. But the majority of the working population remains unafraid of a T-800 stealing their employment.

Only a little over one-quarter (27%) of all workers say they are worried that the job they have now will be eliminated within the next five years as a result of new technology, robots or artificial intelligence, according to the quarterly CNBC/SurveyMonkey Workplace Happiness survey.

Nevertheless, the survey results show it may be only a matter of time: Fears about automation and jobs run higher among the youngest workers.

The survey found that 37% of workers between the ages of 18 and 24 are worried about new technology eliminating their jobs. That's nearly 10 percentage points higher than any other age group.

Dan Schawbel, research director of Future Workplace and author of "Back to Human," said one reason for the age-based fear gap is that technology like AI is becoming normalized.

"They are starting to see the value of [AI] and how it's impacting their personal and professional lives," Schawbel said. "We're using AI without even thinking about it. It's a part of our lives. If you are talking to Siri or Alexa, that's AI."

Laura Wronski, senior research scientist at SurveyMonkey, said, "As digital natives, [18- to 24-year-old workers] understand the potential of technology to have a positive impact. But with 30 or 40 years left in the workforce, they likely envision vast potential changes in the nature of work over the course of their lifetime."

The survey also revealed a link between income and fear, with 34% of workers making $50,000 or under afraid of losing their jobs due to technology; that goes down to 16% among workers making between $100,000 and $150,000, and 13% for workers making $150,000 or more.

In some industries where technology already has played a highly disruptive role, worker fears of automation also run higher than the average: Workers in automotive, business support and logistics, advertising and marketing, and retail are proportionately more worried about new technology replacing their jobs than those in other industries.

Forty-two percent of workers in the business support and logistics industry have above-average concerns about new technology eliminating their jobs. Schawbel said that fear stems from the fact that the industry is already seeing it happen. Self-driving trucks already are threatening the jobs of truck drivers, and it is causing massive panic in the profession, he said.

"There is a fear, with some research to back it up, that it's going to be hard to retrain and retool truck drivers to take on other jobs," Schawbel said. "You know with a truck driver you can just eliminate the truck driver, whereas with professionals doing finance or accounting, certain tasks that they do can be automated, but they have a little more flexibility to do other tasks that could be more valuable."

Elmer Guardado, a 22-year-old account coordinator at Buie & Co. Public Relations, fits two demographics that are more likely to worry about new technology replacing them: he is young, and he is in the advertising and marketing industry. But he remains convinced that human skills will set him apart from the automated competition.

"It's not something I'm actively worried about," Guardado said. "Because I know there are so many parts of my job that require a level of nuance that technology won't be able to replace anytime soon."

Guardado says his communication skills are a valuable asset he brings to the workplace, one a computer can't quite compete with yet. But he also understands why his peers may be more afraid than other age groups.

"I think older generations maybe process this potential fear in a more abstract way," Guardado said. "Whereas 18- 24-year-olds see it firsthand, right? We actively dealt with it growing up and saw technology consistently skyrocket throughout our entire lifetime."

The survey found a fairly optimistic view on the future of AI, with nearly half of workers (48%) saying the quest to advance the field of artificial intelligence is "important." Only 23% called it "dangerous."

They remain more worried about their own kind: 60% of workers said that human intelligence is a greater threat to humanity than artificial intelligence. Sixty-five percent of survey respondents said computer programs will always reflect the biases of the people who designed them.

Read the rest here:

These American workers are the most afraid of A.I. taking their jobs - CNBC

Posted in Ai | Comments Off on These American workers are the most afraid of A.I. taking their jobs – CNBC

How Nvidia (NVDA) and AI Can Help Farmers Fight Weeds And Invasive Plants – Nasdaq

Posted: at 8:42 am

Agricultural fields are nothing less than battlefields. Irrespective of terrain, geography and type, crops have to compete against scores of different weeds, species of hungry insects, nematodes and a broad array of diseases. Weeds, or invasive plants, aggressively compete for soil nutrients, light and water, posing a serious threat to agricultural production and biodiversity. Weeds directly and indirectly cause tremendous losses to the farm sector, amounting to billions of dollars each year worldwide.

To combat these challenges, the farm sector is looking at Artificial Intelligence (AI) based solutions. Here's a look at two such initiatives powered by NVIDIA Corporation (NVDA).

Invasive Plants

The damage wrought by plant pests and diseases can reach up to 40% of global crop yields each year, as per estimates by the Food and Agriculture Organization (FAO) of the United Nations. Among the pests, weeds are considered an important biotic constraint to food production. The competition for survival between weeds and crops reduces agricultural output both qualitatively and quantitatively.

It is estimated that the annual cost of weeds to Australian agriculture is $4 billion through yield losses and product contamination. The Weed Science Society of America (WSSA) reports that, on an annual basis, the potential loss in value for corn is $27 billion and for soybean it is $16 billion, based on data from 2007 to 2013. In India, an assessment by the Directorate of Weed Research shows that the country loses crops worth $11 billion every year to weeds.

One of the most common ways to control weeds is to spray the entire field with herbicides. This method involves significant cost and wastage, and causes health problems and environmental pollution. While the real cost of weeds to the environment is difficult to calculate, it is expected that the cost would be similar to, if not greater than, that estimated for agricultural industries, according to a note by Australia's Department of the Environment.

Enter AI

Today, advanced technologies are being increasingly applied to a number of industries and sectors, agriculture being one of them. One such technique is precision farming, which allows farmers to reduce their use of chemical inputs, machinery and water for irrigation by using information about the soil, temperature, humidity, seeds, farm equipment, livestock, fertilizers, terrain, crops sown, and water usage, among other things. A growing number of companies and start-ups are creating AI-based agricultural solutions.

Cameras, sensors and AI in the fields allow farmers to manage their fields better and use pesticides more precisely. Blue River Technology's See & Spray uses computer vision and AI to detect, identify, and make management decisions about every single plant in the field. In 2017, Blue River Technology was acquired by Deere & Company (DE). Today the See & Spray, a 40-foot-wide machine covering 12 rows of crops, is pulled by Deere tractors and is powered by Nvidia.

The machine uses around 30 mounted cameras to capture photos of plants every 50 milliseconds, which are processed through its 25 on-board Jetson AGX Xavier supercomputing modules. As the tractor moves, the Jetson Xavier modules running Blue River's image-recognition algorithms decide, from each image received, whether a plant is a weed or a crop. The See & Spray machine has achieved considerable success, using less than one-tenth the herbicide of typical weed control.
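To make the timing constraint concrete, here is a minimal sketch of the kind of capture-classify-spray loop such a machine runs. It is illustrative only, not Blue River's actual pipeline: the function names are hypothetical stubs, and only the 50-millisecond budget and rough camera count come from the figures above.

```python
import time

# Hypothetical stubs standing in for the camera feed, the on-board classifier
# and the sprayer control; Blue River's actual pipeline is proprietary.
def capture_frame(camera_id):
    """Grab the latest image from one row camera (stub)."""
    ...

def classify(frame):
    """Run the trained image-recognition model; returns 'weed' or 'crop' (stub)."""
    ...

def trigger_sprayer(camera_id):
    """Fire the nozzle covering this camera's row section (stub)."""
    ...

FRAME_BUDGET_S = 0.050  # one decision round every 50 milliseconds, per the article
NUM_CAMERAS = 30        # roughly the number of mounted cameras described above

while True:
    start = time.monotonic()
    for camera_id in range(NUM_CAMERAS):
        frame = capture_frame(camera_id)
        if classify(frame) == "weed":
            trigger_sprayer(camera_id)  # spray only where a weed was seen
    # Sleep off whatever is left of the 50 ms budget so decisions keep pace
    # with the moving tractor.
    time.sleep(max(0.0, FRAME_BUDGET_S - (time.monotonic() - start)))
```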

Further, a research paper published in 2018 by M Dian Bah, Adel Hafiane and Raphael Canals proposed a fully automatic learning method using convolutional neural networks (CNNs) with unsupervised training dataset collection for weed detection in UAV images. Drone images of beet, bean and spinach crops were used for the study. The researchers used a cluster of NVIDIA Quadro GPUs to train the neural networks, and say that doing so shrank training time from one week on a high-end CPU down to a few hours. The study achieved a precision of 93%, 81% and 69% for beet, spinach and bean, respectively.
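For readers curious what such a model looks like in code, below is a minimal, hedged sketch of a single training step for a small crop-vs-weed patch classifier in PyTorch. The layer sizes, 64x64 patch size and random placeholder data are assumptions for illustration; the paper's actual architecture and its unsupervised labeling pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

# A deliberately small CNN for two-class (crop vs. weed) patch classification.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: 0 = crop, 1 = weed
)

device = "cuda" if torch.cuda.is_available() else "cpu"  # a GPU cuts training time
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch: 8 RGB patches of 64x64 pixels, as if cropped from UAV shots.
patches = torch.randn(8, 3, 64, 64, device=device)
labels = torch.randint(0, 2, (8,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss={loss.item():.3f}")
```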

While these initiatives focus on the precision-based use of chemical products in the fields, neural networks can also be trained to detect infected areas in plants using images. One such study is being done on the detection of disease symptoms in grape leaves. Early detection can be an important factor in preventing serious disease and stopping an epidemic from spreading in vineyards.

The use of technology can help solve multiple problems faced by farmers, saving valuable resources and reducing the damage done to the environment. The statement by the FAO chief that the future of agriculture is not input-intensive but technology-intensive aptly sums up the role that technology and technology providers will play in the farm sector.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Here is the original post:

How Nvidia (NVDA) and AI Can Help Farmers Fight Weeds And Invasive Plants - Nasdaq

Posted in Ai | Comments Off on How Nvidia (NVDA) and AI Can Help Farmers Fight Weeds And Invasive Plants – Nasdaq

Jim Goodnight, the ‘Godfather of A.I.,’ predicts the future fate of the US workforce – CNBC

Posted: at 8:42 am

Colin Anderson | Getty Images

Every technology revolution has a unique inflection point. The spark that ignited the artificial intelligence movement was a statistical data analysis system developed by Jim Goodnight when he was a statistics professor at North Carolina State University 45 years ago.

He never imagined that the technology he created to improve crop yields would evolve into sophisticated data analytics software, a precursor to modern-day AI. Back then, computers could only execute 300 instructions a second and had 8K of memory. Today they can execute 3 billion instructions a second and contain multiple terabytes of memory.

Goodnight, considered the Godfather of AI, now sits at the helm of the world's largest privately held software company by revenue: SAS Institute. Despite its low profile, last year the Cary, North Carolina-based company had revenues of $3.27 billion, thanks to analytics and AI platforms used by more than 83,000 businesses, governments and universities.

In an interview with CNBC, the CEO gives his views on how AI is changing the U.S. workforce and what lies ahead.

Over the last four decades, how has data analytics software evolved? Did you ever imagine it would change the world as much as it has?

No. It has been a game changer for society. At first we were using analytics software and doing balanced experiments. Today we have moved into forecasting. Neural networks, which mimic the way the human brain operates, and other machine learning tools are being used to do all sorts of predictions in a host of industries.

As computer speeds grow and the amount of data explodes, this technology has become critical.

How has it become a mainstream tool for business and public institutions?

It is used by nearly every industry in a variety of ways. Drug companies use it for clinical trial analysis. Utilities use it to predict peak demand for electricity. Retailers use it to assess buying patterns so they can figure out what sizes to stock. Banks also are using neural networks to detect credit card fraud and to prevent money laundering.

Areas where I see a surge in demand are 5G technology, connected devices, cloud services, autonomous driving, machine learning and fintech.

What is your forecast for AI over the next decade?

I believe we will see things like computer vision (which involves machines capturing, processing and analyzing real-world images and video to extract information from the physical world) being used. Anything we can see with our eyes, we can train a computer to recognize as well. This will be transformative, especially in the autonomous driving sector and in medicine.

Over the past few decades, sensors and image processors have been created to match or even exceed the human eye's capabilities. With larger, more optically perfect lenses and nanometer-scale image sensors and processors, the precision and sensitivity of modern cameras are incredible compared to the human eye. Cameras can also record thousands of images per second, detect distances and see better in dark environments.

Already computer vision is making a difference in health care. The medical community is using it to interpret CT scans. SAS is working with Amsterdam University to identify the size of tumors in cancer patients more accurately.

How do you think it will change the workforce and the way companies manage operations?

The largest impact will be felt in the manufacturing industry, on the factory floor. Robots with computer vision will become more sophisticated. The process has already begun; huge numbers of industrial robots are in use today. Over the years, robots will take on many roles in the factory. Humans will be needed to maintain and program them.

But there are a lot of misconceptions about AI. We are nowhere near the time where robots can think like humans. That is an era far into the future. In today's world humans are needed to train these machines to recognize images and analyze data.

The talent war in the tech sector is fierce. How is SAS retaining and developing workers in this era?

Our turnover rate is 4%, and that is considered low in the tech industry, where rates hover around 14%. We lose a few people to larger tech companies, but we have no trouble replacing them. We do everything possible to make SAS Institute a great place to work, and that includes investing in training. The key is giving employees challenging work. That is more important to a tech worker than a salary.

SAS Institute founder and CEO Jim Goodnight (center) lets employees pitch big ideas that can help in developing innovative software products.

SAS Institute

We manage the company to unleash the power of creativity. We encourage creativity by having demo days, where employees can share the products and technology they are working on and pitch management for funding or additional resources. Employees can also come to senior management meetings to pitch their ideas and innovations. Every employee is also expected to complete two training courses a year in a new software language so they can remain up to date on the latest technology.

What advice would you give other companies grappling with the skills shortage issue?

One thing is to create education and skills training programs to develop more data scientists in the U.S. We have partnered with 82 universities, such as Michigan State and the University of Arkansas, to develop master's programs for scientists trained on SAS software. Some of these programs are linked to local businesses that are looking for a talent pipeline.

This has been a big part of our outreach strategy. For example, at North Carolina State University we helped create the Institute for Advanced Analytics, which offers a one-year course simulating a work environment. It produces 120 graduates a year trained in SAS software.

Excerpt from:

Jim Goodnight, the 'Godfather of A.I.,' predicts the future fate of the US workforce - CNBC

Posted in Ai | Comments Off on Jim Goodnight, the ‘Godfather of A.I.,’ predicts the future fate of the US workforce – CNBC

Last Week In Venture: AI Chips, ML Anywhere, And Spreadsheets As Backends For Apps – Crunchbase News

Posted: at 8:42 am

Hello and welcome back to Last Week In Venture, the weekly recap of interesting deals which may have flown under your radar.

Seed and early-stage deals struck today are a lens through which to view potential futures. So let's take a quick look at a few interesting transactions from the week that was in venture-land.

It was a busy week at the intersection of hardware and machine learning.

You may have already heard about Neural Magic, the Somerville, MA-based startup which lets data scientists and AI engineers run machine learning models on commodity CPUs using its own proprietary inference engine. The company says it can deliver GPU performance on central processing units, which is a big deal, considering that the upfront cost of acquiring specialized compute hardware remains a barrier to entry into large-scale machine learning projects. This week, the company announced $15 million in seed funding led by Comcast Ventures. NEA, Andreessen Horowitz, Pillar Ventures, and Amdocs participated in the transaction.

On the other side of the market is Untether AI. Instead of developing software that runs on generalized hardware, the Toronto-based company makes specialized, high-efficiency inference chips utilizing a design which places the processor very close to onboard memory, reducing latency and energy use. This week the company announced $20 million in Series A funding, which they technically closed back in May. The company closed $13 million from Intel Capital in April and the remainder from Radical Ventures. As part of the transaction, founding CEO Martin Snelgrove transitions to a CTO role as seasoned chipmaker executive Arun Iyengar steps up as CEO of the company and Radical Ventures founding partner Tomi Poutanen joins its board.

You know what's actually pretty sweet? Spreadsheets. They're, like, totally tabular. Which is great for stuff like accounting, displaying lots of rows of data, and some more whimsical applications.

But just because some information might live on a spreadsheet doesn't mean it can't get dressed up a little. Glide Apps is a San Francisco-based, Y Combinator-backed company which helps its users build mobile apps that display and interact with data stored in a Google Sheet, all without needing to write a single line of code. The company produced a set of templates showing how Glide Apps can be used for a range of use cases, from a city guide to an investor update app.
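Glide's own tooling is no-code and its internals aren't public, but the underlying pattern (treating a shared Google Sheet as a lightweight read-only backend) can be sketched in a few lines of Python. The sheet ID below is a placeholder, and this assumes the sheet is shared as viewable by link, which exposes Google's CSV export endpoint.

```python
import csv
import io
import urllib.request

# Hypothetical sheet ID; a Google Sheet shared as "anyone with the link can
# view" can be fetched as CSV via this export endpoint.
SHEET_ID = "YOUR_SHEET_ID"
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

with urllib.request.urlopen(url) as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

# Each spreadsheet row is now a dict keyed by its column headers, ready to
# be rendered in an app screen.
for row in rows[:5]:
    print(row)
```

A no-code tool can then layer screens, templates and publishing on top of exactly this kind of sheet-backed data access.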

This week, the company announced a new pro pricing tier, alongside $3.6 million in additional seed financing led by First Round Capital, with participation from Idealab, SV Angel, and the chief executives of GitHub and Figma.

The company says that, since its inception in 2018, tens of thousands of people have built Glide apps which have, collectively, reached over one million users.

Image Credits: Last Week In Venture graphic created by JD Battles. Photo by Billy Huynh, via Unsplash.

Follow this link:

Last Week In Venture: AI Chips, ML Anywhere, And Spreadsheets As Backends For Apps - Crunchbase News

Posted in Ai | Comments Off on Last Week In Venture: AI Chips, ML Anywhere, And Spreadsheets As Backends For Apps – Crunchbase News

Does Your AI Have Users’ Best Interests at Heart? – Harvard Business Review

Posted: at 8:42 am

Executive Summary

We now live in a world built on machine learning and AI, which relies on data as its fuel, and which in the future will support everything from precision agriculture to personalized healthcare. The next generation of platforms will even recognize our emotions and read our thoughts. For leaders in the Algorithmic Age, simply following the rules has never looked more perilous, nor more morally insufficient. As we create systems that are more capable of understanding and targeting services at individual users, our capacity to do evil by automating bias and weaponizing algorithms will grow exponentially. And yet, this also raises the question of what exactly is evil? Is it breaking the law, breaking your industry code of conduct, or breaking user trust? Rather than relying on regulation, leaders must instead walk an ethical tightrope. Your customers will expect you to use their data to create personalized and anticipatory services for them while demanding that you prevent the inappropriate use and manipulation of their information. As you look for your own moral compass, one principle is apparent: You can't serve two masters. In the end, you either build a culture based on following the law, or you focus on empowering users. The choice might seem to be an easy one, but it is more complex in practice.

Ethical decisions are rarely easy. Now, even less so. Smart machines, cheap computation, and vast amounts of consumer data not only offer incredible opportunities for modern organizations, they also present a moral dilemma for 21st century leaders too: Is it OK, as long as it's legal?

Certainly, there will be no shortage of regulation in the coming years. For ambitious politicians and regulators, Big Tech is starting to resemble Big Tobacco, with the headline-grabbing prospect of record fines, forced break-ups, dawn raids, and populist public outrage. Yet for leaders looking for guidance in the Algorithmic Age, simply following the rules has never looked more perilous, nor more morally insufficient.

Don't get me wrong. A turbulent world of AI- and data-powered products requires robust rules. Given the spate of data breaches and abuses in recent years, Google's former unofficial motto, "Don't be evil," now seems both prescient and naive. As we create systems that are more capable of understanding and targeting services at individual users, our capacity to do evil by automating bias and weaponizing algorithms will grow exponentially. And yet, this also raises the question of what exactly is evil? Is it breaking the law, breaking your industry code of conduct, or breaking user trust?

Algorithmic bias can take many forms; it is not always as clear-cut as racism in criminal sentencing or gender discrimination in hiring. Sometimes too much truth is just as dangerous. In 2013, researchers Michal Kosinski, David Stillwell, and Thore Graepel published an academic paper that demonstrated that Facebook likes (which were publicly open by default at that time) could be used to predict a range of highly sensitive personal attributes, including sexual orientation and gender, ethnicity, religious and political views, personality traits, use of addictive substances, parental separation status, and age.

Disturbingly, even if you didn't reveal your sexual orientation or political preferences, this information could still be statistically predicted from what you did reveal. So, while less than 5% of users identified as gay were connected with explicitly gay groups, their preference could still be deduced. When they published their study, the researchers acknowledged that their findings risked being misused by third parties to incite discrimination, for example. However, where others saw danger and risk, Aleksandr Kogan, one of Kosinski's colleagues at Cambridge University, saw opportunity. In early 2014, Cambridge Analytica, a British political consulting firm, signed a deal with Kogan for a private venture that would capitalize on the work of Kosinski and his team.
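The statistical machinery behind such predictions is ordinary supervised learning. Here is a minimal, hedged sketch in Python with scikit-learn, using a synthetic user-by-like matrix; the real study worked with tens of thousands of like pages, dimensionality reduction, and, of course, real (and sensitive) labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy user-by-like matrix: 1 means the user liked that page. Purely synthetic,
# standing in for the kind of data the Kosinski et al. study analyzed.
n_users, n_likes = 1000, 50
likes = rng.integers(0, 2, size=(n_users, n_likes))

# Synthetic sensitive attribute correlated with a handful of likes, standing
# in for traits such as political views; nothing here is real user data.
signal = likes[:, :5].sum(axis=1)
attribute = (signal + rng.normal(0, 1, n_users) > 2.5).astype(int)

# A plain logistic regression learns which likes are predictive of the trait.
model = LogisticRegression(max_iter=1000).fit(likes, attribute)
print(f"training accuracy: {model.score(likes, attribute):.3f}")
```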

Kogan was able to create a quiz thanks to an initiative at Facebook that allowed third parties to access user data. Almost 300,000 users were estimated to have taken that quiz. It later emerged that Cambridge Analytica then exploited the data it had harvested via the quiz to access and build profiles on 87 million Facebook users. Arguably, neither Facebook's nor Cambridge Analytica's decisions were strictly illegal, but in hindsight, and in the context of the scandal the program soon unleashed, they could hardly be called good judgment calls.

According to Julian Wheatland, COO of Cambridge Analytica at the time, the company's biggest mistake was believing that complying with government regulations was enough, and thereby ignoring broader questions of data ethics, bias and public perception.

How would you have handled a similar situation? Was Facebook's mistake a two-fold one of not setting the right policies for handling their user data upfront, and sharing that information too openly with their partners? Should they have anticipated the reaction of the U.S. senators who eventually called a Congressional hearing, and spent more resources on lobby groups? Would a more comprehensive user agreement have shielded Facebook from liability? Or was this simply a case of bad luck? Was providing research data to Kogan a reasonable action to take at the time?

By contrast, consider Apple. When Tim Cook took the stage to announce Apple's latest and greatest products for 2019, it was clear that privacy and security, rather than design and speed, were now the real focus. From eliminating human grading of Siri requests to warnings on which apps are tracking your location, Apple was attempting to shift digital ethics out of the legal domain and into the world of competitive advantage.

Over the last decade, Apple has been criticized for taking the opposing stance on many issues relative to its peers like Facebook and Google. Unlike them, Apple runs a closed ecosystem with tight controls: you can't load software on an iPhone unless it has been authorized by Apple. The company was also one of the first to fully encrypt its devices, including deploying end-to-end encryption on iMessage and FaceTime for communication between users. When the FBI demanded a password to unlock a phone, Apple refused and went to court to defend its right to do so. When the company launched Apple Pay and, more recently, its new credit card, it kept customer transactions private rather than recording all the data for its own analytics.

While Facebook's actions may have been within the letter of the law, and within the bounds of industry practice at the time, they did not have users' best interests at heart. There may be a simple reason for this. Apple sells products to consumers. At Facebook, the product is the consumer: Facebook sells consumers to advertisers.

Banning all data collection is futile. There is no going back. We already live in a world built on machine learning and AI, which relies on data as its fuel, and which in the future will support everything from precision agriculture to personalized healthcare. The next generation of platforms will even recognize our emotions and read our thoughts.

Rather than relying on regulation, leaders must instead walk an ethical tightrope. Your customers will expect you to use their data to create personalized and anticipatory services for them while demanding that you prevent the inappropriate use and manipulation of their information. As you look for your own moral compass, one principle is apparent: You can't serve two masters. In the end, you either build a culture based on following the law, or you focus on empowering users. The choice might seem to be an easy one, but it is more complex in practice. Being seen to do good is not the same as actually being good.

That's at least one silver lining when it comes to the threat of robots taking our jobs. Who better to navigate complex, nuanced, and difficult ethical judgments than humans themselves? Any machine can identify the right action from a set of rules, but actually knowing and understanding what is good: that's something inherently human.

Go here to see the original:

Does Your AI Have Users' Best Interests at Heart? - Harvard Business Review

Posted in Ai | Comments Off on Does Your AI Have Users’ Best Interests at Heart? – Harvard Business Review

OpenAI Just Released the AI It Said Was Too Dangerous to Share – Futurism

Posted: at 8:42 am

Here You Go

In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text.

But rather than release the AI in its entirety, the team shared only a smaller model out of fear that people would use the more robust tool maliciously to produce fake news articles or spam, for example.

But on Tuesday, OpenAI published a blog post announcing its decision to release the algorithm in full as it has seen no strong evidence of misuse so far.

According to OpenAI's post, the company did see some discussion regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm.

The problem might be that, while GPT-2 is one of, if not the, best text-generating AIs in existence, it still can't produce content that's indistinguishable from text written by a human. And OpenAI warns it's those algorithms we'll have to watch out for.

"We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent," the startup wrote.
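For readers who want to try the released model themselves, here is a minimal sketch of sampling from GPT-2, assuming the Hugging Face transformers library, which hosts the released checkpoints; the prompt and sampling settings are arbitrary choices for illustration, not OpenAI's recommended configuration.

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "gpt2-xl" is the full 1.5-billion-parameter release; smaller checkpoints
# ("gpt2", "gpt2-medium", "gpt2-large") download and run much faster.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k / top-p sampling keeps the continuation coherent but varied.
output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```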

READ MORE: OpenAI has published the text-generating AI it said was too dangerous to share [The Verge]

More on OpenAI: Now You Can Experiment With OpenAI's Dangerous Fake News AI

Read the original:

OpenAI Just Released the AI It Said Was Too Dangerous to Share - Futurism

Posted in Ai | Comments Off on OpenAI Just Released the AI It Said Was Too Dangerous to Share – Futurism

EU competition commissioner Margrethe Vestager says there’s ‘no limit’ to how AI can benefit humans – INSIDER

Posted: at 8:42 am

EU competition commissioner Margrethe Vestager, a frequent opponent to Silicon Valley tech firms, says she sees "no limit to how AI can support what we do as humans."

Given the Dane's status as arguably the most aggressive regulator of big tech on the planet (she hit Google with a €4.3 billion ($4.75 billion) fine in July 2018 and ordered Apple to pay Ireland back €13 billion ($14.3 billion) in "illegal" tax benefits in 2016), Vestager's optimism about AI could be viewed as surprising.

On the flip side, her positivity about AI's potential could be viewed as highly consistent with her stringent approach to regulating big tech: given how integral big tech is to AI research and development, Vestager's attitude more likely reflects her keenness that big tech doesn't jeopardize AI's potential.

In September, the EU appointed Vestager to a role titled "Executive Vice President for A Europe fit for the Digital Age," effectively a continuation of her competition commission job, but with increased powers and oversight. It will see her set the agenda for the EU's regulation of artificial intelligence, among other regulatory duties.

Discussing the role at the Web Summit tech conference in Lisbon, Portugal on Thursday, Vestager said: "The first thing we will do is, of course, to listen very, very carefully, and we'll try to listen fast, because as we're speaking, AI is developing."

"That is wonderful, because I see no limits to how artificial intelligence can support what we want to do as humans," she continued. "Take climate change. I think we can be much more effective in fighting climate change if we use artificial intelligence.

"I think we can save people awful, stressful waiting time between having been examined by a doctor and having the result of that examination, and maybe also more precise results in doing that. So I think the benefits of using artificial intelligence [have] no limits," she said.

"But we need to get in control of certain cornerstones so that we can trust it, and it has human oversight, and very importantly that it doesn't have bias."

See the rest here:

EU competition commissioner Margrethe Vestager says there's 'no limit' to how AI can benefit humans - INSIDER

Posted in Ai | Comments Off on EU competition commissioner Margrethe Vestager says there’s ‘no limit’ to how AI can benefit humans – INSIDER

THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satisfaction – Business Insider India

Posted: at 8:42 am

The insurance sector has fallen behind the curve of financial services innovation - and that's left hundreds of billions in potential cost savings on the table.

The most valuable area in which insurers can innovate is the use of artificial intelligence (AI): It's estimated that AI can drive cost savings of $390 billion across insurers' front, middle, and back offices by 2030, according to a report by Autonomous NEXT seen by Business Insider Intelligence. The front office is the most lucrative area to target for AI-driven cost savings, with $168 billion up for grabs by 2030.

In the AI in Insurance Report, Business Insider Intelligence will examine AI solutions across key areas of the front office - customer service, personalization, and claims management - to illustrate how the technology can significantly enhance the customer experience and cut costs along the value chain. We will look at companies that have accomplished these goals to illustrate what insurers should focus on when implementing AI, and offer recommendations on how to ensure successful AI adoption.

The companies mentioned in this report are: IBM, Lemonade, Lloyd's of London, Next Insurance, Planck, PolicyPal, Root, Tractable, and Zurich Insurance Group.

More here:

THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satisfaction - Business Insider India

Posted in Ai | Comments Off on THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satisfaction – Business Insider India

AI is making literary leaps – now we need the rules to catch up – The Guardian

Posted: at 8:42 am

Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that the model now "generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation, all without task-specific training."

If true, this would be a big deal. But, said OpenAI, "due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

Given that OpenAI describes itself as a research institute dedicated to "discovering and enacting the path to safe artificial general intelligence," this cautious approach to releasing a potentially powerful and disruptive tool into the wild seemed appropriate. But it appears to have enraged many researchers in the AI field, for whom "release early and release often" is a kind of mantra. After all, without full disclosure (of program code, training dataset, neural network weights, and so on), how could independent researchers decide whether the claims made by OpenAI about its system were valid? The replicability of experiments is a cornerstone of scientific method, so the fact that some academic fields may be experiencing a replication crisis (a large number of studies that prove difficult or impossible to reproduce) is worrying. We don't want the same to happen to AI.

On the other hand, the world is now suffering the consequences of tech companies like Facebook, Google, Twitter, LinkedIn, Uber and co. designing algorithms for increasing user engagement and releasing them on an unsuspecting world with apparently no thought of their unintended consequences. And we now know that some AI technologies (for example, generative adversarial networks) are being used to generate increasingly convincing deepfake videos.

If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release). At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers' publication obligations. And of all the proliferating ethical AI guidelines, only a few entities explicitly acknowledge that there may be times when limited release is appropriate. At the moment the law has little to say about any of this, so we're currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.

In the case of GPT-2, my hunch is that fears about its pathogenic propensities may be overdone, not because it doesn't work, but because humans have long experience of dealing with print fakery. Ever since Gutenberg, people have been printing falsehoods and purporting to be someone else. But over the centuries, we've developed ways of spotting fakes. Accordingly, machine-generated text poses less of a problem than video deepfakes.

GPT-2's capabilities are undoubtedly impressive, though. In a fascinating essay, "I, Language Robot," the neuroscientist and writer Patrick House reports on his experience of working alongside OpenAI's language model, which produces style-matched prose from any written prompt that it's fed.

It was, he recounts, an instructive exercise. "The language robot and I never wrote the same story," he recalls. "It became, eventually, a writing tool, both as a curator to new ideas and a splint for bad ones. When, after a few months, I lost access to it, my brain felt like it had been lesioned. I missed it. I had found myself jumping into its head, asking what it might write as one would a human writing partner, which made me wonder: should I give it the same theory-of-mind, inductive respect as I give to other thinking things? Could I even help but do otherwise?"

The most fascinating part of House's report deals with his idea of asking the robot the same questions that George Plimpton asked Ernest Hemingway in his famous Paris Review interview in 1958. In reply to the question of whether it remembered the exact moment it decided to be a writer, it replied: "I don't remember ever wanting not to be a writer. Writing had always been my way out. My escape. When I was a little kid, my sister was always coming home from school with homework assignments and I knew if I got hers and mine done early I could spend the next day's lunch hour looking at books. It was the first time that I actually thought that being a writer might happen."

And when asked what it did when it got stuck, it replied: "It's hard to say, but I would say that it would be better not to have a muse at all, to do all that writing you think is going to be good without any conscious control over what you do. It's better to be totally in charge of your own actions."

At this point, the reader gets the eerie uncanny valley feeling: this is almost, but not quite, authentic. But the technology is getting there. Why, any day now it'll be writing newspaper columns.

The web ain't the world: A good report by the Reuters Institute at the University of Oxford challenges conventional wisdom by finding that most people still get their news from offline sources.

Culinary conditioning: TheConversation.com has an intriguing essay, "How steak became manly and salads became feminine," by Yale historian Paul Freedman.

It's a bots' world: Renee DiResta has written an insightful piece on the algorithmic public sphere, called "There are bots. Look around," at Ribbonfarm.com.

Continue reading here:

AI is making literary leaps – now we need the rules to catch up - The Guardian

Posted in Ai | Comments Off on AI is making literary leaps – now we need the rules to catch up – The Guardian

Nvidia Exec: We Need Partners To Push GPU-Based AI Solutions – CRN: The Biggest Tech News For Partners And The IT Channel

Posted: at 8:42 am

Nvidia sales executive Kevin Connors says channel partners play an important role in the chipmaker's strategy for selling and supporting GPU-accelerated solutions for artificial intelligence, a market that is still in its early stages and, as a result, can provide the channel with major growth opportunities.

"People are wanting higher performance computing at supercomputing levels, so that they can solve the world's problems, whether it's discovery of the next genome or better analysis and other such workloads," Connors, Nvidia's vice president of sales, global partners, said in an interview with CRN.

[Related: Ian Buck On 5 Big Bets Nvidia Is Making In 2020]

The Santa Clara, Calif.-based company's GPUs have become increasingly important in high-performance computing and artificial intelligence workloads, thanks to the parallel computing capabilities offered by their large number of cores and the substantial software ecosystem Nvidia has built around its CUDA platform, also known as Compute Unified Device Architecture, which debuted in 2007.

"As a company, we've always been focused on solving tough problems, problems that no one else could solve, and we invested in that. And so when we came out with CUDA which allowed application developers to port their high-performance computing apps, their scientific apps, engineering apps to our GPU platform that really began the process of developing a very rich ecosystem for high-performance computing," said Connors, who has been with Nvidia since 2006.

As a result, Nvidia's go-to-market strategy has significantly changed since when the company was mostly selling GPUs to consumers, OEMs and system builders who build gaming PCs. Now the company also sells entire platforms, such as the DGX, to make it easier for enterprises to embrace GPU computing.

"A lot of the enterprises are now looking at these new technologies, new capabilities to improve business outcomes, whether it's predictive analytics, forecasting maintenance. Things that AI can be applied to improve business outcomes is really is the competitive advantage of these industries," Connors said. "And this is where we invest a lot in terms of bringing this market, elevating the level of understanding and competency of these solutions and how they can affect business."

DGX, in particular, is an Nvidia-branded line of servers and workstations designed to help enterprises get started on developing AI and data science applications. The most recent product in the lineup, the DGX-2, is a server appliance that comes with 16 Nvidia Tesla V100 GPUs.

"The DGX is essentially what we would call the tip of the spear. It engages deeply into some enterprises, we learn from those experiences. It's an accelerant to developing an AI application. And so that was a great tool for kick-starting AI within the enterprise, and it's been wildly successful," Connors said.

Justin Emerson, solutions director for AI and machine learning at Herndon, Va.-based Nvidia partner ePlus Technology, said the value proposition of DGX is "around the software stack, enterprise support and reference architecture" and the fact that "it's ready to go out of the box."

"We see DGX as the vehicle to deliver GPUs because they provide a lot of relief to the pain points many customers will see," Emerson said.

To bring products and platforms like DGX to market, Nvidia relies on its Nvidia Partner Network, the company's partner program that consists of value-added resellers, system integrators, OEMs, distributors, cloud service providers and solution advisors.

Connors said the Nvidia Partner Network has a tiered membership, which means that while all members have access to base resources, such as training courses, partners who reach certain revenue targets and training goals will receive more advanced resources.

"Our strategy is really to reach out and recruit, nurture, develop, train, enable partners that want to do the same, meaning they want to build out a deep learning practice, for example," he said. "They want to have the expertise, the competency and also the confidence to go to a customer and solve some of their problems with our technology."

Deep Learning Institute, vCompute Give Partners New Ways To Drive AI Adoption

One of the newer ways Nvidia is pushing AI solutions is its new vComputeServer software, which allows IT administrators to flexibly manage and deploy GPU resources for AI, high-performance computing and analytics workloads using GPU-accelerated virtual machines. The chipmaker's partners for vCompute include VMware, Nutanix, Red Hat, Dell, Hewlett Packard Enterprise and Amazon Web Services.

Connors said the new capability, which launched at VMware's VMworld conference in August, is a continuation of the chipmaker's push into virtualization solutions that began with its GRID platform for virtual desktop infrastructure.

"That opens up the aperture for virtualizing a GPU quite dramatically, because now we're virtualizing the server infrastructure," he said. "So we're not just virtualizing the client PC, we can actually virtualize the server. It can work with a lot of different workloads, containerized or otherwise, that are running on a GPU. So that's a pretty exciting space for us."

But pushing for greater AI adoption isn't just about selling GPUs and GPU-accelerated platforms like DGX and vCompute. Education is a key component for Nvidia's partners, which is why the chipmaker has set up the Deep Learning Institute. The company offers the courses to customers and partners directly, but it can also enable partners to resell and provide the courses themselves.

"That's an amazing educational tool that delivers hands-on training for data scientists to learn about these frameworks, learn about how to develop these deep neural networks, and we branched out, so that it's not just general AI," Connors said. "We actually have the industry-specific DLI for automotive, autonomous vehicles, finance, healthcare, even digital content creation, even game development."

Mike Trojecki, vice president of IoT and analytics at New York-based Nvidia partner Logicalis, said his company is seeing opportunities around Nvidia's DGX platform for research and development.

"When you look at the research and development side of things, what we're trying to help our customers do is we're helping them reduce the complexity of AI workloads," he said. "When we look at it with CPU-based AI, there's performance limitations and cost increases, so we're really trying to put together a package for them so they dont have to put these pieces together."

Trojecki said Logicalis plans to take "significant advantage" of Nvidia's Deep Learning Institute program as a way to help customers understand what solutions are available and what skills they need.

"With DLI, we're able to bring our customers in to start that education journey," he said. "For us as an organization, getting customers in the room is always a good thing."

Emerson, the solutions director at ePlus, said his company also offers Deep Learning Institute courses, but it has found additional value in creating its own curriculum around managing AI infrastructure.

"Just like in the late aughts when people bought new boxes with as much cores and memory to virtualize, there's going to be a massive infrastructure investment in accelerated computing, whether that's GPUs or something else," he said. "That's the thing that I think is going to be a big opportunity for Nvidia and ePlus: changing the way people build infrastructure.

View original post here:

Nvidia Exec: We Need Partners To Push GPU-Based AI Solutions - CRN: The Biggest Tech News For Partners And The IT Channel

Posted in Ai | Comments Off on Nvidia Exec: We Need Partners To Push GPU-Based AI Solutions – CRN: The Biggest Tech News For Partners And The IT Channel