Artificial intelligence takes scam to a whole new level – The Jackson Sun

RANDY HUTCHINSON, Better Business Bureau Published 12:54 a.m. CT Jan. 1, 2020

Imagine you wired hundreds of thousands of dollars somewhere based on a call from your boss, whose voice you recognized, only to find out you were talking to a machine and the money is lost. One company executive doesn't have to imagine it happening. He and his company were victims of what some experts say is one of the first cases of voice-mimicking software, a form of artificial intelligence (AI), being used in a scam.

In a common version of the Business Email Compromise scam, an employee in a company's accounting department wires money somewhere based on what appears to be a legitimate email from the CEO, CFO or other high-ranking executive. I wrote a column last year noting that reported losses to the scam had grown from $226 million in 2014 to $676 million in 2017. The FBI says losses more than doubled in 2018 to $1.8 billion, and it recommends making a phone call to verify the legitimacy of the request rather than relying on an email.

But now you may not even be able to trust voice instructions. The CEO of a British firm received what he thought was a call from the CEO of his parent company in Germany instructing him to wire $243,000 to the bank account of a supplier in Hungary. The call actually came from a crook using AI voice technology to mimic the boss's voice. The crooks moved the money from Hungary to Mexico and on to other locations.

An executive with the firm's insurance company, which ultimately covered the loss, told The Wall Street Journal that the victim recognized the subtle German accent in his boss's voice, and moreover that it carried the man's melody. The victim became suspicious when he received a follow-up call from the boss, originating in Austria, requesting that another payment be made. He didn't make that one, but the damage was already done.

Google says crooks may also synthesize speech to fool voice authentication systems or create forged audio recordings to defame public figures. It launched a challenge to researchers to develop countermeasures against spoofed speech.

Many companies are working on voice-synthesis software and some of it is available for free. The insurer thinks the crooks used commercially available software to steal the $243,000 from its client.

Many scams rely on victims letting their emotions outrun their common sense. An example is the Grandparent Scam, in which an elderly person receives a phone call purportedly from a grandchild in trouble and needing money. Victims have panicked and wired thousands of dollars before ultimately determining that the grandchild was safe and sound at home.

The crooks often invent some reason why the grandchild's voice may not sound right, such as the child having been in an accident or it being a poor connection. How much more successful might that scam be if the voice actually sounds like the grandchild? The executive who wired the $243,000 said he thought the request was strange, but the voice sounded so much like his boss that he felt he had to comply.

The BBB recommends that companies adopt additional verification steps for wiring money, including calling the requester back on a number known to be authentic.

Randy Hutchinson is the president of the Better Business Bureau of the Mid-South. Reach him at 901-757-8607.


Global Industrial Artificial Intelligence Market 2019 Research by Business Analysis, Growth Strategy and Industry Development to 2024 – Food &…

In the market research study Global Industrial Artificial Intelligence Market 2019 by Manufacturers, Countries, Type and Application, Forecast to 2024, a comprehensive discussion of the market's current flows and patterns, market share, sales volume, informative diagrams, industry development drivers, supply and demand, and other key aspects is given. It is an important resource for various stakeholders such as traders, CEOs, buyers, and providers. The report provides guidance for exploring opportunities in the market, adding global and regional data as well as profiles of top key players. The global Industrial Artificial Intelligence market research report is an in-depth analysis that focuses on market development trends, opportunities, challenges, drivers, and limitations.

The market is analyzed by company and region, based on rate, value, and gross. The report tracks major market events such as product launches, technological developments, mergers and acquisitions, and the innovative business strategies adopted by key market players. It contains consumption figures by type and application and highlights the market's current and forecast development areas. It covers an in-depth analysis of market size (revenue), market share, major market segments, different geographic zones, the forecast for 2019-2024, and key market players.

DOWNLOAD FREE SAMPLE REPORT: https://www.fiormarkets.com/report/global-industrial-artificial-intelligence-market-2018-by-manufacturers-299826.html#sample

This report focuses on top manufacturers in the global Industrial Artificial Intelligence market, with production, price, revenue, and market share for each manufacturer, covering: Intel Corporation, Siemens AG, IBM Corporation, Alphabet Inc, Microsoft Corporation, Cisco Systems, Inc, General Electric Company, Data RPM, Sight Machine, General Vision, Inc, Rockwell Automation Inc, Mitsubishi Electric Corporation, Oracle Corporation, SAP SE

What Makes The Report Excellent?

The report offers information on market segmentation by type, application, and region. It specifies which product has the highest penetration, profit margins, and R&D status. The research covers the current size of the global Industrial Artificial Intelligence market and its growth rate based on historical statistics for 2014-2018. Each company profiled in the report is assessed for its market growth.

This Report Segments The Market:

Market by product type, 2014-2024: Type 1, Type 2, Others

Market by application, 2014-2024: Application 1, Application 2, Others

For a comprehensive understanding of market dynamics, the Industrial Artificial Intelligence market is analyzed across key geographies namely: North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, Colombia etc.), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)

ACCESS FULL REPORT: https://www.fiormarkets.com/report/global-industrial-artificial-intelligence-market-2018-by-manufacturers-299826.html

The Following Queries Are Answered In The Report:

Moreover, the global Industrial Artificial Intelligence market report calculates production and consumption rates. Upstream raw material suppliers and downstream buyers of this industry are described, and a competitive dashboard with company share analysis is also covered. The report closes with research findings, conclusions, data sources, and an appendix, along with coverage of dealers, retailers, and merchants.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

This post was originally published on Food and Beverage Herald


THE AI IN TRANSPORTATION REPORT: How automakers can use artificial intelligence to cut costs, open new revenue – Business Insider India

This is a preview of a research report from Business Insider Intelligence.

New technology is disrupting legacy automakers' business models and dampening consumer demand for purchasing vehicles. Tech-mediated models of transportation - like ride-hailing, for instance - are presenting would-be car owners with alternatives to purchasing vehicles.

In fact, a study by ride-hailing giant Lyft found that in 2017, almost 250,000 of its passengers sold their own vehicle or abandoned the idea of replacing their current car due to the availability of ride-hailing services.

AI, however, will enable automakers to take advantage of what will amount to billions of dollars in added value. For example, self-driving technology will present a $556 billion market by 2026, growing at a 39% CAGR from $54 billion in 2019, per Allied Market Research.

But firms face some major hurdles when integrating AI into their operations. Many companies are not presently equipped to begin producing AI-based solutions, which often require a specialized workforce, new infrastructure, and updated security protocols. As such, it's unsurprising that the main barriers to AI adoption are high costs, lack of talent, and lack of trust. Automakers must overcome these barriers to succeed with AI-based projects.

In The AI In Transportation Report, Business Insider Intelligence will discuss the forces driving transportation firms to AI, the market value of the technology across segments of the industry, and the potential barriers to its adoption. We will also show how some of the leading companies in the space have successfully overcome those barriers and are using AI to adapt to the digital age.


The choice is yours. But however you decide to acquire this report, you've given yourself a powerful advantage in your understanding of AI in transportation.

Go here to see the original:

THE AI IN TRANSPORTATION REPORT: How automakers can use artificial intelligence to cut costs, open new revenue - Business Insider India

IIT Hyderabad to collaborate with Telangana government on artificial intelligence – India Today

IIT Hyderabad will also assist Telangana State in developing a strategy for artificial intelligence.

Indian Institute of Technology (IIT) Hyderabad is going to collaborate with the Government of Telangana on artificial intelligence research. The institute is partnering with the Information Technology, Electronics and Communication (ITE&C) Department, Government of Telangana, to build and identify quality datasets, along with third parties such as industry partners.

They will also work on education and training, preparing curricula and content for AI courses to be delivered to college students alongside industry participants.

The MoU was signed by BS Murty, Director, IIT Hyderabad, and Jayesh Ranjan, IAS, Principal Secretary to the Government of Telangana, Departments of Information Technology (IT) and Industries and Commerce (I&C), during an event held on January 2 as part of the '2020: Declaring Telangana's Year of AI' initiative. Several MoUs with other organizations were also signed by the Government of Telangana on the occasion.

The Telangana government declared 2020 as the 'Year of Artificial Intelligence' with the objective of promoting its use in various sectors ranging from urban transportation and healthcare to agriculture and others. The ITE&C Department aims to develop the ecosystem for the industry and to leverage emerging technologies for improving service delivery as part of this collaboration.

IIT Hyderabad will also assist the Telangana State in developing a strategy for AI/HPC (Artificial Intelligence / High-Performance Computing) infrastructure for various state needs and provide technology mentorship to identified partners for exploring and building AI PoCs (proofs of concept).

The Telangana State Information Technology, Electronics and Communication Department (ITE&C Department) is a state government department with a mandate to promote the use of Information Technology (IT), act as a promoter and facilitator for IT in the state, and build an IT-driven continuum of government services.

The vision of the ITE&C Department is to leverage IT not only for effective and efficient governance, but also for sustainable economic development and inclusive social development. Its mission is to facilitate collaborative and innovative IT solutions and to plan for future growth while protecting and enhancing the quality of life.


Revisiting the rise of A.I.: How far has artificial intelligence come since 2010? – Digital Trends

2010 doesn't seem all that long ago. Facebook was already a giant, time-consuming leviathan; smartphones and the iPad were a daily part of people's lives; The Walking Dead was a big hit on televisions across America; and the most talked-about popular musical artists were the likes of Taylor Swift and Justin Bieber. So pretty much like life as we enter 2020, then? Perhaps in some ways.

One area where things most definitely have moved on in leaps and bounds, however, is artificial intelligence. Over the past decade, A.I. has made some huge advances, both technically and in the public consciousness, that mark this out as one of the most important ten-year stretches in the field's history. What have been the biggest advances? Funny you should ask; I've just written a list on exactly that topic.

To most people, few things say "A.I. is here" quite like seeing an artificial intelligence defeat two champion Jeopardy! players on prime-time television. That's exactly what happened in 2011, when IBM's Watson computer trounced Brad Rutter and Ken Jennings, the two highest-earning American game show contestants of all time, at the popular quiz show.

It's easy to dismiss attention-grabbing public displays of machine intelligence as being more about hype-driven spectacles than serious, objective demonstrations. What IBM had developed was seriously impressive, though. Unlike a game such as chess, which features rigid rules and a limited board, Jeopardy! is less easily predictable. Questions can be about anything and often involve complex wordplay, such as puns.

"I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away," Jennings told me when I was writing my book Thinking Machines. "Or at least I thought that it was." At the end of the game, Jennings scribbled a sentence on his answer board and held it up for the cameras. It read: "I for one welcome our new robot overlords."

October 2011 is most widely remembered by Apple fans as the month in which company co-founder and CEO Steve Jobs passed away at the age of 56. However, it was also the month in which Apple unveiled its A.I. assistant Siri with the iPhone 4S.

The concept of an A.I. you could communicate with via spoken words had been dreamed about for decades. Former Apple CEO John Sculley had, remarkably, predicted a Siri-style assistant back in the 1980s, getting the date of Siri's arrival right almost down to the month. But Siri was still a remarkable achievement. True, its initial implementation had some glaring weaknesses, and Apple arguably has never managed to offer a flawless smart assistant. Nonetheless, it introduced a new type of technology that was quickly pounced on for everything from Google Assistant to Microsoft's Cortana to Samsung's Bixby.

Of all the tech giants, Amazon has arguably done the most to advance the A.I. assistant in the years since. Its Alexa-powered Echo speakers have not only shown the potential of these A.I. assistants; they've demonstrated that they're compelling enough to exist as standalone pieces of hardware. Today, voice-based assistants are so commonplace they barely even register. Ten years ago most people had never used one.

Deep learning neural networks are not wholly an invention of the 2010s. The basis for today's artificial neural networks traces back to a 1943 paper by researchers Warren McCulloch and Walter Pitts. A lot of the theoretical work underpinning neural nets, such as the breakthrough backpropagation algorithm, was pioneered in the 1980s. Some of the advances that led directly to modern deep learning were carried out in the first years of the 2000s, with work like Geoff Hinton's advances in unsupervised learning.

But the 2010s are the decade the technology went mainstream. In 2010, researchers George Dahl and Abdel-rahman Mohamed demonstrated that deep learning speech recognition tools could beat what were then the state-of-the-art industry approaches. After that, the floodgates were opened. From image recognition (for example, Jeff Dean and Andrew Ng's famous paper on identifying cats) to machine translation, barely a week went by when the world wasn't reminded just how powerful deep learning could be.

It wasn't just a good PR campaign either, the way an unknown artist might finally stumble across fame and fortune after doing the same work in obscurity for decades. The 2010s are the decade in which the quantity of available data exploded, making it possible to leverage deep learning in a way that simply wouldn't have been possible at any previous point in history.

Of all the companies doing amazing A.I. work, DeepMind deserves its own entry on this list. Founded in September 2010, the deep learning company was little known until Google bought it for a seemingly bonkers $500 million in January 2014. DeepMind has more than made up for that price tag in the years since, though.

Much of DeepMind's most public-facing work has involved the development of game-playing A.I.s, capable of mastering computer games ranging from classic Atari titles like Breakout and Space Invaders (with the help of some handy reinforcement learning algorithms) to, more recently, StarCraft II and Quake III Arena.

Demonstrating the core tenet of machine learning, these game-playing A.I.s got better the more they played. In the process, they were able to form new strategies that, in some cases, even their human creators weren't familiar with. All of this work helped set the stage for DeepMind's biggest success of all...

As this list has already shown, there is no shortage of examples of A.I. beating human players at a variety of games. But Go, a Chinese board game in which the aim is to surround more territory than your opponent, was different. Unlike games in which players could be beaten simply by number crunching faster than humans are capable of, in Go the total number of allowable board positions is staggering: roughly 2 × 10^170, far more than the total number of atoms in the observable universe. That makes brute-force attempts to calculate answers virtually impossible, even using a supercomputer.

Nonetheless, DeepMind managed it. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. The next year, 60 million people tuned in live to see the world's greatest Go player, Lee Sedol, lose to AlphaGo. By the end of the series, AlphaGo had beaten Sedol four games to one.

In November 2019, Sedol announced his intention to retire as a professional Go player, citing A.I. as the reason. "Even if I become the number one, there is an entity that cannot be defeated," he said. Imagine if LeBron James announced he was quitting basketball because a robot was better at shooting hoops than he was. That's the equivalent!

In the first years of the twenty-first century, the idea of an autonomous car seemed like it would never move beyond science fiction. In MIT and Harvard economists Frank Levy and Richard Murnane's 2004 book The New Division of Labor, driving a vehicle was described as a task too complex for machines to carry out. "Executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver's behavior," they wrote.

In 2010, Google officially unveiled its autonomous car program, now called Waymo. Over the decade that followed, dozens of other companies (including tech heavy hitters like Apple) have started to develop their own self-driving vehicles. Collectively these cars have driven thousands of miles on public roads, apparently proving less accident-prone than humans in the process.

Foolproof full autonomy is still a work-in-progress, but this was nonetheless one of the most visible demonstrations of A.I. in action during the 2010s.

The dirty secret of much of today's A.I. is that its core algorithms, the technologies that make it tick, were actually developed several decades ago. What's changed is the processing power available to run these algorithms and the massive amounts of data they have to train on. Hearing about a wholly original approach to building A.I. tools is therefore surprisingly rare.

Generative adversarial networks certainly qualify. Often abbreviated to GANs, this class of machine learning system was invented by Ian Goodfellow and colleagues in 2014. No less an authority than A.I. expert Yann LeCun has described it as "the coolest idea in machine learning in the last twenty years."

At least conceptually, the theory behind GANs is pretty straightforward: take two cutting-edge artificial neural networks and pit them against each other. One network creates something, such as a generated image. The other network then attempts to work out which images are computer-generated and which are not. Over time, the adversarial process pushes the generator network to become good enough at creating images that it can reliably fool the discriminator network.
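
To make that adversarial loop concrete, here is a minimal sketch in Python using PyTorch. It trains a generator to mimic a simple 1-D Gaussian rather than images, and the network sizes and hyperparameters are invented for illustration, not taken from any published GAN:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: noise in, fake sample out. Discriminator: sample in,
# probability-of-being-real out.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data: N(4, 1.25)
    fake = G(torch.randn(64, 8))             # generator's attempt

    # Discriminator step: label real samples 1, fake samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean.
print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 4.0
```

Swap the 1-D samples for image tensors and convolutional networks and you have, in outline, the image-generating GANs described here.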

The power of generative adversarial networks was seen most widely when a collective of artists used them to create original paintings developed by A.I. One result sold for a shockingly large $432,500 at a Christie's auction in 2018.


Artificial Intelligence Identifies Previously Unknown Features Associated with Cancer Recurrence – Imaging Technology News

December 27, 2019: Artificial intelligence (AI) technology developed by the RIKEN Center for Advanced Intelligence Project (AIP) in Japan has successfully found features in pathology images from human cancer patients, without annotation, that human doctors could understand. Further, the AI identified features relevant to cancer prognosis that pathologists had not previously noted, leading to more accurate prediction of prostate cancer recurrence than pathologist-based diagnosis. Combining the AI's predictions with those of human pathologists led to even greater accuracy.

According to Yoichiro Yamamoto, M.D., Ph.D., the first author of the study published in Nature Communications, "This technology could contribute to personalized medicine by making highly accurate prediction of cancer recurrence possible by acquiring new knowledge from images. It could also contribute to understanding how AI can be used safely in medicine by helping to resolve the issue of AI being seen as a 'black box.'"

The research group led by Yamamoto and Go Kimura, in collaboration with a number of university hospitals in Japan, adopted an approach called "unsupervised learning." As long as humans teach the AI, it cannot acquire knowledge beyond what is currently known. So rather than being "taught" medical knowledge, the AI was asked to learn on its own, using unsupervised deep neural networks known as autoencoders. The researchers then developed a method for translating the features found by the AI (initially just numbers) into high-resolution images that humans can understand.
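
As a rough illustration of the idea, the Python sketch below trains a tiny autoencoder on stand-in image patches; the patch size, layer widths, and 16-dimensional feature code are invented for this example and are far smaller than anything used in the study:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

patch = 32 * 32  # flattened 32x32 grayscale patch (illustrative size)
encoder = nn.Sequential(nn.Linear(patch, 128), nn.ReLU(), nn.Linear(128, 16))
decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, patch))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

patches = torch.rand(512, patch)  # stand-in for real pathology patches

for epoch in range(200):
    codes = encoder(patches)        # unsupervised: no labels anywhere
    recon = decoder(codes)          # reconstruct the input from the code
    loss = loss_fn(recon, patches)  # penalize reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

# `codes` now holds learned features; pushing a code back through
# `decoder` yields an image, which is the spirit of the researchers'
# translate-the-numbers-into-pictures step.
```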

To perform this feat, the group acquired 13,188 whole-mount pathology slide images of the prostate from Nippon Medical School Hospital (NMSH). The amount of data was enormous, equivalent to approximately 86 billion image patches (sub-images divided up for deep neural networks), and the computation was performed on AIP's powerful RAIDEN supercomputer.

The AI learned from 11 million image patches of pathology images without diagnostic annotation. Features found by the AI included cancer diagnostic criteria that have been used worldwide, such as the Gleason score, but also features that experts were not aware of, involving the stroma (the connective tissue supporting an organ) in non-cancer areas. To evaluate these AI-found features, the research group verified recurrence-prediction performance using the remaining cases from NMSH (internal validation). The group found that the features discovered by the AI were more accurate (AUC = 0.820) than predictions based on the human-established cancer criteria developed by pathologists, the Gleason score (AUC = 0.744). Furthermore, combining the AI-found features with the human-established criteria predicted recurrence more accurately than either method alone (AUC = 0.842). The group confirmed the results using another dataset of 2,276 whole-mount pathology images (10 billion image patches) from St. Marianna University Hospital and Aichi Medical University Hospital (external validation).
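
The combine-and-compare-AUCs step can be sketched in a few lines of Python with scikit-learn. The data here are synthetic, generated so that recurrence depends on both a Gleason-like grade and an AI-style feature; the coefficients and sample size are invented, and the AUC values will not match the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
gleason = rng.normal(size=n)   # stand-in for the Gleason score
ai_feat = rng.normal(size=n)   # stand-in for an AI-found feature

# Simulate recurrence driven by both signals.
p = 1 / (1 + np.exp(-(0.8 * gleason + 1.0 * ai_feat)))
recur = rng.random(n) < p

print("Gleason alone:   ", roc_auc_score(recur, gleason))
print("AI feature alone:", roc_auc_score(recur, ai_feat))

# Combine the two predictors with logistic regression.
X = np.c_[gleason, ai_feat]
combo = LogisticRegression().fit(X, recur)
print("Combined:        ", roc_auc_score(recur, combo.predict_proba(X)[:, 1]))
```

On this toy data, as in the study, the combined model scores a higher AUC than either predictor alone.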

"I was very happy," said Yamamoto, "to discover that the AI was able to identify cancer on its own from unannotated pathology images. I was extremely surprised to see that AI found features that can be used to predict recurrence that pathologists had not identified."

He continued, "We have shown that AI can automatically acquire human-understandable knowledge from diagnostic annotation-free histopathology images. This 'newborn' knowledge could be useful for patients by allowing highly-accurate predictions of cancer recurrence. What is very nice is that we found that combining the AI's predictions with those of a pathologist increased the accuracy even further, showing that AI can be used hand-in-hand with doctors to improve medical care. In addition, the AI can be used as a tool to discover characteristics of diseases that have not been noted so far, and since it does not require human knowledge, it could be used in other fields outside medicine."

For more information: www.riken.jp/en/research/labs/aip/


Artificial intelligence is helping us talk to animals (yes, really) – Wired.co.uk

Each time any of us uses a tool such as Gmail, where there's a powerful agent to help correct our spelling and suggest sentence endings, there's an AI machine in the background, steadily getting better and better at understanding language. Sentence structures are parsed, word choices understood, idioms recognised.

That exact capability could, in 2020, grant the ability to speak with other large animals. Really. Maybe even faster than brain-computer interfaces will take the stage.

Our AI-enhanced abilities to decode languages have reached a point where they could start to parse languages not spoken by anyone alive. Recently, researchers from MIT and Google applied these abilities to the ancient scripts Linear B and Ugaritic (a precursor of Hebrew) with reasonable success (no luck so far with the older, and as-yet undeciphered, Linear A).

First, word-to-word relations for a specific language are mapped, using vast databases of text. The system searches texts to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Researchers estimate that languages, all languages, can best be described as having about 600 independent dimensions of relationship, where each word-word relationship can be seen as a vector in this space. This vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.

These vectors obey some simple rules. For example: king − man + woman = queen. Any sentence can be described as a set of vectors that in turn form a trajectory through the word space.
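
A minimal Python sketch of that analogy arithmetic is below. The three-dimensional vectors are toy values invented for illustration; real embeddings are learned from co-occurrence statistics and have hundreds of dimensions:

```python
import numpy as np

# Toy word vectors (invented values, 3 dimensions instead of ~600).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.9, 0.0]),
    "woman": np.array([0.5, 0.1, 0.1]),
    "queen": np.array([0.9, 0.0, 0.2]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman should land nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # -> queen (with these toy values)
```

Translation schemes of the kind described here then look for a mapping that aligns one language's vector space with another's.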

These relationships persist even when a language has multiple words for related concepts: the famed near-100 words Inuits have for snow will all sit in similar dimensional spaces, because each time someone talks about snow, it will be in a similar linguistic context.

Take a leap. Imagine that whale songs communicate in a word-like structure. Then, what if the relationships among whales' ideas have dimensional structures similar to those we see in human languages?

That means we should be able to map key elements of whale songs to dimensional spaces, and thus to comprehend what whales are talking about, and perhaps to talk to and hear back from them. Remember: some whales have brain volumes three times larger than those of adult humans, larger cortical areas, and lower but comparable neuron counts. African elephants have three times as many neurons as humans, though in very different distributions than are seen in our own brains. It seems reasonable to assume that the other large mammals on Earth, at the very least, have thinking and communicating and learning attributes we can connect with.

What are the key elements of whale songs and of elephant sounds? Phonemes? Blocks of repeated sounds? Tones? Nobody knows yet, but at least the journey has begun. Projects such as the Earth Species Project aim to put the tools of our time, particularly artificial intelligence and all that we have learned in using computers to understand our own languages, to the awesome task of hearing what animals have to say to each other, and to us.

There is something deeply comforting in the thought that AI language tools could do something so beautiful, going beyond completing our emails and putting ads in front of us to knitting together all thinking species. That, we perhaps can all agree, is a better, and perhaps nearer-term, ideal to reach than brain-computer communications. The beauty of communicating with other species will then be joined to the market ideal of talking to our pet dogs. (Cats may remain beyond reach.)

Mary Lou Jepsen is the founder and CEO of Openwater. John Ryan, her husband, is a former partner at Monitor Group



Quantum leap: Why we first need to focus on the ethical challenges of artificial intelligence – Economic Times

By Vivek Wadhwa

AI has the potential to be as transformative to the world as electricity, by helping us understand the patterns of information around us. But it is not close to living up to the hype. The super-intelligent machines and runaway AI that we fear are far from reality; what we have today is a rudimentary technology that requires lots of training. What's more, the phrase "artificial intelligence" might be a misnomer, because human intelligence and spirit amount to much more than what bits and bytes can encapsulate.

I encourage readers to go back to the ancient wisdom of their faith to understand the role of the soul and the deeper self. This is what shapes our consciousness and makes us human, what we are always striving to evolve and perfect. Can this be uploaded to the cloud or duplicated with computer algorithms? I don't think so.

What about the predictions that AI will enable machines to have human-like feelings and emotions? This, too, is hype. Love, hate and compassion aren't things that can be codified. That's not to say that a machine interaction can't seem human; we humans are gullible, after all. According to Amazon, more than 1 million people asked their Alexa-powered devices to marry them in 2017 alone. I doubt those marriages, should Alexa agree, would last very long!

Today's AI systems do their best to replicate the functioning of the human brain's neural networks, but their emulations are very limited. They use a technique called deep learning: after you tell a machine exactly what you want it to learn and provide it with clearly labelled examples, it analyses the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.
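
That dependence on labelled examples is easy to demonstrate. The Python sketch below trains the same small neural network on growing slices of scikit-learn's built-in handwritten-digit dataset; the network size and slice sizes are arbitrary choices for illustration:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on 50, 200, 800, then all available labelled examples.
for n in (50, 200, 800, len(X_train)):
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                        random_state=0)
    clf.fit(X_train[:n], y_train[:n])
    print(f"{n:5d} examples -> test accuracy {clf.score(X_test, y_test):.3f}")
```

Accuracy climbs as the training slice grows, which is the more-examples-more-useful point in miniature.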

Herein lies a problem, though: an AI system is only as good as the data it receives, and it is able to interpret that data only within the narrow confines of the supplied context. It doesn't understand what it has analysed, so it is unable to apply its analysis to other scenarios. And it can't distinguish causation from correlation.

AI shines in performing tasks that match patterns in order to obtain objective outcomes. Examples of what it does well include playing chess, driving a car down a street and identifying a cancer lesion in a mammogram. These systems can be incredibly helpful extensions of how humans work, and with more data they will keep improving. Although an AI machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists. And it won't be able to empathise with a patient in the way that a doctor does.

This is where AI presents its greatest risk, and what we really need to worry about: the use of AI in tasks that may have objective outcomes but incorporate what we would normally call judgement. Some such tasks exercise much influence over people's lives. Granting a loan, admitting a student to a university, or deciding whether children should be separated from their birth parents due to suspicions of abuse all fall into this category. Such judgements are highly susceptible to human biases, but they are biases that only humans themselves have the ability to detect.

And AI throws up many ethical dilemmas around how we use technology. It is being used to create killing machines for the battlefield, with drones that can recognise faces and attack people. China is using AI for mass surveillance, and wielding its analytical capabilities to assign each citizen a social-credit score based on their behaviour. In America, AI is mostly being built by white people and Asians, so it amplifies their inbuilt biases and misreads African Americans. It can lead to outcomes that prefer males over females for jobs and give men higher loan amounts than women. One of the biggest problems we are facing with Facebook and YouTube is that you are shown more and more of the same thing based on your past views, which creates filter bubbles and a hotbed of misinformation. That's all thanks to AI.

Rather than worrying about super-intelligence, we need to focus on the ethical issues around how we should be using this technology. Should it be used to recognise the faces of students who are protesting against the Citizenship (Amendment) Act? Should India install cameras and systems like China has? These are the types of questions the country needs to be asking.

The writer is a distinguished fellow and professor, Carnegie Mellon University's College of Engineering, Silicon Valley.

This story is part of the 'Tech that can change your life in the next decade' package.


Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative" as AI, said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration, which has approved more than 40 AI products in the past five years, says "the potential of digital health is nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded that the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review, or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, Fitbit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Jesse Ehrenfeld, who chairs the physician group's board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure that company safety reports are "accurate, timely and based on all available information."

When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature; in other words, only about one alert in three was right. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.


AI IN BANKING: Artificial intelligence could be a near $450 billion opportunity for banks – here are the strat – Business Insider India

Discussions, articles, and reports about the AI opportunity across the financial services industry continue to proliferate amid considerable hype around the technology, and for good reason: The aggregate potential cost savings for banks from AI applications is estimated at $447 billion by 2023, with the front and middle office accounting for $416 billion of that total, per Autonomous Next research seen by Business Insider Intelligence.

Most banks (80%) are highly aware of the potential benefits presented by AI, per an OpenText survey of financial services professionals. In fact, many banks are planning to deploy solutions enabled by AI: 75% of respondents at banks with over $100 billion in assets say they're currently implementing AI strategies, compared with 46% at banks with less than $100 billion in assets, per a UBS Evidence Lab report seen by Business Insider Intelligence. Certain AI use cases have already gained prominence across banks' operations, with chatbots in the front office and anti-payments fraud in the middle office the most mature.

The companies mentioned in this report are: Capital One, Citi, HSBC, JPMorgan Chase, Personetics, Quantexa, and U.S. Bank



The Power Of Purpose: How We Counter Hate Used Artificial Intelligence To Battle Hate Speech Online – Forbes


One of the most fascinating examples of social innovation I've been tracking recently was the We Counter Hate platform, built by Seattle-based agency Wunderman Thompson Seattle (formerly POSSIBLE), which sought to reduce hate speech on Twitter by turning retweets of hateful messages into donations for a good cause.

Here's how it worked: Using machine learning, the platform first identified hateful speech on Twitter. A human moderator then selected the most offensive and most dangerous tweets and attached an undeletable reply, which informed recipients that if they retweeted the message, a donation would be committed to an anti-hate group. In a beautiful twist, this non-profit was Life After Hate, a group that helps members of extremist groups leave and transition to mainstream life.

Unfortunately (and ironically), on the very day I reached out to the team, Twitter decided to allow users to hide replies in their feeds in an effort to empower people faced with bullying and harassment. That change eliminated the undeletable reply, the main mechanism that gave #WeCounterHate its power and allowed it to remove more than 20 million potential hate-speech impressions.

Undeterred, I caught up with some members of the core team (Shawn Herron, Jason Carmel and Matt Gilmore) to find out more about their journey.

(From left to right) Shawn Herron, Experience Technology Director; Matt Gilmore, Creative Director; and Jason Carmel, Chief Data Officer, Wunderman Thompson

Afdhel Aziz: Gentlemen, welcome. How did the idea for WeCounterHate come about?

Shawn Herron: It started when we caught wind of what the citizens of the town of Wunsiedel, Germany, were doing to combat the extremists who descended on their town every year to hold a rally and march through the streets. The townspeople had devised a peaceful way to upend the extremists' efforts by turning their hateful march into an involuntary walk-a-thon that benefitted EXIT Deutschland, an organization that helps people escape extremist groups. For every meter the neo-Nazis marched, 10 euros would be donated to EXIT Deutschland. The question became: how could we scale something like that, so anyone, anywhere, could fight hate in a meaningful way?

Jason Carmel: We knew that, to create scale, it had to be digital in nature, and Twitter seemed like the perfect problem in need of a solution. We figured if we could reduce hate on a platform of that magnitude, even by a small percentage, it could have a big impact. We started by developing an innovative machine-learning and natural-language processing technology that could identify and classify hate speech.
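
For readers curious what such an identification step can look like in code, here is a minimal sketch in Python with scikit-learn. It is not the team's actual system (which was trained on far more data, with expert input from Life After Hate); the example tweets, labels, and 0.5 threshold are all invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus: 1 = hateful, 0 = benign (invented examples).
tweets = [
    "those people don't belong in our country",
    "great game last night, well played",
    "we must drive them out of our neighborhoods",
    "happy birthday, hope it's a good one",
]
labels = [1, 0, 1, 0]

# Bag-of-words features (unigrams and bigrams) plus logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(tweets, labels)

# Score new tweets; high scorers get queued for a human moderator,
# mirroring the workflow described in this interview.
for tweet in ["drive them out of our city", "well played, everyone"]:
    score = model.predict_proba([tweet])[0, 1]
    flag = "-> human review" if score > 0.5 else "   ignore"
    print(f"{score:.2f} {flag}  {tweet}")
```

A production classifier would also need to handle the coded language and emoji combinations mentioned later in this conversation, which is where expert-informed training data matters most.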

Matt Gilmore: But we still needed the mechanic, a catch-22 that would present those looking to spread hate on the platform with a no-win decision. That's when we stumbled onto the fact that Twitter didn't allow people to delete comments on their tweets; the only way to remove a comment was to delete the post entirely. That mechanic gave us a way to put a permanent marker, in the form of an image and message, on tweets containing hate speech. It's that permanent marker that let those looking to retweet, and spread hate, know that doing so would benefit an organization they're opposed to, Life After Hate. No matter what they chose to do, love wins.

Aziz: Fascinating. So, what led you to the partnership with Life After Hate and how did that work?

Carmel: Staffed and founded by former hate group members and violent extremists, Life After Hate is a non-profit that helps people in extremist groups break from that hate-filled lifestyle. They offer a welcoming way out that's free of judgment. We collaborated with them in training the AI that's used to identify hate speech in near real time on Twitter. With the benefit of their knowledge, our AI can even find hidden forms of hate speech (coded language, secret emoji combinations) in a vast sea of tweets. Their expertise was also crucial in aligning the language we used when countering hate, making it more compassionate and matter-of-fact rather than confrontational.

Herron: Additionally, their partnership just made perfect sense on a conceptual level, with Life After Hate as the beneficiary of the effort. If you're one of those people looking to spread hate on Twitter, you're much less likely to hit retweet knowing that you'll be benefiting an organization you're opposed to.

Aziz: Was it hard to wade through that much hate speech? What surprised you?

Herron: Being exposed to all the hate-filled tweets was easily the most difficult part of the whole thing. The human brain is not wired to read and see the kinds of messages we encountered for long periods of time. At the end of the countering process, after the AI identified hate, we always relied on a human moderator to validate it before countering and tagging it. We broke the shifts up among many volunteers, but it was always quite difficult when it was your shift.

Carmel: We learned that identifying hate speech was much easier than categorizing it. Our initial understanding of hate speech, especially before Life After Hate helped us, was really just the movie version of hate speech, and it missed a lot of hidden context. We were also surprised at how much the language evolved relative to current events. It was definitely something we had to stay on top of.

We were surprised by how broad a spectrum of people the hate was coming from. We went in thinking we'd just encounter a bunch of thugs, but many of these people held themselves out as academics, comedians, or historians. The brands of hate some of them shared were nuanced and, in an insidious way, very compelling.

We were caught off guard by the amount of time and effort those who disliked our platform would invest in slamming or discrediting it. A lot of these people are quite savvy and would go to great lengths to undermine our efforts. Outside of the things we dealt with on Twitter, one YouTube hate-fluencer made a video, close to an hour long, that wove all sorts of intricate theories and conspiracies about our platform.

Gilmore: We were also surprised by how wrong our instincts were. When we first started, the things we were seeing made us angry and frustrated. We wanted to come after these hateful people in an aggressive way. We wanted to fight back. Life After Hate was essential in helping course-correct our tone and message. They helped us understand (and we'd like more people to know) the power of empathy combined with education, and its ability to remove walls rather than build them between people. It can be difficult to take this approach, but it ultimately gets everyone to a better place.

Aziz: I love that idea, empathy with education. What were the results of the work you've done so far? How did you measure success?

Carmel: The WeCounterHate platform radically outperformed expectations in identifying hate speech (a 91% success rate relative to a human moderator), as we continued to improve the model over the course of the project.

When @WeCounterHate replied to a tweet containing hate, it reduced the spread of that hate by an average of 54%. Furthermore, 19% of the "hatefluencers" deleted their original tweet outright once it had been countered.

By our estimates, the hate tweets we countered were shared roughly 20 million fewer times compared to similar hate tweets by the same authors that weren't countered.
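
As a reader's aid, here is the shape of that counterfactual arithmetic: retweet counts on countered tweets are compared against each author's typical uncountered baseline. The numbers below are invented for illustration and are not the project's data.

```python
# Illustrative counterfactual: fewer shares on countered tweets, measured
# against each author's typical uncountered baseline. Invented numbers.
countered = {"author_a": 120, "author_b": 45}   # retweets after countering
baseline  = {"author_a": 260, "author_b": 98}   # typical uncountered retweets

fewer_shares = sum(baseline[a] - countered[a] for a in countered)
reduction = 1 - sum(countered.values()) / sum(baseline.values())
print(fewer_shares, f"{reduction:.0%}")  # 193 fewer shares, a 54% reduction
```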

Gilmore: It was a pretty mind-bending exercise for people working in an ad agency, who have spent our entire careers trying to gain exposure for the work we do on behalf of clients, to suddenly be trying to reduce impressions. We even began referring to WCH as the world's first reverse-media plan, designed to reduce impressions by stopping retweets.

Aziz: So now that the project has ended, how do you hope to take this idea forward in an open source way?

Herron: Our hope was to counter hate speech online while collecting insightful data about how hate speech propagates. Going forward, hopefully this data will allow experts in the field to address the hate speech problem at a more systemic level. Our goal is to publicly open-source the archived data that has been gathered, hopefully next quarter (Q1 2020).

I love this idea on so many different levels. The ingenuity of finding a way to counteract hate speech without resorting to censorship. The partnership with Life After Hate to improve the sophistication of the detection. And the potential for this same model to be applied to so many different problems in the world (anyone want to build a version for climate change deniers?). It proves that the creativity of the advertising world can truly be turned into a force for good, and for that I salute the team for showing us this powerful act of moral imagination.

Go here to see the original:

The Power Of Purpose: How We Counter Hate Used Artificial Intelligence To Battle Hate Speech Online - Forbes

The skills needed to land the hottest tech job of 2020 – Business Insider Nordic

Artificial intelligence is one of the hottest topics in corporate America. So it's no surprise that companies are rushing to find the talent to support the push to adopt the advanced tech.

Demand for AI specialists grew 74% over the last five years, and the role is expected to be one of the most highly sought-after in 2020, according to a new study from LinkedIn. Among the necessary skills for the position are machine learning and natural language processing.

But it's not just AI experts that are in high demand. Cloud engineers, developers, cybersecurity experts, and data scientists also made the list. Alongside the individuals needed to support the technology, companies are also seeking leaders, like a chief transformation officer and chief culture officer, to oversee the adoption. Even non-tech positions like managing the customer experience, a key focus for many digital overhauls, are hot positions for 2020.

Those projections indicate just how aggressively organizations are trying to adopt more sophisticated technology, but also the major problem they face in navigating the skills gap and the tight labor market.

A struggle, however, will be finding the talent to fill the vacancies. One way companies are tackling that challenge is by upskilling their current employees.

Jeff McMillan, the chief data and analytics officer for Morgan Stanley's wealth management division, runs an internal AI boot camp that covers the basics of the technology. And Microsoft and others are working with online educational platforms like OpenClassrooms to craft comprehensive curricula that give existing workers the chance to train for new jobs within the organization.

With tech-heavy skills in such short supply, some experts even suggest that corporations should appoint a "chief reskilling" officer to manage the push to reskill employees. "What this new role will be doing is future thinking, future strategy, future alignment with talent and people," Jason Wingard, the dean of the School of Professional Studies at Columbia University, previously told Business Insider.

While investments in larger, enterprise-wide AI projects could slip in 2020, the push to adopt the tech will remain fervent, creating a lucrative job market for those who have the skills to support the shift.

Go here to read the rest:

The skills needed to land the hottest tech job of 2020 - Business Insider Nordic

In 2020, lets stop AI ethics-washing and actually do something – MIT Technology Review

Last year, just as I was beginning to cover artificial intelligence, the AI world was getting a major wake-up call. There were some incredible advancements in AI research in 2018, from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile illustrations of the harm these systems can cause when they are deployed too hastily.

A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits on dark-skinned people, but tech giants continued to peddle them anyway, to customers including law enforcement. At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.

In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It's hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people's privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?


But talk is just that; it's not enough. For all the lip service paid to these issues, many organizations' AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We're falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people's belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers (content moderators, data labelers, transcribers) who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI, from community groups, policymakers, and tech employees themselves. Several cities, including San Francisco and Oakland, California, and Somerville, Massachusetts, banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies' use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field's runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both touched and surprised by how many of the keynotes, workshops, and posters focused on real-world problems, both those created by AI and those it could help solve.

So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn't lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so they could one day help us solve some of our toughest challenges.

AI, in other words, is meant to help humanity prosper. Lets not forget.


See the rest here:

In 2020, lets stop AI ethics-washing and actually do something - MIT Technology Review

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. This ending decade saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head into another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of its autonomy will no longer be just thought experiments but time-sensitive problems.

One area to keep an eye on going into the new decade will be partially defined by this question: what kind of legal status will A.I.s be granted as their capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as it would to humans.

The logic behind this is that A.I.s of the future could have just as much agency, and as much potential to cause disruption, as any non-robotic being. Francois Piccione, policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

Artificial intelligence is being seen in many quarters as the most transformative technology since the invention of electricity, said Piccione. To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible.

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts increasing legal scrutiny in the coming years over who is legally responsible for the actions of A.I., whether their owners or the companies designing them. Instead of citizenship or visas for A.I.s, this could lead to further restrictions on the humans who travel with them and on the ways A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20. Read a related story here.

Read this article:

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test - Inverse

Samsung to announce its Neon artificial intelligence project at CES 2020 – Firstpost

tech2 News Staff | Dec 26, 2019 17:21:10 IST

Samsung has been teasing Neon for quite a while on social media. It appears to be an artificial intelligence (AI) project from its research arm, and the company will announce more details during CES 2020 in January.

Samsung Neon AI project. Image: Neon

Neon hasn't really revealed any details. It's being developed under Samsung Technology & Advanced Research Labs (STAR Labs). STAR Labs could be a reference to the Scientific and Technological Advanced Research Laboratories (S.T.A.R. Labs) from DC Comics, but we can't confirm that. The research division is led by Pranav Mistry, who earlier worked on the Samsung Galaxy Gear and is now the President and CEO of STAR Labs.

The company has set up a website with a landing page that doesn't really mention any details. It only carries the message, "Have you ever met an Artificial?" It has been continuously posting images on Twitter and Instagram, including a couple of videos. These images contain the same message in different languages as well, indicating that the AI has multilingual functionality. Mistry has also been teasing Neon on his own Twitter account.

This won't be Samsung's first venture into AI, since it already has the Bixby digital assistant; however, Bixby never really took off. CES 2020 begins on 7 January, and we'll get to know more about Neon during the expo.


Read more:

Samsung to announce its Neon artificial intelligence project at CES 2020 - Firstpost

How Artificial Intelligence Is Totally Changing Everything – HowStuffWorks


Back in October 1950, British techno-visionary Alan Turing published an article called "Computing Machinery and Intelligence" in the journal MIND that raised what at the time must have seemed to many like a science-fiction fantasy.

"May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Turing asked.

Turing thought that they could. Moreover, he believed, it was possible to create software for a digital computer that enabled it to observe its environment and to learn new things, from playing chess to understanding and speaking a human language. And he thought machines eventually could develop the ability to do that on their own, without human guidance. "We may hope that machines will eventually compete with men in all purely intellectual fields," he predicted.

Nearly 70 years later, Turing's seemingly outlandish vision has become a reality. Artificial intelligence, commonly referred to as AI, gives machines the ability to learn from experience and perform cognitive tasks, the sort of stuff that once only the human brain seemed capable of doing.

AI is rapidly spreading throughout civilization, where it has the promise of doing everything from enabling autonomous vehicles to navigate the streets to making more accurate hurricane forecasts. On an everyday level, AI figures out what ads to show you on the web, and powers those friendly chatbots that pop up when you visit an e-commerce website to answer your questions and provide customer service. And AI-powered personal assistants in voice-activated smart home devices perform myriad tasks, from controlling our TVs and doorbells to answering trivia questions and helping us find our favorite songs.

But we're just getting started with it. As AI technology grows more sophisticated and capable, it's expected to massively boost the world's economy, creating about $13 trillion worth of additional activity by 2030, according to a McKinsey Global Institute forecast.

"AI is still early in adoption, but adoption is accelerating and it is being used across all industries," says Sarah Gates, an analytics platform strategist at SAS, a global software and services firm that focuses upon turning data into intelligence for clients.

It's even more amazing, perhaps, that our existence is quietly being transformed by a technology that many of us barely understand, if at all; something so complex that even scientists have a tricky time explaining it.

"AI is a family of technologies that perform tasks that are thought to require intelligence if performed by humans," explains Vasant Honavar, a professor and director of the Artificial Intelligence Research Laboratory at Penn State University. "I say 'thought,' because nobody is really quite sure what intelligence is."

Honavar describes two main categories of intelligence. There's narrow intelligence, which is achieving competence in a narrowly defined domain, such as analyzing images from X-rays and MRI scans in radiology. General intelligence, in contrast, is a more human-like ability to learn about anything and to talk about it. "A machine might be good at some diagnoses in radiology, but if you ask it about baseball, it would be clueless," Honavar explains. Humans' intellectual versatility "is still beyond the reach of AI at this point."

According to Honavar, there are two key pieces to AI. One of them is the engineering part, that is, building tools that utilize intelligence in some way. The other is the science of intelligence, or rather, how to enable a machine to come up with a result comparable to what a human brain would come up with, even if the machine achieves it through a very different process. To use an analogy, "birds fly and airplanes fly, but they fly in completely different ways," Honavar says. "Even so, they both make use of aerodynamics and physics. In the same way, artificial intelligence is based upon the notion that there are general principles about how intelligent systems behave."

AI is "basically the results of our attempting to understand and emulate the way that the brain works and the application of this to giving brain-like functions to otherwise autonomous systems (e.g., drones, robots and agents)," Kurt Cagle, a writer, data scientist and futurist who's the founder of consulting firm Semantical, writes in an email. He's also editor of The Cagle Report, a daily information technology newsletter.

And while humans don't really think like computers, which utilize circuits, semi-conductors and magnetic media instead of biological cells to store information, there are some intriguing parallels. "One thing we're beginning to discover is that graph networks are really interesting when you start talking about billions of nodes, and the brain is essentially a graph network, albeit one where you can control the strengths of processes by varying the resistance of neurons before a capacitive spark fires," Cagle explains. "A single neuron by itself gives you a very limited amount of information, but fire enough neurons of varying strengths together, and you end up with a pattern that gets fired only in response to certain kinds of stimuli, typically modulated electrical signals through the DSPs [that is digital signal processing] that we call our retina and cochlea."

"Most applications of AI have been in domains with large amounts of data," Honavar says. To use the radiology example again, the existence of large databases of X-rays and MRI scans that have been evaluated by human radiologists, makes it possible to train a machine to emulate that activity.

AI works by combining large amounts of data with intelligent algorithms, series of instructions that allow the software to learn from patterns and features of the data, as this SAS primer on artificial intelligence explains.
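
As a toy illustration of that data-plus-algorithm idea, in the spirit of the radiology example above, the sketch below fits a classifier to a handful of invented scan measurements; the features, labels, and model choice are all hypothetical.

```python
# Minimal "learn from labeled data" sketch; all numbers are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [lesion_size_mm, opacity_score]; label 1 = abnormal scan.
scans = [[12.0, 0.9], [2.0, 0.1], [9.5, 0.7], [1.5, 0.2]]
labels = [1, 0, 1, 0]

model = DecisionTreeClassifier().fit(scans, labels)
# The decision rule is learned from the examples, not hand-written.
print(model.predict([[11.0, 0.8]]))  # -> [1]
```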

In simulating the way a brain works, AI utilizes a bunch of different subfields, as the SAS primer notes.

The concept of AI dates back to the 1940s, and the term "artificial intelligence" was introduced at a 1956 conference at Dartmouth College. Over the next two decades, researchers developed programs that played games and did simple pattern recognition and machine learning. Cornell University scientist Frank Rosenblatt developed the Perceptron, the first artificial neural network, which ran on a 5-ton (4.5-metric ton), room-sized IBM computer that was fed punch cards.

But it wasn't until the mid-1980s that a second wave of more complex, multilayer neural networks were developed to tackle higher-level tasks, according to Honavar. In the early 1990s, another breakthrough enabled AI to generalize beyond the training experience.

In the 1990s and 2000s, other technological innovations the web and increasingly powerful computers helped accelerate the development of AI. "With the advent of the web, large amounts of data became available in digital form," Honavar says. "Genome sequencing and other projects started generating massive amounts of data, and advances in computing made it possible to store and access this data. We could train the machines to do more complex tasks. You couldn't have had a deep learning model 30 years ago, because you didn't have the data and the computing power."

AI is different from, but related to, robotics, in which machines sense their environment, perform calculations and do physical tasks either by themselves or under the direction of people, from factory work and cooking to landing on other planets. Honavar says that the two fields intersect in many ways.

"You can imagine robotics without much intelligence, purely mechanical devices like automated looms," Honavar says. "There are examples of robots that are not intelligent in a significant way." Conversely, there's robotics where intelligence is an integral part, such as guiding an autonomous vehicle around streets full of human-driven cars and pedestrians.

"It's a reasonable argument that to realize general intelligence, you would need robotics to some degree, because interaction with the world, to some degree, is an important part of intelligence," according to Honavar. "To understand what it means to throw a ball, you have to be able to throw a ball."

AI quietly has become so ubiquitous that it's already found in many consumer products.

"A huge number of devices that fall within the Internet of Things (IoT) space readily use some kind of self-reinforcing AI, albeit very specialized AI," Cagle says. "Cruise control was an early AI and is far more sophisticated when it works than most people realize. Noise dampening headphones. Anything that has a speech recognition capability, such as most contemporary television remotes. Social media filters. Spam filters. If you expand AI to cover machine learning, this would also include spell checkers, text-recommendation systems, really any recommendation system, washers and dryers, microwaves, dishwashers, really most home electronics produced after 2017, speakers, televisions, anti-lock braking systems, any electric vehicle, modern CCTV cameras. Most games use AI networks at many different levels."

AI already can outperform humans in some narrow domains, just as "airplanes can fly longer distances, and carry more people than a bird could," Honavar says. AI, for example, is capable of processing millions of social media network interactions and gaining insights that can influence users' behavior an ability that the AI expert worries may have "not so good consequences."

It's particularly good at making sense of massive amounts of information that would overwhelm a human brain. That capability enables internet companies, for example, to analyze the mountains of data that they collect about users and employ the insights in various ways to influence our behavior.

But AI hasn't made as much progress so far in replicating human creativity, Honavar notes, though the technology already is being utilized to compose music and write news articles based on data from financial reports and election returns.

Given AI's potential to do tasks that used to require humans, it's easy to fear that its spread could put most of us out of work. But some experts envision that while the combination of AI and robotics could eliminate some positions, it will create even more new jobs for tech-savvy workers.

"Those most at risk are those doing routine and repetitive tasks in retail, finance and manufacturing," Darrell West, a vice president and founding director of the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization, explains in an email. "But white-collar jobs in health care will also be affected and there will be an increase in job churn with people moving more frequently from job to job. New jobs will be created but many people will not have the skills needed for those positions. So the risk is a job mismatch that leaves people behind in the transition to a digital economy. Countries will have to invest more money in job retraining and workforce development as technology spreads. There will need to be lifelong learning so that people regularly can upgrade their job skills."

And instead of replacing human workers, AI may be used to enhance their intellectual capabilities. Inventor and futurist Ray Kurzweil has predicted that by the 2030s, AI will have achieved human levels of intelligence, and that it will be possible to have AI that goes inside the human brain to boost memory, turning users into human-machine hybrids. As Kurzweil has described it, "We're going to expand our minds and exemplify these artistic qualities that we value."

More:

How Artificial Intelligence Is Totally Changing Everything - HowStuffWorks

Artificial intelligence jobs on the rise, along with everything else AI – ZDNet

AI jobs are on the upswing, as are the capabilities of AI systems. The speed of deployments has also increased exponentially. It's now possible to train an image-processing algorithm in about a minute -- something that took hours just a couple of years ago.

These are among the key metrics of AI tracked in the latest release of the AI Index, an annual data update from Stanford University's Human-Centered Artificial Intelligence Institute, published in partnership with McKinsey Global Institute. The index tracks AI growth across a range of metrics, from papers published to patents granted to employment numbers.

Here are some key measures extracted from the 290-page index:

AI conference attendance: One important metric is conference attendance, for starters. That's way up. Attendance at AI conferences continues to increase significantly. In 2019, the largest, NeurIPS, expects 13,500 attendees, up 41% over 2018 and over 800% relative to 2012. Even conferences such as AAAI and CVPR are seeing annual attendance growth around 30%.

AI jobs: Another key metric is the number of AI-related jobs opening up. This is also on the upswing, the index shows. Looking at Indeed postings from 2015 through October 2019, the index shows the share of AI jobs in the US has increased five-fold since 2010, with the fraction of total jobs posted rising from 0.26% to 1.32% in October 2019. While this is still a small fraction of total jobs, it's worth mentioning that these are only technology-related positions working directly in AI development; there is likely an increasingly large share of jobs being enhanced or re-ordered by AI.

Among AI technology positions, the leading category is job postings mentioning "machine learning" (58% of AI jobs), followed by artificial intelligence (24%), deep learning (9%), and natural language processing (8%). Deep learning is the fastest-growing job category, growing 12-fold between 2015 and 2018; artificial intelligence postings grew five-fold, machine learning four-fold, and natural language processing two-fold.

Compute capacity: Moore's Law has gone into hyperdrive, the AI Index shows, with substantial progress in ramping up the computing capacity required to run AI. Prior to 2012, AI results closely tracked Moore's Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months, a mind-boggling net increase of 300,000x. By contrast, the typical two-year doubling period that characterized Moore's Law previously would yield only a 7x increase, the index's authors point out.
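
To see how those multiples compound, here is a quick back-of-envelope sketch. The ~5.5-year span is an illustrative assumption (the index's exact endpoint dates differ), but it shows why a 3.4-month doubling period dwarfs a 24-month one.

```python
# Growth factor after `months` at a given doubling period: 2 ** (months / period).
def growth(months: float, doubling_months: float) -> float:
    return 2 ** (months / doubling_months)

span = 5.5 * 12  # illustrative ~5.5-year span, in months
print(f"3.4-month doubling: {growth(span, 3.4):,.0f}x")  # hundreds of thousands
print(f"24-month doubling:  {growth(span, 24):.1f}x")    # roughly 7x
```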

Training time: The amount of time it takes to train AI algorithms has dropped dramatically; training a large image-classification system on cloud infrastructure now takes roughly 1/180th of the time it did just two years ago. Two years ago, it took three hours to train such a system; by July 2019, that time had shrunk to 88 seconds.

Commercial machine translation: One indicator of where AI hits the ground running is machine translation, for example from English to Chinese. The number of commercially available systems with pre-trained models and public APIs has grown rapidly, the index notes, from eight in 2017 to over 24 in 2019. Increasingly, machine-translation systems provide a full range of customization options: pre-trained generic models, automatic domain adaptation to build better engines from a customer's own data, and custom terminology support.

Computer vision: Another benchmark is accuracy of image recognition. The index tracked reporting through ImageNet, a public dataset of more than 14 million images created to address the issue of scarcity of training data in the field of computer vision. In the latest reporting, the accuracy of image recognition by systems has reached about 85%, up from about 62% in 2013.

Natural language processing: AI systems keep getting smarter, to the point that they are surpassing low-level human performance in natural language processing, and benchmarking standards have had to keep pace. GLUE, the General Language Understanding Evaluation benchmark, was released in May 2018 to measure AI performance on text-processing tasks. Submitted systems crossed the non-expert human performance threshold in June 2019, the index notes. In fact, the performance of AI systems has been so dramatic that industry leaders had to release a higher-level benchmark, SuperGLUE, "so they could test performance after some systems surpassed human performance on GLUE."

Link:

Artificial intelligence jobs on the rise, along with everything else AI - ZDNet

Why Cognitive Technology May Be A Better Term Than Artificial Intelligence – Forbes


One of the challenges for those tracking the artificial intelligence industry is that, surprisingly, there's no accepted, standard definition of what artificial intelligence really is. AI luminaries all have slightly different definitions. Rodney Brooks says that artificial intelligence doesn't mean one thing; it's a collection of practices and pieces that people put together. Of course, that's not particularly settling for companies that need to understand the breadth of AI technologies and how to apply them to their specific needs.

In general, most people would agree that the fundamental goal of AI is to enable machines to have the cognition, perception, and decision-making capabilities that previously only humans or other intelligent creatures have had. Max Tegmark simply defines AI as intelligence that is not biological. Simple enough, but we don't fully understand what biological intelligence itself means, so trying to build it artificially is a challenge.

At the most abstract level, AI is machine behavior and function that mimics the intelligence and behavior of humans. Specifically, this usually refers to what we have come to think of as learning, problem solving, understanding and interacting with the real-world environment, and conversation and linguistic communication. However, the specifics matter, especially when we're trying to apply that intelligence to the very specific problems businesses, organizations, and individuals have.

Saying AI but meaning something else

There is certainly a subset of those pursuing AI technologies whose goal is solving the ultimate problem: creating artificial general intelligence (AGI) that can handle any problem, situation, and thought process that a human can. AGI is the goal of much AI research in academic and lab settings, as it gets to the heart of the basic question of whether intelligence is something only biological entities can have. But the majority of those talking about AI in the market today are not talking about AGI or solving these fundamental questions of intelligence. Rather, they are looking at applying very specific subsets of AI to narrow problem areas. This is the classic broad/narrow (strong/weak) AI discussion.

Since no one has successfully built an AGI solution, it follows that all current AI solutions are narrow. While there certainly are a few narrow AI solutions that aim to solve broader questions of intelligence, the vast majority are not trying to achieve anything greater than the specific problem the technology is being applied to. What we mean is that practitioners are not doing narrow AI for the sake of solving a general AI problem, but narrow AI for the sake of narrow AI. It's not going to get any broader for those particular organizations. In fact, many enterprises don't really care much about AGI, and the goal of AI for those organizations is not AGI.

If that's the case, then it seems the industry's perception of what AI is and where it is heading differs from what many in research or academia think. What interests enterprises most about AI is not that it solves questions of general intelligence, but that there are specific things humans have been doing in the organization that they would now like machines to do. The range of those tasks differs depending on the organization and the sort of problems it is trying to solve. If this is the case, then why bother with an ill-defined term whose original definition and goals are diverging rapidly from what is actually being put into practice?

What are cognitive technologies?

Perhaps a better term for narrow AI applied solely for the sake of those narrow applications is cognitive technology. Rather than trying to build an artificial intelligence, enterprises are leveraging cognitive technologies to automate and enable a wide range of problem areas that require some aspect of cognition. Generally, these aspects of cognition can be grouped into three P categories, borrowed from the autonomous vehicles industry:

From this perspective, it's clear that cognitive technologies are indeed a subset of artificial intelligence technologies; the main difference is that AI can be applied both toward the goals of AGI and toward narrowly focused AI applications. On the other hand, using the term cognitive technology instead of AI is an acceptance of the fact that the technology being applied borrows from AI capabilities but doesn't have ambitions of being anything other than technology applied to a narrow, specific task.

Surviving the next AI winter

The mood in the AI industry is noticeably shifting. Marketing hype, venture capital dollars, and government interest are all helping to push demand for AI skills and technology to its limits. We are still very far away from the end vision of AGI. Companies are quickly realizing the limits of AI technology, and we risk industry backlash as enterprises push back on what is being overpromised and underdelivered, just as happened in the first AI Winter. The big concern is that interest will cool too much and AI investment and research will again slow, leading to another AI Winter. However, perhaps the issue was never with the term artificial intelligence. AI has always been a lofty goal upon which to set the sights of academic research and interest, much like building settlements on Mars or interstellar travel. And just as the Space Race resulted in technologies with broad adoption today, so too will the AI quest result in cognitive technologies with broad adoption, even if we never achieve the goals of AGI.

Go here to see the original:

Why Cognitive Technology May Be A Better Term Than Artificial Intelligence - Forbes

What Is The Artificial Intelligence Of Things? When AI Meets IoT – Forbes

Individually, the Internet of Things (IoT) and artificial intelligence (AI) are powerful technologies. When you combine them, you get AIoT, the artificial intelligence of things. You can think of internet of things devices as the digital nervous system, while artificial intelligence is the brain of the system.


What is AIoT?

To fully understand AIoT, you must start with the internet of things. When things such as wearable devices, refrigerators, digital assistants, sensors and other equipment are connected to the internet, can be recognized by other devices, and can collect and process data, you have the internet of things. Artificial intelligence is when a system can complete a set of tasks or learn from data in a way that seems intelligent. When artificial intelligence is added to the internet of things, those devices can analyze data, make decisions, and act on that data without involvement by humans.

These are "smart" devices, and they help drive efficiency and effectiveness. The intelligence of AIoT enables data analytics that is then used to optimize a system and generate higher performance and business insights and create data that helps to make better decisions and that the system can learn from.

Practical Examples of AIoT

The combo of internet of things and smart systems makes AIoT a powerful and important tool for many applications. Here are a few:

Smart Retail

In a smart retail environment, a camera system equipped with computer vision capabilities can use facial recognition to identify customers when they walk through the door. The system gathers intel about customers, including their gender, product preferences, and traffic flow, analyzes the data to accurately predict consumer behavior, and then uses that information to make decisions about store operations, from marketing to product placement. For example, if the system detects that the majority of customers walking into the store are Millennials, it can push out product advertisements or in-store specials that appeal to that demographic, driving up sales. Smart cameras could also identify shoppers and let them skip the checkout, as happens in the Amazon Go store.

Drone Traffic Monitoring

In a smart city, there are several practical uses of AIoT, including traffic monitoring by drones. If traffic can be monitored in real time and adjustments to the traffic flow can be made, congestion can be reduced. When drones are deployed to monitor a large area, they can transmit traffic data, and AI can then analyze that data and make decisions about how best to alleviate congestion, adjusting speed limits and the timing of traffic lights without human involvement.

The ET City Brain, a product of Alibaba Cloud, optimizes the use of urban resources using AIoT. The system can detect accidents and illegal parking, and it can change traffic lights to help ambulances reach patients who need assistance faster.
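
To ground the idea, here is a deliberately simplified sketch of congestion-based signal retiming. It is not how ET City Brain works; the scaling rule and all numbers are invented for illustration.

```python
# Toy signal retiming: extend green time where observed congestion is worst.
def retime_lights(congestion_by_intersection, base_green=30, max_green=90):
    """Map congestion levels (0.0-1.0) to green-light seconds."""
    return {node: min(max_green, int(base_green * (1 + 2 * level)))
            for node, level in congestion_by_intersection.items()}

# Congestion estimates as might be derived from drone video analysis.
print(retime_lights({"5th & Main": 0.9, "Oak & 2nd": 0.2}))
# {'5th & Main': 84, 'Oak & 2nd': 42}
```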

Office Buildings

Another area where artificial intelligence and the internet of things intersect is smart office buildings. Some companies install a network of smart environmental sensors in their office buildings. These sensors can detect what personnel are present and adjust temperatures and lighting accordingly to improve energy efficiency. In another use case, a smart building can control building access through facial recognition technology. The combination of connected cameras and artificial intelligence that compares images taken in real time against a database to determine who should be granted access is AIoT at work. In a similar way, employees wouldn't need to clock in, and attendance at mandatory meetings wouldn't have to be taken, since the AIoT system takes care of it.

Fleet Management and Autonomous Vehicles

AIoT is used in fleet management today to help monitor a fleet's vehicles, reduce fuel costs, track vehicle maintenance, and identify unsafe driver behavior. Through IoT devices such as GPS units and other sensors, combined with an artificial intelligence system, companies are able to manage their fleets better thanks to AIoT.

Another way AIoT is used today is in autonomous vehicles, such as Tesla's Autopilot system, which uses radars, sonars, GPS, and cameras to gather data about driving conditions, and then an AI system to make decisions about the data the internet of things devices are gathering.

Autonomous Delivery Robots

Similar to how AIoT is used with autonomous vehicles, autonomous delivery robots are another example of AIoT in action. These robots have sensors that gather information about the environment the robot is traversing, and they make moment-to-moment decisions about how to respond through an onboard AI platform.

Visit link:

What Is The Artificial Intelligence Of Things? When AI Meets IoT - Forbes

One key to artificial intelligence on the battlefield: trust – C4ISRNet

To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.

Derived from a Prussian school of battle, mission command is a form of decentralized command and control. Think about a commander who is given an objective and then trusted to meet that goal to the best of their ability and to do so without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and hurdles, obstacles that map closely onto the autonomous battlefield.

At one level, mission command really is a management of trust, said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. We're continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?

The problem for military leaders then is two-fold: can humans trust the information and advice they receive from artificial intelligence? And, related, can those humans also trust that any autonomous machines they are directing are pursuing objectives the same way people would?

To the first point, Robert Brown, director of the Pentagon's multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.

Mission command is saying: you're going to provide your subordinates the depth, the best data you can get them, and you're going to need AI to get that quality data. But then that's balanced with their own ground and the art of what's happening, Brown said. We have to be careful. You certainly can lose that speed and velocity of decision.

Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.

How do we create the right type of decision aids that still empower people to make the call, but gives them the information content to move faster? said Tony Frazier, an executive at Maxar Technologies.


An intelligence product, using AI to provide analysis and information to combatants, will have to fall in the sweet spot of offering actionable intelligence, without bogging the recipient down in details or leaving them uninformed.

One thing that's remained consistent is folks will do one of three things with overwhelming information, Brown said. They will wait for perfect information. They'll just wait, wait, wait; they'll never have perfect information, and adversaries [will have] done 10 other things, by the way. Or they'll be overwhelmed and disregard the information.

The third path users will take, Brown said, is the very task commanders want them to follow: find golden needles in haystacks of information to help them make a decision in a timely manner.

Getting there, however, where information is empowering instead of paralyzing or disheartening, is the work of training. Adapting for the future means practicing in the future environment, and that means getting new practitioners familiar with the kinds of information they can expect on the battlefield.

Our adversaries are going to bring a lot of dilemmas our way and so our ability to comprehend those challenges and then hopefully not just react but proactively do something to prevent those actions, is absolutely critical, said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.

When a battle involves thousands of kill chains and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter these threats. Meanwhile, it will be the role of the human in the loop to take that filtered information and respond as best they can to the threats arrayed against them.
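
As a purely illustrative sketch of that filtering role, the snippet below ranks a stream of detected threats and surfaces only the few most urgent for a human decision. The scoring weights and fields are invented for the example, not any fielded system's logic.

```python
# Toy threat filter: score everything, show a human only the top few.
import heapq

threats = [
    {"id": "track-01", "proximity": 0.9, "speed": 0.8},
    {"id": "track-02", "proximity": 0.2, "speed": 0.1},
    {"id": "track-03", "proximity": 0.7, "speed": 0.9},
]

def urgency(t):
    # Invented weighting; a real system would use far richer features.
    return 0.6 * t["proximity"] + 0.4 * t["speed"]

for threat in heapq.nlargest(2, threats, key=urgency):
    print(f"for human review: {threat['id']} (urgency {urgency(threat):.2f})")
```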

What does it mean to articulate mission command in that environment, the understanding, the intent, and the trust? said Kumashiro, referring to the fast pace of AI filtering. When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do.

Planning not just for how these AI tools work in ideal conditions, but how they will hold up under the degradation of a modern battlefield, is essential for making technology an aide, and not a hindrance, to the forces of the future.

If the data goes away and you've still got the mission, you've got to attend to it, said Brown. That's a huge factor as well for practice. If you're relying only on the data, you'll fail miserably in degraded mode.

Go here to read the rest:

One key to artificial intelligence on the battlefield: trust - C4ISRNet