AI Augmentation: The Real Future of Artificial Intelligence – Forbes

While artificial intelligence continues to drive completely autonomous technologies, its real value comes in enhancing the capabilities of the people that use it.

I love Grammarly, the writing-correction software from Grammarly, Inc. As a writer, I have found it invaluable time and time again: it pops up quietly to say that I forgot a comma, got a bit too verbose in a sentence, or used too many adverbs. I even sprang for the professional version.

Besides endorsing it, I bring Grammarly up for another reason: it is the face of augmentative AI. It is AI because it uses some very sophisticated (and likely recursive) algorithms to determine when grammar is being used improperly, or even to recommend a better way to phrase things. It is augmentative because, rather than replacing the writer, it is intended to nudge the author in a particular direction, to give them a certain degree of editorial expertise so that they can publish with more confidence or reduce the workload on a copy editor.

This may sound like it eliminates the need for a copy editor, but even that's not really the case. The truth is, many copy editors also use Grammarly, and prefer that their writers do so as well, because they usually prefer the subtler task of improving well-wrought prose to the tedious and maddening task of correcting grammatical and spelling errors.

As a journalist I use Cisco's Webex a great deal. Their most recent products have introduced something that I've found to be invaluable: the ability to transcribe audio in real time. Once again, this natural language processing (NLP) capability, long a holy grail of AI, is simply there. It has turned what was once a tedious day-long operation into a comparatively short editing session (no NLP is 100% accurate), meaning that I can spend more time gathering the news than transcribing it.

Word Cloud with NLP related tags

These examples may seem a far cry from the popular vision of AI as a job stealer - from autonomous cars and trucks to systems that will eliminate creatives and decision makers - but they are actually pretty indicative of where artificial intelligence is going. I've written before about Adobe Photoshop's Select Subject feature, which uses fairly sophisticated AI to select the part of an image that looks like the focus of the shot. This is an operation that can be done by hand, but it is slow, tedious, and error prone. With it, Photoshop selects what I would have most of the time, and the rest can then be added relatively easily.

What's evident from these examples is that this kind of augmentative AI can be used to do those parts of a task that carried high cost for very little added value otherwise. Grammarly doesn't change my voice significantly as a writer. Auto-transcription takes a task that would likely take me several hours to do manually and reduces it to seconds, so that I can focus on the content. Photoshop's Select Subject eliminates the need for very painstaking selection within an image. It can be argued in all three cases that this does eliminate the need for a human being to do these tasks, but let's face it - these are tasks that nobody would prefer to do unless they really had no choice.

These kinds of instances do not flash "artificial intelligence" at first blush. When Microsoft PowerPoint suggests alternative visualizations to the boring old bullet-points slide, the effect is to change behavior by giving a nudge. The program is saying, "This looks like a pyramid, or a timeline, or a set of bucket categorizations. Why don't you use this kind of presentation?"

Over time, you'll notice that certain presentations float to the top more often than others, because you tend to choose them more often, though occasionally the AI mixes things up: it realizes, by analyzing your history with the app, that you may be going overboard with a particular layout and should try others for variety. Grammarly (and related services such as Textio) follow grammatical rules, but use these products for a while and you'll find that the systems begin making larger and more complex recommendations that match your own writing style.

You see this behavior increasingly in social media platforms, especially in longer-form business messaging such as LinkedIn, where the recommendation engine will often suggest completion content that can be a sentence long or more. Yes, you are saving time, but the AI is also training you even as you train it, putting forth recommendations that sound more professional and that, by extension, teach you to prefer that form of rhetoric, to be more aware of certain grammatical constructions without necessarily knowing exactly what those constructions are.

It is this subtle interplay between human and machine agency that makes AI augmentation so noteworthy. Until comparatively recently, this capability didn't exist in the same way. When people developed applications, they created capabilities - modules - that added functionality, but that functionality was generally bounded. Auto-saving a word processing document, for instance, is not AI; it uses a simple algorithm to determine when changes were made, then issues a save call after activity (such as typing) stops for a specific period of time.
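That bounded, non-learning behavior can be captured in a few lines. Here is a minimal sketch of such a debounced auto-save; the class name and the idle window are illustrative assumptions, not any real editor's implementation:

```python
import time

class AutoSaver:
    """Non-AI auto-save: a fixed rule, not a learned behavior.
    Saves once no keystrokes have arrived for `idle_seconds`."""

    def __init__(self, save_fn, idle_seconds=2.0):
        self.save_fn = save_fn
        self.idle_seconds = idle_seconds
        self.last_edit = None
        self.dirty = False          # unsaved changes pending?

    def on_keystroke(self):
        # Any edit marks the document dirty and restarts the idle clock.
        self.last_edit = time.monotonic()
        self.dirty = True

    def tick(self):
        """Call periodically (e.g. from the editor's event loop)."""
        if self.dirty and self.last_edit is not None:
            if time.monotonic() - self.last_edit >= self.idle_seconds:
                self.save_fn()
                self.dirty = False
```

Note that nothing here adapts to the user: the rule is the same on day one and day one thousand, which is exactly what separates it from the augmentative systems this article describes.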


However, work with an intelligent word processor long enough and several things will begin to reconfigure to better accommodate your writing style. Word and grammatical recommendations will begin to reflect your specific usage. Soft grammatical rules will be suppressed if you continue to ignore them, the application making the reasonable assumption that you are deliberately ignoring them when they are pointed out.
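The suppression logic just described can be sketched very simply. This is a hypothetical illustration; the three-dismissal threshold and the reset-on-acceptance behavior are assumptions, not any real product's documented policy:

```python
class SoftRuleTracker:
    """Hypothetical sketch: stop flagging a soft grammar rule once
    the writer has dismissed it repeatedly."""

    def __init__(self, dismiss_threshold=3):
        self.dismiss_threshold = dismiss_threshold
        self.dismissals = {}  # rule id -> consecutive dismissals

    def record_dismissal(self, rule_id):
        self.dismissals[rule_id] = self.dismissals.get(rule_id, 0) + 1

    def record_acceptance(self, rule_id):
        # Accepting a suggestion resets the evidence of deliberate disuse.
        self.dismissals[rule_id] = 0

    def should_flag(self, rule_id):
        # Flag only while the writer has not dismissed the rule enough
        # times to signal that the "violation" is intentional style.
        return self.dismissals.get(rule_id, 0) < self.dismiss_threshold
```

The interesting property is that the program never needs to understand *why* you split infinitives; repeated dismissal alone is treated as evidence of intent.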

Ironically, this can also mean that if someone else uses your particular trained word processing application, they will likely get frustrated because the recommendations being made do not fit with their writing style, not because they are programmed to follow a given standard, but because they have been trained to facilitate your style instead.

Training is the process of providing input data to a machine learning system in order to establish the parameters for subsequent categorization.

In effect, the use of augmented AI personalizes that AI - the AI becomes a friend and confidant, not just a tool. This isn't some magical, mystical computer science property. Human beings are social creatures, and when we are lonely we tend to anthropomorphize even the inanimate objects around us so that we have someone to talk to. Tom Hanks, in one of his best roles to date (Cast Away), made this obvious in his humanizing of a volleyball as Wilson, an example of what TVtropes.com calls "Companion Cubes," named for a similar anthropomorphized object from the Portal game franchise. Augmented AIs are examples of such companion cubes, ones that are increasingly capable of conversation and remembered history ("Hey, Siri, do you remember that beach ball in that movie we watched about a castaway who talked to it?" "I think the ball's name was Wilson. Why do you ask?").

Remembered history is actually a pretty good description of how most augmented AIs work. Typically, an AI is trained to pick up anomalous behavior relative to a specific model, weighing both the type and the magnitude of that anomaly and adjusting the model accordingly. In lexical analysis this includes the presence of new words or phrases and the absence of previously existing ones (which are in turn kept in some form of index). A factory-reset AI will likely change fairly significantly as a user interacts with it, but over time the model will more closely represent the optimal state for that user.
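A toy version of that remembered-history index might reinforce words that keep appearing and decay words that stop appearing. The unit increments here are an illustrative simplification, not a description of any production system:

```python
def update_lexicon(index, document_words):
    """Sketch of a 'remembered history' index: words present in the
    latest document are reinforced, absent words decay, and words
    that decay to zero drop out of the model entirely."""
    seen = set(document_words)
    for word in seen:
        index[word] = index.get(word, 0) + 1   # reinforce what appeared
    for word in list(index):
        if word not in seen:
            index[word] -= 1                   # decay what did not
            if index[word] <= 0:
                del index[word]                # forget entirely
    return index
```

Run over many documents, the surviving entries approximate the user's stable vocabulary, which is roughly the "optimal state" the paragraph above refers to.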

In some cases, the model itself is also somewhat self-aware, and will deliberately mutate its weightings based upon certain parameters to mix things up a bit. News filters, for instance, will normally gravitate toward a state where certain topics predominate (news about artificial intelligence or sports balls, for instance, based upon a user's selections), but every so often a filter will pick up something that's three or four hops away along a topic selection graph, in order to keep the filter from becoming too narrow.
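The "three or four hops away" idea amounts to a short random walk on a topic-adjacency graph. A minimal sketch, where the graph shape, function name, and hop count are all illustrative assumptions:

```python
import random

def exploratory_pick(topic_graph, favorite, hops=3, rng=None):
    """Diversity nudge: walk `hops` steps away from the user's
    favorite topic along a topic-adjacency graph and surface
    whatever topic the walk lands on."""
    rng = rng or random.Random()
    topic = favorite
    for _ in range(hops):
        topic = rng.choice(topic_graph[topic])  # step to a neighbor
    return topic
```

With a richly connected graph the result is usually adjacent-but-unfamiliar, which is exactly the effect a filter needs to avoid collapsing into a single topic.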

This, of course, also highlights one of the biggest dangers of augmenting AIs. Such filters create an intrinsic, self-selected bias in the information that gets through. If your personal bias tends to favor a certain political ideology, you get more stories (or recommendations) that favor that bias, and fewer that counter it. This can create a bubble in which what you see reinforces what you believe, while counterexamples simply never get through the filters. Because this effect is invisible, it may not even be obvious that it is happening, but it is one reason why any AI should periodically nudge itself out of its calculated presets.

Just as a sound mixer can be used to adjust the input weights of various audio signals, so too does machine learning set the weights of various model parameters.

The other issue that besets augmented AIs lies in the initial design of the model. One of the best analogies for the way most machine learning works is to imagine a sound mixer with several dozen (or several thousand) dials that automatically adjust themselves to determine the weights of various inputs. In an ideal world, each dial is hooked up to a variable that is independent of the others (changing one variable doesn't affect any other variable). In reality, it's not unusual for some variables to be somewhat (or even heavily) correlated, which means that if one variable changes, it causes other variables to change automatically, though not necessarily in completely known ways.

For instance, age and political affiliation might not, at first glance, be obviously correlated, but as it turns out, there are subtle (and not completely linear) correlations that do tend to show up when a large enough sample of the population is taken. In a purely linear model (the domain primarily of high school linear algebra) the variables usually are completely independent, but in real life the coupling between variables can become chaotic and non-linear in unpredictable ways, and one of the big challenges data scientists face is determining whether the model in question is linear within the domain being considered.
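A quick way to check whether two of those "dials" are coupled is the Pearson correlation coefficient, here over plain lists (the toy data in the test is invented purely for illustration):

```python
def pearson(xs, ys):
    """Pearson correlation: +1 means two variables move in lockstep,
    -1 means they move in opposition, 0 means no linear coupling."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A caveat worth keeping in mind: Pearson only detects *linear* coupling, so the "subtle and not completely linear" correlations the paragraph mentions can score near zero here while still being very real.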

Every AI has some kind of model that determines the variables (columns) that are adjusted as learning takes place. If there are too few variables, the model may not fit that well. If there are too many, the curves being delineated may be too restrictive, and if specific variables are correlated in some manner, then small variations in input can explode and create noise in the signal. This means that few models are perfect (and the ones that are perfect are too simple to be useful), and sometimes the best you can do is to keep false positives and negatives below a certain threshold.
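Those "false positives and negatives below a certain threshold" are ordinarily computed from a confusion matrix. A minimal sketch (the example counts are invented):

```python
def error_rates(tp, fp, tn, fn):
    """False positive rate and false negative rate from confusion
    counts: true positives, false positives, true negatives, and
    false negatives. In practice a model is accepted when both rates
    sit below an application-specific threshold, not at zero."""
    fpr = fp / (fp + tn)  # share of actual negatives flagged wrongly
    fnr = fn / (fn + tp)  # share of actual positives that were missed
    return fpr, fnr
```

Which threshold matters more depends on the application: a spam filter can tolerate false negatives far better than a cancer screen can.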

Deep learning AIs are similar, but they essentially have the ability to determine the variables (or axes) that are most orthogonal to one another. However, this comes at a significant cost: it may be far from obvious how to interpret those variables. This explainability problem is one of the most vexing facing the field of AI, because if you don't know what a variable actually means, you can't conclusively prove that the model actually works.

Sometimes the patterns that emerge in augmented AI are not the ones we think they are.

A conversation at an artificial intelligence meetup in Seattle illustrated this problem graphically. In one deep analysis of patients at a given hospital, a deep learning model emerged that seemed to predict perfectly, from a person's medical record, whether that patient had cancer. The analysts examining the (OCR-scanned) data were ecstatic, thinking they'd found a foolproof model for cancer detection, when one of the nurses working on the study pointed out that every cancer patient's paper records had a mark written in one corner of the form to let the nurses quickly see who had cancer and who didn't. The AI had picked this up in the analysis, and not surprisingly it accurately predicted that if the mark was in that corner, the patient was sure to have cancer. Once this factor was eliminated, the accuracy rate of the model dropped considerably. (Thanks to Reza Rassool, CTO of RealNetworks, for this particular story.)

Augmentation is likely to be, for some time to come, the way that most people will directly interact with artificial intelligence systems. The effects will be subtle - steadily improving the quality of the digital products that people produce, reducing the number of errors that show up, and reducing the overall time needed to create intellectual works - art, writing, coding, and so forth. At the same time, they raise intriguing ethical questions, such as: if an AI is used to create new content, to what extent is that augmenting technology actually responsible for what's created?

It also raises serious questions about simulacra in the digital world. Daz Studio, a freemium 3D rendering and rigging software product, recently included an upgrade that analyzes portraits and generates 3D models and materials using facial recognition software. While the results are still (mostly) in uncanny valley territory, such a tool makes it possible to create photographs and animations that can look surprisingly realistic, and in many cases close enough to a person to be indistinguishable. If you think about actors, models, business people, political figures, and others, you can see how these kinds of technologies could be used for political mischief.

This means that augmentation AI is also likely to be the next front of an ethical battleground, as laws, social conventions and ethics begin to catch up with the technology.

There is no question that artificial intelligence is rewriting the rules, for good and bad, and augmentation, the kind of AI that is here today and is becoming increasingly difficult to discern from human-directed software, is a proving ground for how the human/computer divide asserts itself. Pay attention to this space.

#AIAugmentation #machineLearning #deepLearning #creativity #AIethics #theCagleReport


The benefits of AI and machine learning – The Guardian

The Guardian is right to express legitimate concerns about the opacity of machine learning systems and attempts to replicate what humans do best (Editorial, 23 September), and we welcome this. However, as founders of the Institute for Ethical AI in Education (IEAIED), we believe these problems must be overcome in order to ensure people are able to benefit from artificial intelligence, not just fear it.

There are highly beneficial applications of machine learning. In education, for example, this innovation will enable personalised learning for all, and is already enabling individualised learning support for increasing numbers of students. Well-designed AI can be used to identify learners' particular needs so that everyone, especially the most vulnerable, can receive targeted support. Given the magnitude of what people have to gain from machine learning tools, we feel an obligation to mitigate and counteract the inherent risks so that the best possible outcomes can be realised.

First, we must not accept that machine learning systems have to be black boxes whose decisions and behaviours are beyond the reach of human understanding. Explainable AI (XAI) is a rapidly developing field, and we encourage education stakeholders to demand and expect high levels of transparency. There are also further means by which we can ethically derive benefits from machine learning systems while retaining human responsibility.

Another approach to benefiting from AI without being undermined by a lack of human oversight is to consider that AI is not bringing about these benefits single-handedly. Genuine advancement arises when AI augments and assists human-driven processes and skills. Machine learning is a powerful tool for informing strategy and decision-making, but people remain responsible for how that information is harnessed.

Incorporating ethics into the design and development of AI-driven technology is vital, and we currently rely on programmes such as UCL Educate, an accelerator for education SMEs and startups, to instil that ethos in innovation from the concept stage.

Crucially, though, we must inform the public at large about what AI is and what benefits can be derived from its use, or we risk alienating people from the technology that already forms part of their everyday lives. Worse still, we risk causing alarm and making them fearful.
Prof Rose Luckin Professor of learner-centred design at UCL Institute of Education and director of UCL Educate
Sir Anthony Seldon Vice-chancellor, University of Buckingham
Priya Lakhani Founder CEO, Century Tech



Making The Internet Of Things (IoT) More Intelligent With AI – Forbes

According to IoT Analytics, there were over 17 billion connected devices in the world as of 2018, with over 7 billion of these being internet of things (IoT) devices. The internet of things is the collection of sensors, devices, and other technologies that aren't meant to interact directly with consumers the way phones or computers do. Rather, IoT devices provide information, control, and analytics, connecting a world of hardware devices to each other and to the greater internet. With the advent of cheap sensors and low-cost connectivity, IoT devices are proliferating.

02 April 2019, Lower Saxony, Hannover: A so-called learning factory with a conveyor belt for sorting products is located at the Fischertechnik stand at the Hanover Fair. From 1 to 5 April, everything at Hannover Messe will revolve around networking, learning machines and the Internet of Things. Photo: Hauke-Christian Dittrich/dpa (Photo by Hauke-Christian Dittrich/picture alliance via Getty Images)

It is no wonder that companies are inundated with data from these devices and are looking to AI to help manage the devices as well as to gain more insight and intelligence from the data exuded by these masses of chatty systems. However, it is much more difficult to manage and extract valuable information from these systems than we might expect. There are many aspects and subcomponents to IoT, such as connectivity, security, data storage, system integration, device hardware, application development, and even networks and processes, all of which are ever-changing in this space. Another complication with IoT has to do with scale of functionality. Oftentimes it's easy to build sensors that can be accessed from a smart device, but creating devices that are reliable, remotely controlled and upgraded, secure, and cost-effective is a much more complicated matter.

How AI is Transforming IoT

On a recent AI Today podcast, Rashmi Misra from Microsoft shared how AI and IoT are combining to provide greater visibility and control over the wide array of devices and sensors connected to the internet. At Microsoft, Rashmi leads a team that builds IoT and artificial intelligence (AI) solutions, working across partners of all sorts, such as device manufacturers, application developers, systems integrators, and other vertically focused partners who want to apply key AI technologies in IoT fields. Her Microsoft team is focused on gaining insights and knowledge from the data that IoT devices create, simplifying access to and reporting of that data. (Disclosure: I am a host of the AI Today podcast.)

IoT is transforming business models by helping companies move from simply making products and services to giving their customers desired outcomes. By impacting organizations' business models, the combination of IoT-enabled devices and sensors with machine learning creates a collaborative and interconnected world that aligns itself around outcomes and innovation. This combination of IoT and AI is changing many industries and the relationships that businesses have with their customers. Businesses can now collect and transform data into usable and valuable information with IoT.

As an organization applies digital transformation principles to its business, the combination of IoT and AI can create a disruption within its industry. Whether an organization is using IoT and AI to engage customers, implement conversational agents, customize user experiences, obtain analytics, or optimize productivity with insights and predictions, the use of IoT and AI creates a dynamic where companies are able to gain high-quality insight into every piece of data, from what customers are actually looking at and touching to how employees, suppliers, and partners are interacting with different aspects of the ecosystem. Instead of just having business processes modeled in software in a way that approximates the real world, IoT devices give systems an actual interface to the real world. Any place where you can put a sensor or a device to measure, interact, or analyze something, you can put an IoT device connected to the AI-enabled cloud and add significant amounts of value.

Using AI to Help Make Sense of IoT Data

Common challenges organizations face today with AI and IoT involve the application, accessibility, and analysis of IoT data. If you have a pool of data from various sources, you can run some statistical analysis on that data. But if you want to be proactive in predicting events in order to take future actions accordingly, such as when to change a drill bit or to anticipate a breakdown in a piece of machinery, a business needs to learn how to apply these technologies to discern this kind of data and process.

The sheer quantity of IoT data, especially in organizations that have deployed sensors or tags down to the individual unit level, is significant. The massive amount of constantly changing data is too difficult to manage with traditional business intelligence and analytics tools. This is where AI steps in. Through the use of unsupervised learning and clustering approaches, machine learning systems can automatically identify normal and abnormal patterns in data and alert when things deviate from observed norms, without requiring advance setup by human operators. Likewise, these AI-enabled IoT systems can automatically surface relevant insights that might otherwise stay invisible in the haystack of data.
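A toy stand-in for that unsupervised approach is a z-score detector: "normal" is learned from the sensor data itself (its mean and standard deviation) rather than from rules set up in advance. This is a deliberately simplified sketch, not the clustering methods production systems use:

```python
def detect_anomalies(readings, z_threshold=3.0):
    """Flag sensor readings more than `z_threshold` standard
    deviations from the mean. The notion of 'normal' comes entirely
    from the data, with no human-defined thresholds per sensor."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((r - mean) ** 2 for r in readings) / n
    std = variance ** 0.5
    if std == 0:
        return []  # perfectly uniform data has no outliers
    return [r for r in readings if abs(r - mean) / std > z_threshold]
```

The same idea scales up: clustering approaches replace the single mean with several learned "normal modes," but the alert logic - distance from learned normality - is the same.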

Enterprises are implementing AI-enabled IoT systems in a number of different ways. Solutions firms are producing prepackaged code and templates that include tried and tested models for specific application domains such as shipping and logistics, manufacturing, energy, environmental monitoring, and building and facilities operations. Others are creating custom solutions, building and training their own models and leveraging cloud providers for external compute power. Some solutions centralize AI capabilities in on-premise or cloud-based offerings, while others decentralize AI capability, pushing machine learning models to the edge to keep the data close to the device and speed up performance. There are a number of ways to implement this technology, and the challenge lies in applying and accessing it appropriately.

Today we are seeing a lot of growth in both AI and IoT. These technologies combine to enable the next level of automation and productivity while decreasing costs. As consumers, businesses, and governments start to deploy IoT in a variety of environments, our world will change greatly and allow us all to make better choices. It's already rapidly changing everything from retail to supply chains to health care. AI-enabled IoT is transforming the energy industry with smart energy solutions, as when a city or town wants to create a decentralized power trade among houses with solar panels. Rashmi shares an example of how IoT is changing supply chains and logistics: milk is very susceptible to changes in temperature, so if you produce milk and it gets transported from one place to another, you can use IoT to track the humidity of the environment every step of the way. The private and public sectors stand to gain enormously from extracting more intelligence from all the devices out there.

The proliferation of IoT devices is making the future a very connected world with instant access to information. There is now a need for AI to manage all those devices and to make sense of the data that comes back from them. In these ways, AI and IoT are deeply symbiotic and will continue to have an intertwined relationship moving forward.



Creativity and AI: The Next Step – Scientific American

In 1997, IBM's Deep Blue famously defeated chess grandmaster Garry Kasparov after a titanic battle. It had actually lost to him the previous year, though he conceded that it seemed to possess "a weird kind of intelligence." To play Kasparov, Deep Blue had been pre-programmed with intricate software, including an extensive playbook with moves for openings, the middle game, and the endgame.

Twenty years later, in 2017, Google unleashed AlphaGo Zero, which, unlike Deep Blue, was entirely self-taught. It was given only the basic rules of the far more difficult game of Go, without any sample games to study, and worked out all its strategies from scratch by playing millions of times against itself. This freed it to think in its own way.

These are the two main sorts of AI around at present. Symbolic machines like Deep Blue are programmed to reason as humans do, working through a series of logical steps to solve specific problems. An example is a medical diagnosis system in which a machine deduces a patient's illness from data by working through a decision tree of possibilities.

Artificial neural networks like AlphaGo Zero are loosely inspired by the wiring of the neurons in the human brain and need far less human input. Their forte is learning, which they do by analyzing huge amounts of input data or rules such as the rules of chess or Go. They have had notable success in recognizing faces and patterns in data, and also power driverless cars. The big problem is that scientists don't yet know why they work as they do.

But it's the art, literature, and music that the two systems create that really point up the difference between them. Symbolic machines can create highly interesting work, having been fed enormous amounts of material and programmed to do so. Far more exciting are artificial neural networks, which actually teach themselves and which can therefore be said to be more truly creative.

Symbolic AI produces art that is recognizable to the human eye as art, but it's art that has been pre-programmed. There are no surprises. Harold Cohen's AARON algorithm produces rather beautiful paintings using templates that have been programmed into it. Similarly, Simon Colton at Goldsmiths, University of London programs The Painting Fool to create a likeness of a sitter in a particular style. But neither of these ever leaps beyond its program.

Artificial neural networks are far more experimental and unpredictable. The work springs from the machine itself without any human intervention. Alexander Mordvintsev set the ball rolling with his Deep Dream, whose nightmare images, spawned from convolutional neural networks (ConvNets), seem almost to spring from the machine's unconscious. Then there's Ian Goodfellow's GAN (generative adversarial network), with the machine acting as the judge of its own creations, and Ahmed Elgammal's CAN (creative adversarial network), which creates styles of art never seen before. All of these generate far more challenging and difficult works: the machine's idea of art, not ours. Rather than being a tool, the machine participates in the creation.

In AI-created music the contrast is even starker. On the one hand, we have François Pachet's Flow Machines, loaded with software to produce sumptuous original melodies, including a well-reviewed album. On the other, researchers at Google use artificial neural networks to produce music unaided. But at the moment their music tends to lose momentum after only a minute or so.

AI-created literature best illustrates the difference in what can be created by the two types of machines. Symbolic machines are loaded with software and rules for using it, and trained to generate material of a specific sort, such as Reuters news reports or weather reports. A symbolic machine equipped with a database of puns and jokes generates more of the same, giving us, for example, a corpus of machine-generated knock-knock jokes. But as with art, their literary products are in line with what we would expect.

Artificial neural networks have no such restrictions. Ross Goodwin, now at Google, trained an artificial neural network on a corpus of scripts from science fiction films, then instructed it to create sequences of words. The result was the fairly gnomic screenplay for his film Sunspring. With such a lack of constraints, artificial neural networks tend to produce work that seems obscure, or should we say experimental. This sort of machine ventures into territory beyond our present understanding of language and can open our minds to a realm often designated as nonsense. NYU's Allison Parrish, a composer of computer poetry, explores the line between sense and nonsense. Thus artificial neural networks can spark human ingenuity. They can introduce us to new ideas and boost our own creativity.

Proponents of symbolic machines argue that the human brain, too, is loaded with software, accumulated from the moment we are born, which means that symbolic machines can also lay claim to emulating the brain's structure. Symbolic machines, however, are programmed to reason from the start.

Conversely, proponents of artificial neural networks argue that, like children, machines need first to learn before they can reason. Artificial neural networks learn from the data they've been trained on, but are inflexible in that they can only work from the data that they have.

To put it simply, artificial neural networks are built to learn and symbolic machines to reason, but with the proper software each can do a little of the other. An artificial neural network powering a driverless car, for example, needs to have the data for every possible contingency programmed into it so that when it sees a bright light ahead, it can recognize whether it's a bright sky or a white vehicle, in order to avoid a fatal accident.

What is needed is to develop a machine that includes the best features of both symbolic machines and artificial neural networks. Some computer scientists are currently moving in that direction, looking for options that offer a broader and more flexible intelligence than neural networks by combining them with the key features of symbolic machines.

At DeepMind in London, scientists are developing a new sort of artificial neural network that can learn to form relationships in raw input data and represent them in logical form as a decision tree, as in a symbolic machine. In other words, they're trying to build in flexible reasoning. In a purely symbolic machine, all this would have to be programmed in by hand, whereas the hybrid artificial neural network does it by itself.

In this way, combining the two systems could lead to more intelligent solutions, and also to forms of art, literature and music that are more accessible to human audiences while also being experimental, challenging, unpredictable and fun.

Continue reading here:

Creativity and AI: The Next Step - Scientific American

Posted in Ai

Unbridled Adoption Of Artificial Intelligence May Result In Millions Of Job Losses And Require Massive Retraining For Those Impacted – Forbes


PricewaterhouseCoopers, the large accounting and management consulting firm, released a startling report indicating that workers will be highly impacted by the fast-growing rise of artificial intelligence, robots and related technologies.

Banking and financial services employees, factory workers and office staff will seemingly face the loss of their jobs, or need to find a way to reinvent themselves in this brave new world.

The term artificial intelligence is loosely used to describe the ability of a machine to mimic human behavior. AI includes well-known applications, such as Siri, GPS, Spotify, self-driving vehicles and the larger-than-life robots made by Boston Dynamics that perform incredible feats.


While many reports tout the benefits of AI, there are many risks and unintended consequences, including the likelihood of replacing millions of human workers and unethical uses of the technology. For instance, China is using facial recognition to closely scrutinize its citizens, who can be punished for certain transgressions. The country has been accused of using facial recognition to profile Muslims in its Xinjiang region. Recently, privacy concerns were raised about FaceApp, the Russian-backed face-ageing application.

Bloomberg reports that more than 120 million workers globally will need retraining in the next three years due to artificial intelligence's impact on jobs, according to an IBM survey. The number of individuals who will be affected is immense. The world's most advanced cities aren't ready for the disruptions of artificial intelligence, claims Oliver Wyman, a management consulting firm. It is believed that over 50 million Chinese workers may require retraining as a result of AI-related deployment. The U.S. will need to retool 11.5 million people with the skills needed to survive in the workforce. Millions of workers in Brazil, Japan and Germany will need assistance with the changes wrought by AI, robotics and related technology.

While the study claims that employees can learn new skills, its logic is at odds with reality. In an effort to save billions of dollars in labor costs, Amazon warehouses deploy thousands of cute little orange robots made by Kiva, a robotics company acquired by Amazon for $775 million. A Kiva robot needs only 15 minutes to find, pick and package an order, whereas a human needs about 60-75 minutes to accomplish the same tasks. I've spoken with workers at Amazon fulfillment centers, and they say the pace, and the toll it takes on the human body, is too much to handle. Humans simply can't compete against robots due to our inherent limitations.


Mark Cuban, who became a billionaire with his tech company and is now the owner of the Dallas Mavericks, said in a CNBC interview that President Donald Trump should be more aware of tech advancements in machine learning and artificial intelligence and how they will impact America's future. Cuban said, "I'm willing to bet that these companies building new plants...this will lead to fewer people being employed."

Computers, intelligent machines, and robots seem like the workforce of the future. "And as more and more jobs are replaced by technology, people will have less work to do and ultimately will be sustained by payments from the government," predicts Elon Musk, the cofounder and CEO of Tesla.

AI, robotics and technology will displace millions of workers. It's already happening. AI is used by investment banks, law and accounting firms, hospitals and corporations to displace people involved with rote activities and lower-end, white-collar jobs. If your job can be replaced by AI, it will be, and you'll need a new career.

This has already adversely impacted highly paid Wall Street professionals, including stock and bond traders. These are the people who used to work on the trading floors at investment banks and trade securities for their banks, clients and themselves. It was a very lucrative profession until algorithms, quant-trading software and programs disrupted the business and rendered their skills unnecessary compared to the fast-acting technology.

There is no hiding from the robots. Well-trained and experienced doctors will be pushed aside by sophisticated robots that can perform delicate surgeries better and read X-rays more efficiently and accurately to detect cancerous cells that can't be readily seen by the human eye.


Truck and cab drivers, cashiers, retail sales associates and people who work in manufacturing plants and factories have been, and will continue to be, replaced by robotics and technology. Driverless vehicles, kiosks in fast food restaurants and self-service phone scanning at stores will soon eliminate most minimum-wage and low-skilled jobs.

This trend will benefit coders and computer engineers. They will be the great beneficiaries of this emergence, that is, until AI can learn to code as well as, or better than, humans. To be fair, there are and will be new jobs created along with the disruption. Run a search on Google for Jobs and you'll see thousands of listings for AI, robotics, coding, data analytics, tech and related postings. The studies are somewhat optimistic and claim that the rise of the machines isn't taking away all the jobs. Rather, it will change job descriptions and add more roles.

Andrew Yang, Democratic presidential hopeful, is one of the few candidates vocalizing concerns about the ascendancy of AI. Yang's official website offers the following:

Advances in automation and Artificial Intelligence (AI) hold the potential to bring about new levels of prosperity humans have never seen. They also hold the potential to disrupt our economies, ruin lives throughout several generations, and, if experts such as Stephen Hawking and Elon Musk are to be believed, destroy humanity.

Billionaire founder of Microsoft and outspoken philanthropist Bill Gates has called for a tax on robots due to the disruption that will occur, resulting in the loss of jobs and tax revenue. "Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you'd think that we'd tax the robot at a similar level," Gates said in an interview with Quartz. "You cross the threshold of job replacement of certain activities of all sorts at once," Gates added. "So, you know, warehouse work, driving, room cleanup, there's quite a few things that are meaningful job categories that, certainly in the next 20 years [will go away]."

Technology is advancing at a pace never before seen in human history. Even those developing it don't fully understand how it works or what direction it's taking. Recent advances in machine learning have shown that a computer, given certain directives, can learn tasks much faster than humans thought possible even a year ago. We're heading into this new world with no idea how to regulate it, and with a regulatory system that's designed for technology far less sophisticated than what we're facing in the near future.

Technological innovation shouldn't be stopped, but it should be monitored and analyzed to make sure we don't move past a point of no return. This will require cooperation between the government and private industry to ensure that developing technologies can continue to improve our lives without destroying them.

View original post here:

Unbridled Adoption Of Artificial Intelligence May Result In Millions Of Job Losses And Require Massive Retraining For Those Impacted - Forbes


AI is now being used to shortlist job applicants in the UK let’s hope it’s not racist – The Next Web

It's often repeated that artificial intelligence will be a danger to our jobs. But perhaps in a not-so-surprising twist, AI is also increasingly being used by companies to hire candidates.

According to a report by The Telegraph, AI-based video interviewing software, such as that developed by HireVue, is being leveraged by UK companies for the first time to shortlist the best job applicants.

"Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates when they are asked a set of identical job questions which they film on their mobile phone or laptop," the report said.

HireVue, a Utah-based pre-employment assessment AI platform founded in 2004, employs machine learning to evaluate candidate performance in videos by training an AI system on around 25,000 usable data points. The company's software is used by over 700 companies worldwide, including Intel, Honeywell, Singapore Airlines, and Oracle.

"There are lots of subtle cues we subconsciously make sense of (think facial expressions or intonation) but these are missed when we zone out," the company notes on its website.

The videos record an applicants responses to preset interview questions, which are then analyzed by the software for intonation, body language, and other parameters, looking for matches against traits of previous successful candidates.

It's worth noting that Unilever experimented with HireVue in its recruitment efforts as early as 2017 in the US.

From recommending what to binge-watch over the weekend to booking the cheapest flight for your next vacation, AI and machine learning have quickly emerged as two of the most disruptive forces ever to hit the economy.

The technology is now doing more than ever, for both good and bad. It's being deployed in health care; it's helping artists synthesize death metal music. On the other hand, it's enabling high-tech surveillance and even judging your creditworthiness.

AI systems are also scrutinizing your resume, transforming both job seeking and the workplace, and revamping the very way companies look for candidates, get the most out of employees, and retain top talent.

But just as algorithms steadily infiltrate different aspects of our day-to-day lives and make decisions on our behalf, they have also come progressively under scrutiny for being as biased as the humans they sometimes replace.

The prevailing notion is that letting a computer program make hiring decisions for a company can make the process more efficient, both by selecting the most qualified people from a deluge of applications and by side-stepping human bias to identify top talent from a diverse pool of candidates.

HireVue, however, claims it has removed data points that led to bias in its AI models.

Yet, as is widely established, AIs are only as good as the data they're trained on. Bad data containing implicit racial, gender, or ideological biases can creep into these systems, resulting in a phenomenon called disparate impact, wherein some candidates may be unfairly rejected or excluded altogether because they don't fit a certain definition of fairness.
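Disparate impact has a standard quantitative check: compare the selection rate of a protected group to that of a reference group, and flag ratios below 0.8, the "four-fifths rule" used by U.S. regulators as a rough screen. A minimal sketch with invented applicant data:

```python
def selection_rate(outcomes):
    """Fraction of applicants in a group who were selected (1 = hired)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Under the four-fifths rule, a ratio below 0.8 is treated
    as prima facie evidence of adverse impact."""
    return selection_rate(protected) / selection_rate(reference)

# 1 = hired, 0 = rejected, for two hypothetical applicant groups
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 60% selected

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
```

Here the ratio is 0.33, well below the 0.8 line, so this hypothetical screening process would warrant a bias investigation.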

Regulating the use of AI-based tools, then, requires algorithmic transparency, bias testing, and assessment of the risks associated with automated discrimination.

But most importantly, it calls for collaboration between engineers, domain experts, and social scientists. This is the key to understanding the trade-offs between different notions of fairness and to helping us define which biases are desirable or unacceptable.


Continued here:

AI is now being used to shortlist job applicants in the UK let's hope it's not racist - The Next Web


Human + Machine Collaboration: Work in the Age of AI – Interesting Engineering

In this age of Artificial Intelligence (AI), we are witnessing a transformation in the way we live, work, and do business. From robots that share our environment and smart homes to supply chains that think and act in real-time, forward-thinking companies are using AI to innovate and expand their business more rapidly than ever.

Indeed, this is a time of change and change happens fast. Those able to understand that the future includes living, working, co-existing, and collaborating with AI are set to succeed in the coming years. On the other hand, those who neglect the fact that business transformation in the digital age depends on human and machine collaboration will inevitably be left behind.

Humans and machines can complement each other, resulting in increased productivity. This collaboration could increase revenue by 38 percent by 2022, according to Accenture Research. At least 61 percent of business leaders agree that the intersection of human and machine collaboration will help them achieve their strategic priorities faster and more efficiently.

Human and machine collaboration is paramount for organizations. Having the right mindset for AI means being at ease with the concept of human + machine, leaving the mindset of human vs. machine behind. Thanks to AI, factories now require a little more humanity, and AI is boosting the value of engineers and manufacturers.

The emergence of AI is creating brand new roles and opportunities for humans up and down the value chain. From workers in the assembly line and maintenance specialists to robot engineers and operations managers, AI is regenerating the concept and meaning of work in an industrial setting.

According to Accenture's Paul Daugherty, Chief Technology and Innovation Officer, and H. James Wilson, Managing Director of Information Technology and Business Research, AI is transforming business processes in five ways:

Flexibility: A change from rigid manufacturing processes, with automation done in the past by dumb robots, to smart, individualized production following real-time customer choices brings flexibility to businesses. This is particularly visible in the automotive manufacturing industry, where customers can customize their vehicle at the dealership. They can choose everything from dashboard components to the seat leather (or vegan leather) to tire valve caps. At Stuttgart's Mercedes-Benz assembly line, for instance, no two vehicles are the same.

Speed: Speed is critically important in many industries, including finance. Detecting credit card fraud on the spot guarantees a cardholder that a fraudulent transaction will not be approved, saving the time and headaches that come when fraud is detected too late. According to Daugherty and Wilson, HSBC Holdings developed an AI-based solution that improves speed and accuracy in fraud detection. The solution can monitor millions of transactions daily, seeking subtle patterns that may signal fraud. This type of solution is great for financial institutions, yet it needs human collaboration to be continually updated; without the required updates, the algorithms would soon become useless for combating fraud. Data analysts and financial fraud experts must keep an eye on the software at all times to ensure the AI solution stays at least one step ahead of criminals.

Scale: In order to accelerate its recruiting evaluation and improve diversity, Unilever adopted an AI-based hiring system that assesses candidates' body language and personality traits. Using this solution, Unilever was able to broaden its recruiting scale; job applicants doubled to 30,000, and the average time to arrive at a hiring decision decreased to four weeks. The process used to take up to four months before the adoption of the AI system.

Decision Making: It is no secret that the best decisions people make are based on vast amounts of specific, tailored information. Using machine learning and AI, a huge amount of data can be made quickly available at the fingertips of workers on the factory floor, or of service technicians solving problems out in the field. All the data previously collected and analyzed brings invaluable information that helps humans solve problems much faster, or even prevent problems before they happen. Take the case of GE and its Predix application. The solution uses machine-learning algorithms to predict when a specific part in a specific machine might fail, alerting workers to potential problems before they become serious. In many cases, GE has saved millions of dollars thanks to this technology working in concert with fast human action.

Personalization: AI makes individually tailored, on-demand brand experiences possible at great scale. Music streaming service Pandora, for instance, applies AI algorithms to generate personalized playlists based on preferences in songs, artists, and genres. AI can use data to personalize anything and everything, delivering a more enjoyable user experience. AI brings marketing to a new level.
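Production fraud systems like the one described under Speed are far subtler, but the underlying idea (score each transaction against the statistical pattern of normal activity and flag outliers) can be sketched with a simple z-score test. The function name, threshold, and transaction amounts are all illustrative assumptions, not HSBC's method:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, z_cut=2.5):
    """Flag transactions whose amount sits more than z_cut standard
    deviations from the mean -- a crude stand-in for the subtle
    pattern-matching a production fraud model performs."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > z_cut]

# A day of typical card activity with one outsized transaction slipped in
transactions = [23.40, 41.15, 18.99, 52.00, 37.25, 29.10,
                44.80, 31.60, 26.45, 9850.00]
suspicious = flag_anomalies(transactions)
```

Only the outsized charge is flagged; the human-collaboration point in the article is that thresholds and features like these must be retuned continually as fraud patterns shift.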
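The predict-before-failure monitoring described under Decision Making can also be illustrated in miniature: watch a rolling average of a sensor reading and raise an alert when it crosses a limit, before outright failure. Everything here (sensor values, window, limit) is an invented toy, not GE's Predix API:

```python
def maintenance_alert(readings, window=3, limit=80.0):
    """Return the index at which the rolling average of a sensor
    (say, bearing temperature in Celsius) first crosses the limit,
    or None if it never does."""
    for i in range(window, len(readings) + 1):
        avg = sum(readings[i - window:i]) / window
        if avg > limit:
            return i - 1  # alert on this reading, before outright failure
    return None

# Hypothetical bearing temperatures trending toward failure
temps = [61.0, 62.5, 63.0, 64.2, 71.8, 79.5, 85.3, 92.1, 101.4]
alert_at = maintenance_alert(temps)
```

The alert fires one reading before the final spike, which is the window in which a technician could intervene.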

Of course, some roles will come to an end, as has happened throughout human history with every technological revolution. However, the shift toward human and machine collaboration requires the creation of new roles and the recruiting of new talent; it is not just a matter of implementing AI technology. We also need to remember that there is no evolution without change.

Robotics and AI will replace some jobs, liberating humans for other kinds of tasks, many of which do not yet exist, just as many of today's positions did not exist a few decades ago. Since 2000, the United States has lost five million manufacturing jobs. However, Daugherty and Wilson think that things are not as clear-cut as they might seem.

In the United States alone, around 3.4 million job openings in the manufacturing sector will need to be filled. One reason for this is the wave of Baby Boomer retirements.

Re-skilling is now paramount and applies to everyone who wishes to remain relevant. Paul Daugherty recommends that enterprises help existing employees develop what he calls fusion skills.

In their book Human + Machine: Reimagining Work in the Age of AI, a must-read for business leaders looking for a practical guide to adopting AI in their organization, Paul Daugherty and H. James Wilson identify eight fusion skills for the workplace:

Rehumanizing time: People will have more time to dedicate toward more human activities, such as increasing interpersonal interactions and creativity.

Responsible normalizing: It is time to normalize the purpose and perception of human and machine interaction as it relates to individuals, businesses, and society as a whole.

Judgment integration: A machine may be uncertain about something or lack the necessary business or ethical context to make decisions. In such cases, humans must be prepared to sense where, how, and when to step in and provide input.

Intelligent interrogation: Humans simply can't probe massively complex systems or predict interactions between complex layers of data on their own. It is imperative to have the ability to ask machines the right smart questions across multiple levels.

Bot-based empowerment: A variety of bots are available to help people be more productive and become better at their jobs. Using the power of AI agents can extend humans' capabilities, reinvent business processes, and even boost a person's professional career.

Holistic (physical and mental) melding: In the age of human and machine fusion, holistic melding will become increasingly important. The full reimagination of business processes only becomes possible when humans create working mental models of how machines work and learn, and when machines capture user-behavior data to update their interactions.

Reciprocal apprenticing: In the past, technological education has gone in one direction: People have learned how to use machines. But with AI, machines are learning from humans, and humans, in turn, learn again from machines. In the future, humans will perform tasks alongside AI agents to learn new skills, and will receive on-the-job training to work well within AI-enhanced processes.

Relentless reimagining: This hybrid skill is the ability to reimagine how things currently are, and to keep reimagining how AI can transform and improve work, organizational processes, business models, and even entire industries.

In Human + Machine, the authors propose a continuous circle of learning, an exchange of knowledge between humans and machines. Humans can work better and more efficiently with the help of AI. According to the authors, in the long term companies will start rethinking their business processes, and as they do, they will fill the need for humans in new ways of doing business.

They believe that "before we rewrite the business processes, job descriptions, and business models, we need to answer these questions: What tasks do humans do best? And what do machines do best?" The transfer of jobs is not simply one-way. In many cases, AI is freeing up creativity and human capital, letting people work more like humans and less like robots.

Given these paramount questions and the concepts proposed by Daugherty and Wilson, giving them some thought may be crucial when deciding, as a business leader, the best strategy for your organization to change and adapt in the age of AI.

The authors highlight how embracing the new rules of AI can be beneficial as businesses reimagine processes with a focus on an exchange of knowledge between humans and machines.


Read more from the original source:

Human + Machine Collaboration: Work in the Age of AI - Interesting Engineering


From Our Foxhole: Empowering Tactical Leaders to Achieve Strategic AI Goals – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the second part of the second question on AI expertise and skill sets for the national security workforce.

The race to harness artificial intelligence for military dominance is on, and China might win. Whoever wins the AI race will secure critical technological advantages that allow them to shape global politics. The United States brings considerable strengths to this contest: an unparalleled university system, a culture of innovation, and the only military that bestrides the globe. It's also constrained by shortcomings. Washington's most serious problem isn't a shortage of ideas. It's a shortage of talent. And this shortage is large enough to threaten national security.

While the current administration has publicly recognized the need to invest in AI talent, a senior defense official admitted that "finding, borrowing, begging and creating talent is a really big challenge for all of us." Institutions like the Joint Artificial Intelligence Center and university research labs are central to the Pentagon's development strategy; however, challenges ranging from data collection to refining operational concepts place huge burdens on existing technical talent.

These demands could be reduced by integrating our junior military officers and enlisted personnel as partners in the development process. Hiring junior leaders as product managers would accelerate technology development and build new operational capabilities while integrating user feedback. This immediately expands the number of personnel contributing to AI development efforts and grooms the next generation of leaders for the challenges of multi-domain operations.

This year I worked with data scientists from the University of Southern California to test the thesis that military personnel could be integrated into the AI development pipeline as product managers. We did this through a forecasting tournament based on security issues on the Korean Peninsula. The tournament created an opportunity to simultaneously experiment with machine learning technologies and expand civilian-military collaboration. The results provided new behavioral insights for the University of Southern California's research team and refined a method for expanding national security AI research using existing military personnel.

The Imitation Game Problem

Our AI experiments explored how to deal with the daily flood of data that is used to provide key decision-makers with predictive analysis and enhanced situational awareness. We chose this problem for our first round of experiments because the challenge is so common and is only getting worse, with 2.5 quintillion bytes of additional data generated each day. We termed this the Imitation Game problem, honoring the challenge that confronted the British cryptographers cracking the Nazi Enigma code, who began each day with more potential solutions than could be tried in multiple lifetimes.

Traditional methods for mitigating overwhelming data processing requirements, like assigning more personnel, cannot keep pace with this challenge. This is especially true given military recruitment shortages. The consequences of missing key information or processing it too late are stark, as evident in the findings of the 9/11 Commission Report.

Building the Team

The experiments to circumvent the Imitation Game problem began after I spoke with Fred Morstatter from the University of Southern California's Synergistic Anticipation of Geopolitical Events lab. Unlike traditional machine learning models that use only quantitative data sets to train algorithms, USC's lab combines human judgement with quantitative models so that the strengths of both can optimize predictive value. This hybrid model addresses the military's traditional aversion to replacing human decision-making with technology, captured in the saying that humans are more important than hardware.
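The SAGE lab's actual aggregation methods are more sophisticated than this, but the mechanics of a hybrid forecast (blending averaged human judgment with a model's probability, then scoring the result against what actually happened) can be sketched as follows; all probabilities and the weighting are hypothetical:

```python
def combine(human_probs, model_prob, model_weight=0.5):
    """Blend the average human forecast with a model's forecast."""
    human_avg = sum(human_probs) / len(human_probs)
    return model_weight * model_prob + (1 - model_weight) * human_avg

def brier(forecast, outcome):
    """Brier score: squared error of a probability forecast against
    a 0/1 outcome. Lower is better."""
    return (forecast - outcome) ** 2

humans = [0.70, 0.60, 0.80]  # three analysts lean toward "event occurs"
model = 0.40                 # the quantitative model is more skeptical
blend = combine(humans, model)

outcome = 1  # the event occurred
scores = {"humans": brier(sum(humans) / len(humans), outcome),
          "model": brier(model, outcome),
          "hybrid": brier(blend, outcome)}
```

Tracking scores like these across many tournament questions is what lets researchers tune how much weight the human and machine components should each receive.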

Our pilot pursued improving commander decision-making through greater situational awareness, using tools that combined human judgement and machine learning models. This approach can scale to a variety of defense challenges, though our initial experiment used public-facing questions that were immediately relevant to our organization. Those questions became the basis for the Korean Security forecasting tournament we hosted with the University of Southern California's lab in the spring and summer of 2019, which served as our first research sprint.

What Did We Learn?

Solutions Require User-based Feedback Loops

When we separate technologists from military users, those who understand the problem cannot shape technology solutions, and those shaping technology solutions do not understand the problem. While there is a critical need to develop ties between researchers and operators, junior military personnel are generally removed from capability development efforts. This disconnect is largely due to the Army's preference for institutional approaches to capability development that favor large commands and senior leaders, discounting the potential contributions of junior leaders. This bias is evident in the lack of billets for junior officers and enlisted personnel in Army Futures Command, despite their preponderance in the force.

While our individual experiment is valuable, the real impact will come from scaling our experimental design across the military. That is because using junior leaders as product managers mitigates the disconnection challenge and creates immediate value to both parties. We found that bringing current operational problems to the academic team diversified research applications while generating capabilities with immediate military relevance. This method also increases the interactions between research organizations and military innovators in a way that other models cannot replicate, expanding the idea sourcing funnel and increasing the odds that experimentation will lead to decisive capabilities.

This approach mitigates the current shortage of uniform-wearing AI talent that is the source of frequent Pentagon complaints. Our experiment shows that intelligent junior leaders can contribute to multi-functional teams in a product manager role. Technology companies use product managers to maximize the outcome value of products; servicemembers in this role can maximize an experiment's value and operational relevance. Military product managers achieve this by turning force-generated requirements into defined capabilities, managing the requirements backlog, and liaising between their commands and technology development teams.

Silicon Valley companies rely on non-technical product managers to complement highly specialized professionals, and adopting that practice allows currently unused military personnel to achieve similar impact. While our initial experiments demonstrated the feasibility of this approach with comparatively minimal training, a second step is to train servicemembers in basic tech innovation practices. Product management and data science training will allow servicemembers to effectively contribute to military product development and increase the capabilities of Americas future force. This training is immediately accessible using resources like data science boot camps or online courses, and could be readily expanded through existing institutional partnerships.

Bringing non-technical contributors into the project was valuable. Over the course of the tournament, forecaster accuracy improved (a development that speaks to the ability to rapidly train intelligence analysts to use these tools), and the best forecasters had the highest degrees of interaction with the system, accelerating algorithm training. The result was a virtuous cycle in which the growing number of human forecasts enhanced the model's predictive value while increasing user familiarity. This gave USC researchers greater insight into behavioral patterns and optimization strategies for using their technology to inform future development efforts.

The post-product manager talent surge could expand the use of academic partnership programs like Hacking 4 Defense (H4D), since servicemembers could serve as problem sponsors for cross-functional academic teams. These teams could conduct problem curation and prototype development for AI initiatives and access senior mentors from the technology community through organizations like the Defense Entrepreneurs Forum. These research teams could report insights and progress to service-level AI organizations, simultaneously improving partnerships across the civilian-military AI ecosystem, training servicemembers in critical innovation skills, and closing capability gaps. The knowledge generated by these cross-functional academic teams could then be used to guide acquisitions efforts, including Small Business Innovation Research grants, forming an agile AI integration ecosystem.

The U.S. military could implement this strategy by launching programs through the Joint Artificial Intelligence Center or service-specific AI centers like the Army Artificial Intelligence Task Force that train innovative thinkers as product managers and junior data scientists. These leaders could then return to their host commands and sponsor operational problems through experimental pilots during initial concept development. After the efforts gain momentum, servicemembers could be mentored by experienced product managers and data scientists from startup partners to mature these capabilities. This would immediately create a Department of Defense talent development pipeline to meet the present shortage, while expanding the vibrancy of Americas AI ecosystem to regain its comparative advantage.

AI is Only as Useful as the Questions You Ask and the Data You Offer

AI demands specificity in asking questions, determining resolution criteria, and selecting training data sets. While AI is praised for its power and precision, those traits come with costs that must be included in experimental design.

These are acute challenges when AI confronts the complexity of the security arena, where both problems and solutions are often ambiguous. We encountered this challenge as we iterated through crafting tournament questions with sufficient granularity to drive algorithm development. The danger of focusing too much on asking questions the right way is failing to ask the right questions in the first place. Further, an opportunity cost is incurred with every model launch, since pivoting to a second batch of questions often requires generating new data sets to train algorithms.

After crafting the right questions, our next hurdle was sourcing data sets for model training. This is difficult for security problems due to the limited number of existing data sets and event infrequency when trying to create one. For example, individual missile launches offer less robust data sets than commodity market data on sugar prices over the same period. A powerful strategy for overcoming this hurdle and developing more robust security algorithms is to generate proxy tabular data sets from currently underleveraged and unstructured data sources, i.e., dark data. Learning to deconstruct your operational environment into data sets allows for more rapid subsequent adaptation to environmental changes.
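The idea of deconstructing an operational environment into proxy tabular data can be sketched in a few lines. This is an illustrative example only, not the pilot's actual pipeline; the report texts, field names, and keyword rule below are all invented:

```python
# Illustrative sketch of generating a proxy tabular data set from unstructured
# "dark data" (free-text event reports). The reports, field names, and keyword
# rule are invented for this example.
import re

reports = [
    "2019-05-04: short-range projectile launch observed near Wonsan",
    "2019-05-09: two short-range missiles launched from Kusong",
    "2019-06-30: leaders meet at the DMZ",
]

rows = []
for report in reports:
    date, text = report.split(": ", 1)
    rows.append({
        "date": date,                       # structured timestamp column
        "launch_event": int(bool(re.search(r"\blaunch(ed)?\b", text))),
        "word_count": len(text.split()),    # crude report-length feature
    })

# rows is now a small tabular data set: one structured record per report,
# ready to sit alongside other structured sources as training data.
```

The point of the sketch is the shape of the transformation: each unstructured record becomes one row of labeled columns, which is what most learning algorithms expect.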

Our pilot accepted risk on optimizing questions and data sets by focusing on high-value topics; even if more timely inquiries arose later, our effort was justified. Despite this hedge, we were confronted with surprises during the pilot. The DMZ visit between Chairman Kim, President Moon, and President Trump resolved several questions in spirit on June 30, but not according to the definitions we wrote in April.

The pilot also allowed SAGE researchers to test how forecasters reason over different time horizons by deploying two identical sets of the approved questions, one with a resolution date of April 25, 2019 and the other using July 25, 2019. Preliminary findings indicate that the forecasters who engaged in both tended to have more conservative forecasts initially for the longer horizon questions, and more aggressive forecasts for the shorter ones. These observed predictive trends offer insights into underlying cognitive properties.

The Goal for AI Capabilities is Not More, but Better

Our goal in this pilot was to create valuable insights that could be integrated into operational rhythms of units across the Army. While the research crowd understands the value of AI systems, introducing this value to operational units required minimizing barriers to entry and reproducibility.

The goal of self-evident value creation led us to align our research efforts with an existing military tool designed to improve commander decision-making and awareness: priority information requirements. These critical-information and signaling criteria tell leaders when to pursue certain courses of action, allowing them to be proactive in decision-making. Building the AI experiments around priority information requirements ensured our model could scale, since all Army units use these tools. This ubiquitous framework provides a natural focal point for incorporating and training other algorithms.

The next challenge is avoiding overloading existing digital infrastructure once tactical leaders understand the value of integrating these systems. An all too common, and toxic, paradigm in capability development is limitlessly expanding the tools assigned to commanders on the assumption that more is better. The result of this approach is adding yet another layer of technology on top of arcane digital infrastructure without considering existing systems. Users become overwhelmed by the number of systems they are expected to simultaneously manage, essentially nullifying the impact of new military technology.

Military product management is uniquely suited to prevent saturation of user cognitive bandwidth and optimize the value created while introducing new technologies. The goal should not be simply adding additional systems, but eliminating waste and simplifying tasks to increase organizational speed and agility. AI research efforts approached from this perspective benefit military leaders by creating data ecosystems that help units efficiently navigate complex operational environments.

The Next Iteration

Preserving an American-led international system requires achieving the technological superiority necessary for military dominance. A critical step in reaching that objective is closing the talent gap confronting America's defense ecosystem by pivoting current strategy to include junior leaders. This pivot should integrate servicemembers as product managers and junior data scientists on cross-functional teams with academic institutions and tech sector volunteers, simultaneously mitigating manpower shortages and training our servicemembers to leverage these tools.

The United States has a history of making up lost ground by combining the power of its private and public sectors, from surpassing the Nazis with nuclear weapons to defeating the Soviets in the space race. It's time to align tactical action with strategic priorities to ensure America wins the AI race. The United States can start today by bringing its tactical leaders into the fight for AI dominance.

Capt. James Jay Long is an Army infantry officer, National Security Innovation Network (NSIN) Startup Innovation Fellow, and experienced national security innovator. He is currently transitioning from active duty and last served as an operations officer with United Nations Command Security Battalion-Joint Security Area.

Image: U.S. Army Graphic

Read this article:

From Our Foxhole: Empowering Tactical Leaders to Achieve Strategic AI Goals - War on the Rocks

Posted in Ai

Microsoft Says AI-Powered Windows Updates Have Reduced Crashes – ExtremeTech


Microsoft has invested heavily in AI and machine learning, but you wouldn't know it from how little attention it gets compared with Google. Microsoft is using its machine learning technology to address something all long-term Windows users have experienced: faulty updates. Microsoft says that AI can help identify systems that will play nicely with updates, allowing the company to roll new versions out more quickly with fewer crashes.

It seems like we can't get a single Windows update without hearing some stories of how it completely broke one type of system or another. You have to feel for Microsoft a little: the Windows ecosystem is maddeningly complex, with uncountable hardware variants. Microsoft started using AI to evaluate computers with Windows 10, version 1803 (the April 2018 Update). It measured six PC health stats, assessed update outcomes, and fed all the data into a machine learning algorithm. This tells Microsoft which computers are least likely to encounter problems with future updates.

By starting with the computers with the best update compatibility, Microsoft can push new features to most users in short order. With most OS rollouts, things move very slowly at first while companies remain vigilant for problems. PCs the AI determines are likely to have issues get pushed down the update queue while Microsoft zeroes in on the bugs.
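The staged rollout described above can be illustrated with a toy sketch: score each PC on a handful of health metrics and offer the update to the lowest-risk machines first. The metric names, weights, and scaling below are invented for illustration; Microsoft has not published its actual model:

```python
# Toy sketch of ML-guided staged rollout: rank machines by a predicted risk
# score so the healthiest machines receive the update first. All metrics and
# weights here are hypothetical.

def predict_failure_risk(machine):
    """Toy risk score in [0, 1]: more crashes and driver issues mean higher risk."""
    score = (0.5 * machine["recent_crashes"]
             + 0.3 * machine["driver_issues"]
             + 0.2 * machine["failed_past_updates"])
    return min(1.0, score / 10.0)

def rollout_order(fleet):
    """Return machine IDs sorted from lowest predicted risk to highest."""
    return [m["id"] for m in sorted(fleet, key=predict_failure_risk)]

fleet = [
    {"id": "pc-a", "recent_crashes": 4, "driver_issues": 2, "failed_past_updates": 1},
    {"id": "pc-b", "recent_crashes": 0, "driver_issues": 0, "failed_past_updates": 0},
    {"id": "pc-c", "recent_crashes": 1, "driver_issues": 3, "failed_past_updates": 0},
]

order = rollout_order(fleet)  # healthiest machines get the update first
```

In the real system the risk score would come from a model trained on telemetry and past update outcomes rather than hand-picked weights, but the ordering logic is the same.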

The ML models seem effective, even if Microsoft didn't bother to label the Y-axis.

The first AI-powered deployment was a success, with adoption rates higher than all previous Windows 10 updates. Microsoft expanded its original six PC metrics to a whopping 35 as of the Windows 1903 rollout (May 2019). The company claims this makes update targeting even more accurate. This does not guarantee perfect updates, though. Microsoft's blog post glosses over the 1809 update from late 2018. That rollout used AI technology, but you might recall the widespread file-deletion bug that caused Microsoft to pause the release. AI might help determine compatibility, but it can't account for unknown bugs like that.

Still, Microsoft is happy with the results from its machine learning deployments. According to the new blog post, systems chosen for updates by the algorithm have fewer than half as many system uninstalls, half as many kernel-mode crashes, and one-fifth as many post-update driver conflicts. Hopefully, you can look forward to fewer Windows update issues going forward, and you'll have AI to thank.


Read the original here:

Microsoft Says AI-Powered Windows Updates Have Reduced Crashes - ExtremeTech


New Microsoft Report Claims U.K. Is Behind The Rest Of The World On AI – Forbes

Organizations currently using AI outperform those that don't by 11.5%. Despite this, only 24% have an AI strategy in place.

A new report, unveiled October 1 by Microsoft UK, claims that British organizations risk being overtaken by their global counterparts unless the use of artificial intelligence (AI) technology is accelerated.

The report, conducted by YouGov and in partnership with Goldsmiths, University of London, focused on more than 1,000 business leaders and 4,000 employees, and includes interviews with leading industry experts from organizations such as M&S, NatWest, Renault F1 Team, Lloyds Banking Group and the NHS. Its findings demonstrate that organizations currently using AI outperform those that don't by 11.5%, but despite this, only 24% have an AI strategy in place.

The U.K. is also at risk of falling further behind the likes of the U.S. and China if attitudes to AI remain the same, given that 74% of the nation's business leaders doubt the U.K. even has the socio-economic structures in place to lead in AI on the global stage.

Cindy Rose, CEO of Microsoft UK, had a clear message for organizations that might be slow on the uptake of AI:

U.K. businesses and public sector organisations that forgo or delay implementing AI solutions risk missing the boat on driving down costs, increasing their competitive advantage and empowering their workers. Given this moment, where both U.K. leadership and competitiveness on the global stage is more vital than ever, there is no doubt that fully embracing AI-led digital transformation is a critical success factor for U.K. businesses, government and society.

AI In Healthcare

Microsoft's report found that U.K. healthcare is actually at the forefront of AI innovation, with almost half (46%) of organizations reporting that they use AI. Last year saw an increase of 8%, with the biggest leaps made in research, robotic process automation (RPA) and other automation, as well as voice recognition and touchscreen technology. That said, AI is still primarily restricted to small, localized pilot projects rather than big contracts.

A robotic arm for brain surgery is seen at the 2019 World Robot Conference in Beijing on August 20, 2019. In healthcare, the biggest AI leaps have been made in research, robotic process automation (RPA) and other automation, as well as voice recognition and touchscreen technology.

Progressing this experimentation to full implementation will certainly require a culture shift, and the report identified some interesting challenges.

Clearly, there are two main areas of significant improvement that organizations must focus on to increase uptake and value of AI: communication between staff and universal understanding of AI amongst the workforce.

I spoke to Clare Barclay, chief operating officer at Microsoft UK, about how Microsoft intends to address these conclusions; plans are already underway for an education program called the AI Business School:

"We have developed the AI Business School, tailored for healthcare, to train healthcare professionals in a non-confrontational way. We're thinking about how we truly help leaders understand the technology, the culture, the strategy and the ethical implications. Some programs will be tailored for a specific customer, like a hospital, and outside of that we will be running a set of programmes in-store and at other locations across the U.K. Leaders will hear from other healthcare professionals, startups, technology providers etc. so they can understand and have meaningful conversations about AI. We've also committed to training 30,000 front-line staff."

Microsoft has committed to training 30,000 front-line staff and leaders at its AI Business School.

Microsoft's healthcare industry lead, Stephen Docherty, is focused on ensuring practical benefits arise from this report. He was previously Chief Information Officer (CIO) at South London and Maudsley NHS Trust (SLAM) and knows the issues at the coalface only too well, as well as the benefits AI could bring if implemented correctly.

On the AI Business School, he says it will be important in enabling all healthcare workers to have conversations that lead to change, "talking in the language of value propositions, culture, data and ethics." "Being from the front-line myself, I can see huge value in this if executed well."

Overall, for Docherty, the report was positive as it showed that people are beginning to use AI, but he's now keen to see the advantages at scale:

The biggest thing for me is around clinician time. When I was a CIO, I saw people having to feed compliance information into multiple systems, using multiple logins, getting frustrated and burning out. Eric Topol talked about giving people in healthcare the gift of time, and AI can really make people's daily lives much better. But to make the most impact, everyone needs to be brought up to speed on AI; you need a clear digital strategy and then a focus on adoption.

Barclay and Docherty both describe how East Suffolk and North Essex NHS Foundation Trust started using AI to reduce its admin burden. There was a sense of fear among the workforce that the technology would displace jobs; however, it took a significant amount of work away from healthcare professionals, saving staff 4,500 hours in the past 12 months. Importantly, this meant eyes off paperwork and back onto patients for that time. Barclay's favorite part of the story is that the AI system is now embraced as part of the team and has even been humanized with a name. Quirky, perhaps, but this does point to the importance of creating the right culture whilst implementing technology.

Dr Yeshwanth Pulijala is the founder of Scalpel, an emerging healthtech startup in the U.K. that uses AI (computer vision and data analytics) to reduce preventable surgical errors and improve operating room efficiency. He agrees with the report, has first-hand experience of the disparity in knowledge and experience of AI, and has a lesson for AI companies in the healthcare space:

"In my experience, the best way to achieve adoption of AI technology is to introduce frontline clinicians, patients and policymakers in the very early stages of product development. I've only found a few hospitals in the U.K. so far that really understand the potential of AI at its core. They are our torchbearers, and we're piloting at six such hospitals to demonstrate improved levels of patient safety." On the AI Business School, Pulijala says it would be a great way to scale this model.

To be effective, reports need to lead to action. I've seen, read and even written recommendations that go unnoticed, doing little more than collecting dust on shelves. It's now up to the relevant teams to deliver, and it's refreshing to hear Docherty's front-line, execution-focussed attitude at Microsoft about seeing them through to action:

"We've talked about it a lot. It's time to get on with it now."

Follow this link:

New Microsoft Report Claims U.K. Is Behind The Rest Of The World On AI - Forbes


An AI startup tries to take better pictures of the heart – STAT

Let's assume you are not an expert highly trained in medical imaging. And let's assume you were invited one day to try out a new technology for heart ultrasounds, diagnostic tools that are notoriously difficult to use because of the chest wall and because some shots must be made while the heart is in motion.

Could you do it?

Maybe. When I was given a shot on a recent day, I was able to take the ultrasound in a matter of minutes with the help of software developed by a San Francisco-based startup called Caption Health. The software told me how to hold the ultrasound probe against the ribs of a model who had been hired for the purpose of my visit, and knew on its own when to snap the image. It was a little like having Richard Avedon's knowledge of photography uploaded into the guts of my iPhone camera.


You can see the image I took of the parasternal long axis view of the heart pumping at the top of this page.

If the technology holds up, Caption, until recently called Bay Labs, could succeed in solving the problem of making heart sonograms easier to obtain. It's already impressed some in the life sciences. Among them is health care executive Andy Page, who spent four years as Anne Wojcicki's right-hand man at 23andMe and a year as the president and chief financial officer at digital health startup Livongo. He was introduced to Caption Health last fall by one of its investors, the billionaire Vinod Khosla. He has chosen to become its chief executive.

I was interested in how AI could impact health care, Page told STAT. Knowing it was a trend that was coming, my thought was that to really impact health care, the AI implementation would have to be straightforward, understandable, practical, trusted. And thats exactly what the company was doing.

The use of AI in ultrasound is becoming a hot area. Butterfly Network, which launched a handheld ultrasound device that is much cheaper than competitors early this year, is also working on AI. Ultromics, based in London, is also working on using AI in ultrasound.

I had used Butterfly's technology two years ago to take images of my carotid artery. The experience of using Caption's was similar in a lot of ways, but it was obvious that the images I captured with the latter technology were harder-to-get shots.

The word revolutionary is probably overused a lot these days with a lot of the tech things we have coming out, but this has the potential to really change how we're treating our patients in the not-distant future, said Dr. Patrick McCarthy, the executive director of the Northwestern Bluhm Cardiovascular Institute, who was the primary investigator of a study of Caption's AI but said he has no financial relationship with the company. McCarthy said he thinks the AI could democratize heart ultrasound by increasing the number of health care professionals who can give the test, meaning that more patients who should have it will.

Caption was founded in 2013 by Charles Cadieu, its president, and Kilian Koepsell, its chief technology officer. Cadieu spent his early adult life moving between the Massachusetts Institute of Technology, from which he has a master's degree in engineering, and the University of California, Berkeley, where he received a Ph.D. in neuroscience. I'm kind of the planning/thinker/architect and Kilian is the tuned-in laser beam to get things done, said Cadieu.

Cadieu and Koepsell both wound up on the founding team at IQ Engines, a company that was involved in using deep learning to identify images. After it was sold to Yahoo in 2013, the pair started working on the idea that deep learning was ready to be applied to medicine. I was always inspired by applying science to medicine, said Koepsell, who grew up in a family of doctors. According to family legend, he said, his great-grandfather was present in the lecture where the use of X-rays was demonstrated for the first time.

Ultrasound is particularly well suited to AI, not just for interpreting images but also for tackling a more immediate challenge: getting the images in the first place.

If you don't do these every day, you get hesitant about, What am I looking at? said Dr. Mark Schneider, chair of the department of anesthesiology and director of applied innovation at Christiana Care in Wilmington, Del. And then you get hesitant to use it.

Right now, the image quality that gets taken of patients is all over the place, said Dr. Arun Nagdev, director of point-of-care ultrasound at Highland General Hospital in Oakland, Calif. The ability to obtain that image is crucial, Nagdev said. Once novice users can use the technology, he foresees hockey-stick growth in ultrasound use.

Page said he thinks of the technology under development as a co-pilot that can assist doctors who have trouble getting particular scans, as well as those who have not used ultrasound much before, a use that could expand to hospitalists (who focus on hospitalized patients), anesthesiologists, and nurses.

Caption Health provided me with unpublished data from a study in which eight nurses with no previous experience in cardiac ultrasound performed four different types of scans on 240 patients.

For assessing patients' left ventricular size and function, as well as pericardial effusion, or fluid around the heart, the AI yielded the same number of usable images: 240 scans were performed for each, and 237, or 98.8%, were of sufficient quality, according to a panel of five cardiologists. For images of the right ventricle, which is harder to see, the results were a bit worse: 222 images, or 92.5% of them, were of adequate quality. Eric Topol, the director and founder of the Scripps Research Translational Institute, commented that this was still a small number of samples for AI work; Caption Health said it respectfully disagrees because the study was prospective. The goal of the study was to show the test was 80% accurate.
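As a quick sanity check, the quoted success rates follow directly from the counts in the text:

```python
# Recomputing the study's quoted success rates from the raw counts above.
scans_per_view = 240
usable = {
    "lv_size_and_function": 237,  # pericardial effusion also had 237 usable scans
    "right_ventricle": 222,       # harder view, slightly worse results
}

rates = {view: 100 * count / scans_per_view for view, count in usable.items()}
# 237/240 = 98.75%, which the article rounds to 98.8%; 222/240 = 92.5%
```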

Caption Health will need to partner with device makers in order to bring its device to market; it does not make ultrasound equipment. It is currently partnered with one ultrasound manufacturer, Terason. Caption has received breakthrough device status from the Food and Drug Administration, which could expedite its regulatory review. Caption said it has raised $18 million to date; a recent valuation is not available.

Page is nothing if not confident. Nothing else exists in the market of this nature, he said.

View original post here:

An AI startup tries to take better pictures of the heart - STAT


Beyond Limits Brings Space-Tested AI to Earth’s Harshest Terrains – PCMag

(Photo by Greg Rakozy on Unsplash)

When sending rovers and robots to traverse rough terrain, from the surface of Mars to the bottom of the ocean, communicating with these devices from afar can be a dicey affair. One false move and million-dollar gadgets turn into bricks and valuable data is toast.

Glendale, California-based Beyond Limits wants to make sure that never happens. Its software adds "human-like reasoning" to technology that solves complex problems in high-risk environments. As CEO AJ Abdallat explains, these "bio-inspired algorithms...imitate the functions of a human brain."

After one year of exclusivity with BP, which also invested $20 million in the company, Beyond Limits has signed a $25 million contract with Xcell to build the world's first power plant guided by cognitive AI in West Africa. PCMag spoke with Abdallat ahead of his keynote at the Phi Science Institute AI Summit in Jordan. Here are edited and condensed excerpts of our conversation.

PCMag: Beyond Limits is deploying many technologies developed by your co-founder and CTO, Dr. Mark James, a research scientist who worked at NASA-JPL for over 20 years. How did you two meet?

AJ Abdallat: We met in '98 at Caltech, which manages the [NASA] Jet Propulsion Laboratory. I was working with the Caltech president and the Technology Transfer Program to commercialize technologies that were developed for the space program and make them available on Earth.

Give us the backstory on Dr. James' work at JPL.

Mark designed and wrote NASA's first AI system, the Spacecraft Health Automated Reasoning Program [SHARP], which was used for the Voyager mission. It monitored all the system and performance data from the Deep Space Network. Since Neptune is 2.7 billion miles away from Earth, it takes three huge DSN antenna arrays in North America, Spain, and Australia to communicate with the spacecraft.

As Voyager 2 headed toward Neptune, Mark's AI system predicted the imminent failure of a key communications transponder (at mission control) that would have caused a catastrophic break in comms. The spacecraft could have burned up in the Neptune atmosphere and the mission would have terminated. Instead, engineers were able to replace the transponder just in time and the mission continues to this day, more than 10.3 billion miles from Earth.

An artist's concept of NASA's Voyager spacecraft. (Credit: NASA)

Tell us about the Autonomous AI for Mars Opportunity Rover.

Solar energy is the lifeblood of spacecraft like the Mars Opportunity Rover, but the conditions up there are harsh, unknown, and unpredictable. So the management of energy is mission-critical. A key component of Beyond Limits' AI solutions is a technology called the Hypothetical Scenario Generator (HSG), a revolutionary way of reasoning in the presence of missing and misleading information, developed by JPL for NASA.

This advanced software system analyzes data inputs, generates hypothetical situations, and reasons optimal behaviors and results. Early Mars missions suffered from a lack of information about conditions on the surface. Human expertise and limited geographical data were loaded into the HSG. But when bad Mars weather threatened the mission, HSG did not have access to historical weather data. There was no historical data, as it was a first-of-its-kind mission.

But HSG is capable of learning autonomously.

Exactly. When the Rover was having trouble charging its batteries, it detected clouds and wind and associated the clouds with particulates, which no one had ever encountered on Mars. HSG reasoned that a cloud might deposit particulates on the solar panels and conducted an autonomous experiment by rotating its solar wings upside down to shake off the dust. It worked, and the Rover's health was assured for years to come. JPL scientists on Earth noticed that HSG had taught itself to correlate hypotheses that had been proven to be correct with sensor data from the Rover.

(Photo by NASA on Unsplash)

That's amazing.

HSG learned on its own to optimize the behavior of the Rover to conserve power, deploy solar cells safely, and keep the system charged, even during harsh Mars sand and wind storms. HSG had induced new weather models from scratch. The results kept the mission going far beyond its expected lifespan.

How do Beyond Limits' AI solutions differentiate from others in the market today?

We specialize in solving complex problems in high-risk environments. Unlike the conventional machine learning, neural networks, and deep learning techniques that are gaining traction today, we take a different approach by adding a symbolic reasoning layer to produce cognitive, human-like reasoning. Beyond Limits has deep roots in what we call bio-inspired algorithms that imitate the functions of a human brain. It allows us to do things like deductive, inductive, and abductive human-like reasoning.

Your AI isn't a black box, then.

No. Unlike conventional AI approaches, Beyond Limits AI systems are explainable. The results our systems produce have transparent and detailed audit trails, which are interpretable by both humans and machines. This sets our systems apart from 'black box' conventional AI systems that cannot explain how they arrived at a recommendation. Our systems provide an audit trail that explains the rationale and evidence for the answer in natural language. In high-value industries, establishing trust is important and you need explainability to do this.

Do you provide updates on the core technology back to JPL as part of your licensing agreement?

Yes. We enhance some of the IP blocks we've licensed from Caltech/JPL and contribute them back to the core. Our people frequently work with Caltech/JPL people and I'm on the board of advisors for Caltech's CAST lab. We do not supply software to NASA currently. Space is our origin, but our mission today is to solve problems here on Earth.

(Photo by WORKSITE Ltd. on Unsplash)

In fact, your original client and investor was BP, which called on your expertise after the Deepwater Horizon disaster. How did you beat out GE, IBM, and other incumbents in that space?

GE and IBM are really good in their fields of conventional AI, but what BP was looking for was a cognitive AI approach. Conventional AI is a great way to analyze a lot of data and tell you the what, but you need cognitive AI to explain the answer and tell you the why. Cognitive AI is needed for true explainability and conventional black box approaches simply cannot explain their answers, which means the engineers cannot fully trust the system or apply it to high-value assets. As AI systems are rolled out at BP, they will increase efficiency, generate revenue, and diagnose problems and predict remedies. All of which could help prevent disasters like Deepwater Horizon from happening again.

BP's exclusivity with Beyond Limits recently ended. What's next for you?

The natural next step for us was to expand into natural resources and power management. We recently announced a $25 million project with Xcell for the world's first cognitive AI power plant. We are also working with a car company to monitor driver health while in the car.

Process manufacturing is also going to be a focus for us. These are very complex factories that are running 24/7, 365 days a year. Cognitive AI can make these factories run more efficiently with less risk and downtime while maximizing profits. One of the big highlights is that we've proved that our cognitive approach works for a very tough commercial audience. We are working in high-value, high-risk industries.

Finally, I have to ask, sticking with the space origin story: have you built a [benign] HAL 9000?

We are not comfortable with the sci-fi cliches about deadly robots, killer cyborgs, and so on. Artificial General Intelligence [AGI], as a concept, is one that's as smart as a human. This is science fiction. The compute required for such a super-powered AI system would fill a football arena and require a huge power plant. Our systems accommodate humans in the loop. The role of our AI systems at Beyond Limits is as an advisor to humans to help with decision-making. Additionally, in many cases, our technology can be embedded in the sensors themselves.

You've built an AI that is more of an IA [Intelligent Augmentation] to us bio-beings, then?

Yes, humans make the final decisions with our systems.

AJ Abdallat will be giving the keynote at the Phi Science Institute AI Summit in Jordan on Oct. 29.

Read this article:

Beyond Limits Brings Space-Tested AI to Earth's Harshest Terrains - PCMag


Cities aren’t even close to being ready for the AI revolution – Axios

Globally, no city is even close to being prepared for the challenges brought by AI and automation. Of those ranking highest in terms of readiness, nearly 70% are outside the U.S., according to a report by Oliver Wyman.

Why it matters: Cities are ground zero for the 4th industrial revolution. 68% of the world's population will live in cities by 2050, per UN estimates. During the same period, AI is expected to upend most aspects of how those people live and work.

The big picture: Many cities are focused on leveraging technology to improve their own economies such as becoming more efficient and sustainable "smart cities" or attracting companies to compete with Silicon Valley.

What they found: No city or continent has a significant advantage when it comes to AI readiness, but some have parts of the recipe.

By the numbers: Here are the survey stats that stood out.

Cities to watch:

Reality check: Cities can't deal with the repercussions of AI on their own. National and regional governments will also have to step in with policy strategies in collaboration with businesses.

Go deeper: See how your city measures up

Read more from the original source:

Cities aren't even close to being ready for the AI revolution - Axios

Every business will rely on AI in five years and most people are worried they're being left behind – Evening Standard

Microsoft believes that every business will be an AI business in the next five years, but there are concerns that people don't fully understand the technology and will be left behind in the AI revolution.

Ahead of Future Decoded, the tech giant's annual conference at the ExCeL Centre in London, it released a new report, named "Accelerating Competitive Advantage with AI", covering how businesses across the UK are using the technology.

The report shows that there is more awareness and adoption of AI overall among businesses, with 56 per cent of businesses adopting AI. However, less than a quarter of these organisations (24 per cent) have an AI strategy, and 96 per cent of employees surveyed report that their bosses are adding AI without consulting them on the technology. This is fuelling anxiety around the technology, as well as concerns over job security.

"Based on the progress we're seeing, we believe that every company will be an AI company in five years," Microsoft's UK COO Clare Barclay told the Standard. "As organisations start to use or think about using [AI], we want to encourage more open dialogue on this topic."

"Open communication is absolutely critical."

To encourage discussions and education around AI, Microsoft is launching a new AI Business School in the UK. It has been running as a pilot for the past 12 months, and focuses on explaining the technology of AI, how it can inform strategy, and the culture around it. For instance, one aspect of the programme will focus on the ethical decisions leaders have to make when it comes to AI, including how to construct and implement an ethical AI framework.

But it's not just for business leaders; while it will function as a physical space, there are also plans to implement online workshops for anyone to access. "We want to make sure we're driving broad skills development. We've made a commitment to train 30,000 public sector employees and 500,000 UK citizens as part of that," adds Barclay.

Microsoft isn't the only tech company examining the role of AI in the UK. Recently, Samsung launched its new FAIR Future Initiative, which aims to educate the public on AI in order to involve everyone in the deployment of the tech. According to research carried out by Samsung, which surveyed 5,250 people in the UK and Ireland, 51 per cent feel AI will have a positive impact on society as a whole, yet around 90 per cent of people feel it is too complex to understand.

"That's a significant challenge," Teg Dosanjh, director of Connected Living at Samsung UK and Ireland, told the Standard. "We want to tackle that by having an online hub, so people can understand what AI is today, demystify the terminology, and raise awareness around that."

As well as the FAIR Future online hub, Samsung is taking its FAIR Future work on the road to encourage people across the UK to get hands-on with the tech. Its first stop will be at the Norwich Science Festival later this month.

Samsung has been in the AI space for a while, particularly when it comes to smart devices and appliances in the home with its Connected Things platform. So why is now the right time for it to investigate attitudes to AI in real life? Dosanjh says it comes down to the way the technology is accelerating.

"We're not talking about just algorithms anymore, typically consistent calculations; you've got these neural networks. And the technology and the capabilities of neural networks and machine learning have developed significantly over the last five years."

He believes the onus is on tech companies to explain what's going on in the industry. The survey found that most people gained their knowledge of AI from the media, word of mouth and fiction, leaving governments and tech companies languishing behind. "We haven't done a very good job of bringing people on the journey with us on AI. So we have to start at some point and involve everyone in that discussion."

"More of our lives and society are going to be impacted by AI, and we've got to be very conscious of how we capitalise and bring those opportunities to life."

See the original post here:

Every business will rely on AI in five years and most people are worried they're being left behind - Evening Standard

How Europe's AI ecosystem could catch up with China and the U.S. – VentureBeat

From Alibaba and Baidu to Google, Facebook, and Microsoft, China and the United States produced virtually every one of the top consumer AI companies in the world today. That leaves Europe trailing behind the U.S. and China, even though Europe still has the largest community of cited AI researchers.

Startup founders, analysts, and organizations seeking to bring ecosystems together for collective action pondered how the European AI ecosystem can catch up with China and the United States at TechBBQ, a gathering of hundreds of Nordic tech startups held recently in Copenhagen.

Presenters argued that Europe has to turn things around not just for the good of the European economy, but also to provide the world with an alternative to the corporate-driven approach of the U.S. and the state-driven approach of China.

"If you look today at some of the spending, which is devoted to artificial intelligence and frontier technologies, we're pretty much squeezed between the U.S. and now China, and China is leading," said Jacques Bughin, a senior advisor at the McKinsey Global Institute.

Bughin and others at McKinsey in February coauthored the "Notes from the AI frontier" report, which evaluates the European AI ecosystem and identifies areas where Europe can begin making strides.

Europe edges out the U.S. in total number of software developers (5.7 million to 4.4 million), and venture capital spending in Europe continues to rise to historically high levels. Even so, the U.S. and China beat Europe in venture capital spending, startup growth, and R&D spending. The U.S. also outpaces Europe in AI, big data, and quantum computing patents.

A Center for Data Innovation study released last month also concluded that the U.S. is in the lead, followed by China, with Europe lagging behind.

Multiple surveys of business executives have found that businesses around the world are struggling to scale the use of AI, but European firms trail major U.S. companies in this metric too, with the exception of smart robotics companies.

This trend could be in part due to lower levels of data digitization, Bughin said.

About 3-4% of businesses surveyed by McKinsey were found to be using AI at scale. The majority of those are digital native companies, he said, but 38% of major companies in the U.S. are digital natives compared to 24% in Europe.

"In Europe, you have two problems: You've got a startup problem, but you also have an incumbency problem, where most of the companies [are] actually lagging in terms of knowledge of technologies and division of these technologies compared to the U.S.," Bughin said.

Then there's McKinsey's AI Readiness Index, which combines eight factors (human skills, investment capacity, number of AI startups per capita, and infrastructure, among others) thought to influence a country's ability to build and support an AI industry and implement the technology in existing industries. In this area, the top-ranking countries are the U.S. and select European countries, such as Ireland, Sweden, Finland, and the U.K.
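
McKinsey does not publish the exact construction of the index, but a composite indicator of this kind is typically built by normalizing each factor across countries and averaging the normalized scores. The sketch below illustrates the idea; the country labels and factor values are entirely hypothetical, not McKinsey's data:

```python
# Illustrative composite-index sketch: min-max normalize each factor
# across countries, then average the normalized scores per country.
# All names and numbers below are hypothetical.

def min_max(values):
    """Rescale a list of values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

countries = ["A", "B", "C"]
factors = {
    "human_skills":        [70, 55, 40],
    "investment_capacity": [90, 60, 30],
    "startups_per_capita": [5.0, 2.5, 1.0],
    "infrastructure":      [80, 75, 50],
}

# Normalize each factor, then average across factors for each country.
normalized = {name: min_max(vals) for name, vals in factors.items()}
index = [
    sum(normalized[name][i] for name in factors) / len(factors)
    for i in range(len(countries))
]

ranking = sorted(zip(countries, index), key=lambda p: -p[1])
print(ranking)  # country A tops every factor here, so it leads the index
```

A real index would also weight the factors by judged importance; an unweighted average is the simplest defensible starting point.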

China excels in categories like ICT connectedness, investment capacity, and AI startups, but the country's lower preparedness in categories like digital readiness bumps it down to a rank of 7th, between Estonia and Holland.

Countries in southern and eastern Europe generally rated lower in each of the eight AI enabler categories than those in western or northern Europe.

Countries with vibrant, innovative AI startups likely to scale and go international typically have local venture capitalist funding, as well as state investments to build a strong infrastructure that supports businesses and allows the formation of a market.

For those lagging behind, turning things around is essential, Bughin said, because AI will be a major driver of GDP growth in the decades ahead.

"If laggard European countries were to close the current readiness gap with the United States, Europe's GDP growth could accelerate by another 0.5 point[s] a year, or an extra €900 billion by 2030," the report reads.

Bughin has a number of ideas for how Europe can transform into a leader in AI. To grow the AI ecosystem in Europe, he suggests, investment will have to go beyond gaining a technical understanding of how machine intelligence works.

"AI is more than technology. As I say, it's about scalability. You need social, emotional skills, you need technical skills, you need digital skills. It's a major transformation, and it's all about ecosystem," he said. Earlier this year, OpenAI CTO Greg Brockman also posited the idea that developing emotional fortitude can be a necessary prerequisite for tackling the technical details of AI.

Bughin also recommends that startups recognize there's a bigger picture than their own company. "It's really about not only you as an entrepreneur, but an ecosystem of entrepreneurship," Bughin said. "It matters not only because as a small startup you want to make money, but to make money you need a market."

Finally, Bughin recommends that governments and businesses invest in the growth of an AI ecosystem, but that funding of the eight major areas laid out in the AI Readiness Index be ongoing, not a fleeting investment for a few years.

"If you want the revenue of the market, you need to stand there for quite a while," he said. "It's not the game of three years. It's a game of 10 to 15 years."

Another route to differentiating Europe from the U.S. and China is a more privacy-driven approach built on the back of human rights-respecting regulation like GDPR. But when asked about the idea, Bughin said, "This is a narrative, not necessarily a business model."

Bughin believes there are B2B2C opportunities in sectors like biotechnology, health care, and agriculture that can spill over into the rest of the economy. In that model, opportunities may outgrow consumer-driven business models, and privacy won't carry the same importance in B2B2C as it does in the B2C space.

At TechBBQ, Digital Hub Denmark spoke onstage about the opportunities and challenges Europe faces due to AI. With a prominent spot directly across from the main stage, the organization, created to promote entrepreneurship, also hosted an AI design sprint workshop and a discussion among about half a dozen AI startups, such as 2021.ai and Neural AI, on topics like how to create a Danish AI cluster.

Digital Hub Denmark CEO Camilla Rygaard-Hjalsted thinks Europe will never catch up with the AI investment flowing to businesses in the United States and China, but that Europe can still become a global leader.

"I strongly believe that we can become frontrunners within an ethical application of AI in our societies," she said. "In the short run, the stronger European regulation compared to China and the U.S. in this field might decrease our ability to scale revenue; however, in the long run, this focus on AI for the people can serve as our competitive advantage, and we become [a] role model for the rest of [the] world, one can only hope."

Above: A timeline of major events in AI history dating back to the 1950s created by artist Hjotefar.

Image Credit: Digital Hub Denmark

Like Bughin, she believes AI will be an important driver of GDP in Europe and that a talent shortage will be a major issue in the decade ahead. To support the continued growth of a European AI ecosystem, she supports accelerating digital frontrunner companies and ensuring that startups gain access to public data.

One example of extraordinary access to public data growing a business comes from Corti, a Danish company that used recordings of 1-1-2 emergency calls with operators to create a deep learning algorithm that can detect cardiac arrest events during phone calls.

Rygaard-Hjalsted also believes Denmarks aggressive climate change goal to reduce greenhouse gas emissions by 70% by 2030 compared to 1990 levels could attract talent.

"Today's scarce resource is really talent. As the CEO of Digital Hub Denmark, I believe that the combination of AI for the people and the relentless effort to solve the rising climate issues will make us attractive to international AI talent looking for purpose, and thus provide the international investments needed to scale climate solutions," she said.

Anna Metsäranta is a business designer at Solita, a B2B company that helps other businesses get on the path to becoming AI companies by digitizing their operations, helping them become data-driven, and developing AI models.

One of the biggest challenges she spelled out during a panel conversation about the European AI ecosystem is how hype and a lack of basic understanding keeps business leaders from taking decisive action.

"The problem with the inflated expectations caused by the hype is that when senior management expects miracles, and they expect that they can just pour all of the data into this magical black box called AI and fantastic insight will come out of it, they don't see the potential of the realistic use cases, which might be quite modest," she said. "And they should be modest to get started with the technology, to start growing your maturity and your understanding. That [expectation] leads to lack of funding, [and then] we can't get companies to fund these initiatives."

In other words, hype inflates expectations, while low levels of understanding lead to a lack of vision among business executives.

"If you don't understand the technology, then you firstly don't understand its possibilities. And this leads to a lack of vision; you can't think, 'What could I do with this technology? How could it help my business transform?' That's one problem. The other problem is that you don't see its limitations. Then you buy into this ridiculous hype, these sensationalist news headlines that typically state AI can do anything, or it's a threat to humanity that will take all of our jobs and then it will kill us all off," she said.

Some executives try to buy their way out of learning these things by hiring a lot of data scientists. Data-driven companies need data scientists, but hiring alone doesn't work, because business leaders still have to make decisions about where the company is headed, Metsäranta said.

AI will become ubiquitous in business the same way AI is becoming ubiquitous in smartphones, she said. So in order to avoid the negative impact of inaccurate expectations and ensure funding for AI projects, she prescribes more education for business executives and killing the myth of the Terminator scenario in AI.

In response to Metsäranta's call for more informed opinions on AI, Christian Hannibal, director of digital policy at Dansk Industri, suggested more programs like an AI public education initiative launched in Finland last year. In June 2018, the University of Helsinki and Finnish tech firm Reaktor launched the Elements of AI course to demystify the technology, with the goal of educating 1% of the Finnish population.

More than 200,000 people have completed the free course thus far, according to the Elements of AI website.

"I would very much like to see this initiative rolled out on a European scale, because if there's something Europe can do that the U.S. and China haven't done, [it] is to democratize the knowledge of AI so that we go beyond the hype and give a lot more people insights about what the technology can do in their trucking companies and sawmills and hospitals and whatnot," he said.

AI conversations onstage at TechBBQ revolved around a sense of urgency that Europe needs to make strides now to be considered alongside the United States and China. Some of the ways Europe can get there, like the need for R&D spending or funding for startups, are the same as anywhere else in the world. But speakers at TechBBQ working with both large corporations and startups seem to believe Europe can also lean on its unique assets like aggressive climate change initiatives and privacy regulation.

If Europe can leverage its distinct advantages, even if it can't catch up in total venture capital spending, it could successfully create a vision of what the world can be with AI that is different from the Chinese model, which generally bends toward the state, and the U.S. model, which generally bends toward corporations.

Continued here:

How Europe's AI ecosystem could catch up with China and the U.S. - VentureBeat

Watch AI help basketball coaches outmaneuver the opposing team – Science Magazine

By Edd Gent, Sep. 27, 2019, 8:00 AM

When it comes to teaching basketball players how to execute a winning drive to the hoop, a tactic board can be a coach's best friend. But this top-down view of the court has a major limitation: It doesn't reveal how the opposing team will respond. A new program powered by artificial intelligence (AI) could change that.

Here's how the technology works. A coach sketches plays on a virtual tactic board on their computer, representing their own players as red dots and the defending team as blue dots. Once they drag their virtual players around to indicate movements and passes, an AI program trained with player movement data from the National Basketball Association converts these simplified sketches into a realistic simulation of how both offensive and defensive players would move during the play.

The underlying mechanism is a generative adversarial network, which pits two AI programs against each other. One takes sketches and tries to generate realistic player movements; the other provides feedback on how closely these match real-world data. Over time, this results in increasingly realistic plays.
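
As a toy illustration of that adversarial loop (not the researchers' actual sketch-to-trajectory model, which is far larger), the sketch below pits a one-line linear "generator" against a logistic-regression "discriminator" on 1-D data drawn from N(3, 1). Every detail here, from the learning rate to the network sizes, is an assumption chosen for readability:

```python
# Minimal adversarial-training sketch: the generator maps noise onto the
# real data distribution; the discriminator scores samples as real/fake.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = a*z + c.
w, b = 0.1, 0.0
a, c = 1.0, 0.0
lr = 0.05

for step in range(2000):
    z = random.gauss(0, 1)          # noise in
    fake = a * z + c                # generator output
    real = random.gauss(3, 1)       # a sample of real data

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        err = sigmoid(w * x + b) - label   # gradient of BCE w.r.t. logit
        w -= lr * err * x
        b -= lr * err

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    grad_logit = d_fake - 1.0       # gradient of -log D(fake) w.r.t. logit
    a -= lr * grad_logit * w * z
    c -= lr * grad_logit * w

samples = [a * random.gauss(0, 1) + c for _ in range(1000)]
mean = sum(samples) / len(samples)
print(f"generated mean ~ {mean:.2f} (real data mean is 3.0)")
```

The same alternating structure, with neural networks in place of these linear models and player trajectories in place of scalars, is what lets the basketball system learn increasingly realistic plays.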

The system could show coaches and players how defenders are likely to react to new moves, and how they should, in turn, change their tactics, the researchers will report next month at the Association for Computing Machinery International Conference on Multimedia in Nice, France. Although basketball fans and non-fans couldn't reliably distinguish simulations from real plays, top-level players often could. That suggests the movements are still not entirely realistic, and the model still needs refinement.

Continue reading here:

Watch AI help basketball coaches outmaneuver the opposing team - Science Magazine

Artificial Intelligence Is Creating New Art From The Work Of A Deceased Manga Great – Kotaku

Kotaku EastEast is your slice of Asian internet culture, bringing you the latest talking points from Japan, Korea, China and beyond. Tune in every morning from 4am to 8am.

One of Japan's greatest manga creators, if not the greatest, is having his work tapped with the power of AI. Osamu Tezuka died in 1989, but next year, artificial intelligence will create new art based on Tezuka's work.

Tezuka is known for influential works like Astro Boy, Princess Knight, and Kimba the White Lion.

Toshiba's latest data project is called Kioxia; using its high-speed, large-capacity memory, artificial intelligence will create new Tezuka art based on the immense volume of digitized work the artist produced during his lifetime.

The project is supported by Tezuka Productions.

Clarification: The headline of this post was altered slightly for clarity, and the language regarding the use of AI was changed to reflect the fact that new art is being created based on Tezuka's work.

Read this article:

Artificial Intelligence Is Creating New Art From The Work Of A Deceased Manga Great - Kotaku

Second AI Awards See A 50% Increase in Entries from A Diverse Range of Organisations – socPub

Accenture, AIB, ESB, National Transport Authority, INFANT Centre amongst those recognised for innovation in Artificial Intelligence.

Entries from Trinity College, UCC, Waterford Institute of Technology and DCU reflect increase in academic research in AI.

Nominations demonstrate innovation across health, customer service, and science.

Dublin, October 1st, 2019: Today, AI Ireland announced a record 50% increase in entries in its second year, showcasing innovation in Artificial Intelligence across healthcare, finance, customer service, communications, and academia.

The second AI Awards 2019 will be presented at the Gibson Hotel on Wednesday 20th November. These Awards, which are part of the not-for-profit organisation AI Ireland, support the growth and development of Data Science, Machine learning, and Artificial Intelligence in Ireland.

This year also saw the introduction of a new category, Intelligent Automation - Best Use of RPA (Robotic Process Automation), to reflect this rapidly growing sector within AI in the Irish market.

"The increase in nominations shows the Irish AI sector is innovating with potentially game-changing outcomes for organisations and society," said Mark Kelly, Chief Customer Officer at Alldus and Founder of AI Ireland.

"The sheer breadth of AI and Machine Learning research, development and implementation across private, public and academic organisations is exciting, and shows how we are embracing the potentially massive opportunities and benefits that AI can bring."

Cathriona Hallahan, Managing Director, Microsoft Ireland, said: "At Microsoft, we are infusing AI into everything we deliver, across our computing platforms and experiences, as we believe democratising access to intelligence will help solve the world's most pressing challenges. We are committed to driving AI adoption and innovation in Ireland, which will support the ambition of Ireland being a digital leader in Europe. The AI awards are key to fostering and recognising homegrown talent and entrepreneurship, and we are delighted to support the awards for a second year and to see the quality of the applications increase. We look forward to the exciting innovations that will be showcased at the ceremony in November, in the hope they will enable more people and organisations to do more in the future."

The 2019 AI Awards Shortlists.

Best Application of AI in a Large Enterprise

Accenture for their Job Matching solution.

Allied Irish Bank (AIB) for their AIB Services Insights Project.

Johnson Controls for their first-of-its-kind AI-powered security product, known as Converged Cyber-Physical Security (CCS).

Mastercard Labs for Duka Connect, a Mobile Point of Sale (mPoS) solution for small merchants in emerging markets.

Best Use of AI in a Consumer/Customer Service Application

Accenture for their Knowledge Exchange (KX) AI integration.

Idiro Analytics for their work with Digicel in integrating AI into customer services to reduce churn.

SAP for their Business Operations and Self-Healing (BoSh), an out-of-the-box AI platform to support business automation.

Webio for their Conversational Middleware Service, enabling organisations to connect applications and digital assets across different communications platforms (from SMS and WhatsApp to Alexa).

Best Application of AI in a Student Project

Meredith Telford, Ulster University for her work on using AI to accelerate production of 3D printed cardiac models using machine learning, improving diagnosis and enabling surgeons to practice virtually before surgery.

Cian Vaughan, National College of Ireland for his work integrating movement and gesture recognition for Irish Sign Language to help deaf people fluently interact with devices in the future.

Ciaran O'Mara, University of Limerick for his work on Machine Learning Based Traffic Network Analysis Tool, using AI for traffic management.

Rory Boyle, Trinity College Dublin for his work on brain predicted age difference score (brainPAD), a way of representing brain health outside of a patients chronological age.

Best Use of AI in Sector

Liopa for their LipRead technology that uses AI in Visual Speech Recognition (VSR) or automated lip reading.

National Transport Authority - for their use of AI to better analyse data to provide high-quality accessible and sustainable transport solutions nationwide.

Soapbox Labs for their work on developing speech recognition technology for children.

TVadSync for its Smart TV based Automatic Content Recognition (ACR) that provides brands and marketers insight into the ad effectiveness of their media campaigns as well as deep behavioural analysis of their customer base.

NEW AWARD! Intelligent Automation - Best Use of RPA & Cognitive

Doosan Bobcat for their work with UiPath using Robotic Process Automation to automate tasks for employees, delivering savings of 400 hours per month.

ESB for using AI to optimise its rollout of smart meter technology.

HealthBeacon for their digitally connected Smart Sharp Bin service that supports patients who self-inject medications at home, helping ensure adherence to medical treatment schedules.

McKesson for their work integrating AI tools such as RPA to standardise their approach to the various sources of data and systems that support day-to-day activity across Europe.

Best Application of AI in an Academic Research Body

Connect Centre, Dublin City University for their work using AI to optimise superfast internet speeds by tackling non-linear distortions in optical fibre networks.

Connect Centre, Waterford Institute of Technology for their work on SmartHerd, an IoT-based system to predict lameness in dairy cattle.

INFANT Centre, University College Cork for their work using deep learning for neonatal seizure detection.

Sigmedia, Trinity College Dublin for their work in combining visual and speech cues in a speech recognition system.

Best Application of AI in a Startup

Getvisibility for their use of Machine Learning and Natural Language Processing to discover and categorise unstructured data sets.

Rinocloud for their work using AI to reduce the time and risk involved in researching, developing and enhancing remedies for skin diseases that affect 3% of the global population and 20% of children under 10 years of age.

Telenostic for their work using deep learning, specifically convolutional neural networks (CNNs), to accurately model and predict parasitic infections (PI) within animals.

Truata for their work in providing a new standard in data hosting and anonymisation. Using proprietary processes, methodologies and intellectual property, the solution makes it possible for organizations to analyse their data while complying with privacy and data protection regulations.

Microsoft Ireland is the principal sponsor of the 2019 AI Awards. The awards are also supported by IDA Ireland, Alldus, ISG, McKesson, Mazars, Mason Hayes & Curran, the ADAPT Centre and GeoDirectory.

For more information, visit http://www.aiawards.ie

View post:

Second AI Awards See A 50% Increase in Entries from A Diverse Range of Organisations - socPub

AI may be as effective as medical specialists at diagnosing disease – CNN

Researchers carried out the first systematic review of existing research into AI in the health sector and published their findings in The Lancet Digital Health journal.

The review focused on an AI technique called deep learning, which employs algorithms, big data, and computing power to emulate human intelligence.

This allows computers to identify patterns of disease by examining thousands of images, before applying what they learn to new individual cases to provide a diagnosis. Excitement is building around the technology, and the US Food and Drug Administration has already approved a number of AI algorithms for use in healthcare.

AI has been hailed as a way to reduce the workload for overstretched medical professionals and revolutionize healthcare, but so far scientific research has failed to live up to the hype.

"Of the 20,500 articles reviewed, fewer than 1% were found to be sufficiently robust," said Professor Alastair Denniston of University Hospitals Birmingham NHS Foundation Trust, UK, which led the research, in a statement.

"Within those handful of high-quality studies, we found that deep learning could indeed detect diseases ranging from cancers to eye diseases as accurately as health professionals," said Denniston.

"But it's important to note that AI did not substantially out-perform human diagnosis."

Using data from 14 studies, the researchers found that deep learning algorithms correctly detected disease in 87% of cases, compared to 86% for healthcare professionals.

AI was also able to correctly identify those patients free from disease in 93% of cases, compared to 91% for healthcare professionals.
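
Those two pairs of figures are, respectively, pooled sensitivity (detecting disease when present) and specificity (clearing patients who are disease-free). As a quick illustration, both can be computed from a confusion matrix; the counts below are hypothetical, chosen only to reproduce the quoted deep-learning percentages, and are not the studies' actual data:

```python
# Sensitivity and specificity from confusion-matrix counts.

def sensitivity(tp, fn):
    """Share of diseased patients correctly flagged (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Share of healthy patients correctly cleared (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical counts for 100 diseased and 100 healthy patients:
tp, fn = 87, 13   # diseased: correctly detected vs missed
tn, fp = 93, 7    # healthy: correctly cleared vs false alarms

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 87%
print(f"specificity = {specificity(tn, fp):.0%}")  # 93%
```

Reporting both numbers matters: a test can trivially reach 100% sensitivity by flagging everyone, at the cost of 0% specificity.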

While these results are promising, the researchers say better research and reporting are needed to improve our knowledge of the true power of deep learning in healthcare settings.

This will involve better study design, including the testing of AI in situations that are the same as those that healthcare professionals work in.

"Evidence on how AI algorithms will change patient outcomes needs to come from comparisons with alternative diagnostic tests in randomized controlled trials," said Livia Faes, from Moorfields Eye Hospital, London, in a statement.

"So far, there are hardly any such trials where diagnostic decisions made by an AI algorithm are acted upon to see what then happens to outcomes which really matter to patients, like timely treatment, time to discharge from hospital, or even survival rates."

Experts hailed the review while emphasizing the need for further research.

"The big caveat is, in my opinion, that the story is not 'AI may be as good as health professionals', but that 'the general standard of evaluating performance of AI is shoddy,'" said Franz Kiraly of University College London.

Nils Hammerla of Babylon Healthcare, a company that says it uses AI technology to improve the affordability and accessibility of healthcare, believes more work is needed before AI can reach its full potential.

"Machine learning can have a massive impact on problems in healthcare, big and small, but unless we can convince clinicians and the public of its safety and ability then it won't be much use to anybody," he said.

The global market for AI in healthcare is surging and is expected to rise from $1.3 billion in 2019 to $10 billion by 2024, according to investment bank Morgan Stanley.

Hospitals around the world are already making use of the technology, including Moorfields Eye Hospital in London.

Doctors are able to use an algorithm developed by DeepMind, a UK-based AI research center owned by Google, to return a detailed diagnosis in around 30 seconds using Optical Coherence Tomography (OCT) scans.

Another AI technology, called DeepGestalt, outperformed clinicians in identifying a range of syndromes in three trials and could add significant value in personalized care.

Read more from the original source:

AI may be as effective as medical specialists at diagnosing disease - CNN
