Why Neuroscience Is the Key To Innovation in AI – Singularity Hub

The future of AI lies in neuroscience.

So says Google DeepMind's founder Demis Hassabis in a review paper published last week in the prestigious journal Neuron.

Hassabis is no stranger to either field. Armed with a PhD in neuroscience, the computer maverick launched London-based DeepMind to recreate intelligence in silicon. In 2014, Google snapped up the company for over $500 million.

It's money well spent. Last year, DeepMind's AlphaGo wiped the floor with its human competitors in a series of Go challenges around the globe. Working with OpenAI, the non-profit AI research institution backed by Elon Musk, the company is steadily working toward machines with higher reasoning capabilities than ever before.

The company's secret sauce? Neuroscience.

Baked into every DeepMind AI are concepts and ideas first discovered in our own brains. Deep learning and reinforcement learning, two pillars of contemporary AI, both loosely translate biological neuronal communication into formal mathematics.

The results, as exemplified by AlphaGo, are dramatic. But Hassabis argues that it's not enough.

As powerful as today's AIs are, each one is limited in the scope of what it can do. The goal is to build general AI with the ability to think, reason, and learn flexibly and rapidly; AIs that can intuit about the real world and imagine better ones.

To get there, says Hassabis, we need to scrutinize more closely the inner workings of the human mind, the only proof we have that such an intelligent system is even possible.

"Identifying a common language between the two fields will create a virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances," Hassabis and colleagues write.

The bar is high for AI researchers striving to bust through the limits of contemporary AI.

Depending on their specific tasks, machine learning algorithms are set up with specific mathematical structures. Through millions of examples, artificial neural networks learn to fine-tune the strength of their connections until they reach a state that lets them complete the task with high accuracy, be it identifying faces or translating languages.

Because each algorithm is highly tailored to the task at hand, learning a new task often erases the established connections. This leads to catastrophic forgetting: as the AI learns the new task, it completely overwrites the previous one.
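The effect is easy to demonstrate. Below is a toy sketch, invented purely for illustration (not DeepMind's code): a one-weight linear model masters task A, but subsequent training on task B overwrites it, and performance on task A collapses.

```python
# Toy illustration of catastrophic forgetting: a single-weight linear
# model trained first on task A (y = 2x), then on task B (y = -3x).
def train(w, data, lr=0.1, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient step on squared error
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (-1.0, 0.5, 1.0)]   # task A: y = 2x
task_b = [(x, -3.0 * x) for x in (-1.0, 0.5, 1.0)]  # task B: y = -3x

w = train(0.0, task_a)
err_a_before = mse(w, task_a)  # tiny: task A has been learned
w = train(w, task_b)           # retraining on task B...
err_a_after = mse(w, task_a)   # ...erases task A: error on A is now large
```

The same weight cannot satisfy both tasks, so fitting the second destroys the first; real networks have many weights, but highly tailored ones suffer an analogous fate.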

The dilemma of continuous learning is just one challenge. Others are even less defined but arguably more crucial for building the flexible, inventive minds we cherish.

Embodied cognition is a big one. As Hassabis explains, it's the ability to build knowledge from interacting with the world through sensory and motor experiences, and to create abstract thought from there.

It's the sort of good old-fashioned common sense that we humans have, an intuition about the world that's hard to describe but extremely useful for the daily problems we face.

Even harder to program are traits like imagination. That's where AIs limited to one specific task really fail, says Hassabis. Imagination and innovation rely on models we've already built about our world, and on extrapolating new scenarios from them. They're hugely powerful planning tools, but research into these capabilities for AI is still in its infancy.

It's actually not widely appreciated among AI researchers that many of today's pivotal machine learning algorithms come from research into animal learning, says Hassabis.

An example: recent findings in neuroscience show that the hippocampus, a seahorse-shaped structure that acts as a hub for encoding memory, replays those experiences in fast-forward during rest and sleep.

This offline replay allows the brain to learn anew from successes or failures that occurred in the past, says Hassabis.

AI researchers seized on the idea and implemented a rudimentary version in an algorithm that combined deep learning and reinforcement learning. The result is powerful neural networks that learn from experience: they compare current situations with previous events stored in memory, and take actions that previously led to reward.

These agents show striking gains in performance over traditional deep learning algorithms. They're also great at learning on the fly: rather than needing millions of examples, they need just a handful.
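The replay idea behind these agents can be sketched as a simple buffer of stored transitions that the learner samples from at random. The class and its structure here are illustrative, loosely modeled on the replay memories used in deep reinforcement learning, not any specific DeepMind implementation.

```python
import random
from collections import deque

# A minimal experience-replay buffer: store past transitions, then
# learn from randomly "replayed" batches rather than only the latest step.
class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # old memories fall off the end

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random batches break the correlation between consecutive
        # experiences, which stabilizes learning.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer()
for step in range(100):
    buf.add(state=step, action=step % 4, reward=1.0, next_state=step + 1)
batch = buf.sample(32)  # a random "replay" of past experience
```

Sampling random batches rather than learning only from the most recent experience is the software analogue of the hippocampus replaying past episodes offline.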

Similarly, neuroscience has been a fruitful source of inspiration for other advancements in AI, including algorithms equipped with a "mental sketchpad" that allows them to plan through convoluted problems more efficiently.

But the best is yet to come.

The advent of brain imaging tools and genetic bioengineering is offering an unprecedented look at how biological neural networks organize and combine to tackle problems.

As neuroscientists work to solve the neural code, the basic computations that support brain function, they are handing AI researchers an expanding toolbox to tinker with.

One area where AIs can benefit from the brain is our knowledge of core concepts that relate to the physical world: spaces, numbers, objects, and so on. Like mental Legos, these concepts form the basic building blocks from which we can construct mental models that guide inferences and predictions about the world.

We've already begun exploring ideas to address the challenge, says Hassabis. Studies with humans show that we decompose sensory information into individual objects and relations. When implemented in code, this approach has already led to human-level performance on challenging reasoning tasks.

Then there's transfer learning, the ability that takes AIs from one-trick ponies to flexible thinkers capable of tackling any problem. One method, called progressive networks, captures some of the basic principles of transfer learning and was successfully used to train a real robot arm based on simulations.

Intriguingly, these networks resemble a computational model of how the brain learns sequential tasks, says Hassabis.

The problem is that neuroscience hasn't figured out how humans and animals achieve high-level knowledge transfer. It's possible that the brain extracts abstract knowledge structures and how they relate to one another, but so far there's no direct evidence to support this kind of coding.

Without doubt, AIs have a lot to learn from the human brain. But the benefits are reciprocal. Modern neuroscience, for all its powerful imaging tools and optogenetics, has only just begun unraveling how neural networks support higher intelligence.

Neuroscientists often have only quite vague notions of the mechanisms that underlie the concepts they study, says Hassabis. Because AI research relies on stringent mathematics, the field could offer a way to clarify those vague concepts into testable hypotheses.

Of course, it's unlikely that AI and the brain will always work the same way. The two fields tackle intelligence from dramatically different angles: neuroscience asks how the brain works and what the underlying biological principles are; AI is more utilitarian and free from the constraints of evolution.

But we can think of AI as applied (rather than theoretical) computational neuroscience, says Hassabis, and there's a lot to look forward to.

"Distilling intelligence into algorithms and comparing it to the human brain may yield insights into some of the deepest and most enduring mysteries of the mind," he writes.

Think creativity, dreams, imagination, and, perhaps one day, even consciousness.


What Is The Artificial Intelligence Revolution And Why Does It Matter To Your Business? – Forbes

As a species, humanity has witnessed three previous industrial revolutions: first came steam/water power, followed by electricity, then computing. Now, we're in the midst of a fourth industrial revolution, one driven by artificial intelligence and big data.


I like to refer to this as the "Intelligence Revolution." But whatever we call it (the fourth industrial revolution, Industry 4.0, or the Intelligence Revolution), one thing is clear: this latest revolution is going to transform our world, just as the three previous industrial revolutions did.

What makes AI so impactful, and why now?

AI gives intelligent machines (be they computers, robots, drones, or whatever) the ability to think and act in a way that previously only humans could. This means they can interpret the world around them, digest and learn from information, make decisions based on what they've learned, and then take appropriate action, often without human intervention. It's this ability to learn from and act upon data that is so critical to the Intelligence Revolution, especially when you consider the sheer volume of data that surrounds us today. AI needs data, and lots of it, in order to learn and make smart decisions. This gives us a clue as to why the Intelligence Revolution is happening now.

After all, AI isn't a new concept. The idea of creating intelligent machines has been around for decades. So why is AI suddenly so transformative? The answer to that question is two-fold:

First, we have more data than ever before. Almost everything we do (both online and offline) creates data. Thanks to the increasing digitization of our world, we now have access to more data than ever before, which means AI has been able to grow much smarter, faster, and more accurate in a very short space of time. In other words, the more data intelligent machines have access to, the faster they can learn, and the more accurate they become at interpreting the information. As a very simple example, think of Spotify recommendations. The more music (or podcasts) you listen to via Spotify, the better able Spotify is to recommend other content that you might enjoy. Netflix and Amazon recommendations work on the same principle, of course.

Second, impressive leaps in computing power make it possible to process and make sense of all that data. Thanks to advances like cloud computing and distributed computing, we now have the ability to store, process, and analyze data on an unprecedented scale. Without this, data would be worthless.

What the Intelligence Revolution means for your business

I guarantee your business is going to have to get smarter. In fact, every business is going to have to get smarter, from small startups to global corporations, from digital-native companies to more traditional businesses. Organizations of all shapes and sizes will be impacted by the Intelligence Revolution.

Take a seemingly traditional sector like farming. Agriculture is undergoing huge changes, in which technology is being used to intelligently plan what crops to plant, where and when, in order to maximize harvests and run more efficient farms. Data and AI can help farmers monitor soil and weather conditions, and the health of crops. Data is even being gathered from farming equipment, in order to improve the efficiency of machine maintenance. Intelligent machines are being developed that can identify and delicately pick soft ripe fruits, sort cucumbers, and pinpoint pests and diseases. The image of a bucolic, traditional farm is almost a thing of the past. Farms that refuse to evolve risk being left behind.

This is the impact of the Intelligence Revolution. All industries are evolving rapidly. Innovation and change are the new norm. Those who can't harness AI and data to improve their business, whatever that business is, will struggle to compete.

Just as in each of the previous industrial revolutions, the Intelligence Revolution will utterly transform the way we do business. For your company, this may mean you have to rethink the way you create products and bring them to market, rethink your service offering, rethink your everyday business processes, or perhaps even rethink your entire business model.

Forget the good vs bad AI debate

In my experience, people fall into one of two camps when it comes to AI. Some are excited at the prospect of a better society, in which intelligent machines help to solve humanity's biggest challenges, make the world a better place, and generally make our everyday lives easier. Then there are those who think AI heralds the beginning of the end, the dawning of a new era in which intelligent machines supersede humans as the dominant lifeform on Earth.

Personally, I sit somewhere in the middle. I'm certainly fascinated and amazed by the incredible things that technology can achieve. But I'm also nervous about the implications, particularly the potential for AI to be used in unethical, nefarious ways.

But in a way, the debate is pointless. Whether you're a fan of AI or not, the Intelligence Revolution is coming your way. Technology is only going in one direction: forwards, into an ever more intelligent future. There's no going back.

That's not to say we shouldn't consider the implications of AI or work hard to ensure AI is used in an ethical, fair way, one that benefits society as well as the bottom line. Of course, we should do that. But it's important to understand that, however you feel about it, AI cannot be ignored. Every business leader needs to come to terms with this fact and take action to prepare their company accordingly. This means working out how and where AI will make the biggest difference to your business, and developing a robust AI strategy that ensures AI delivers maximum value.

AI is going to impact businesses of all shapes and sizes, across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.


Gov.UK pops open tin of AI and robotics research cash – The Register

The UK government's long-promised Industrial Strategy Challenge Fund is open for business.

Jumping swiftly on the AI bandwagon, the first lump of cash to be awarded through the multimillion-pound fund will be for robotics and artificial intelligence.

The fund, announced back in November 2016, was conceived as part of wider plans to demonstrate that the government was taking industrial strategy seriously after former business secretary Sajid Javid, who favoured an industrial "approach", was booted out in Prime Minister Theresa May's cabinet reshuffle.

The idea is to make researchers and businesses work together to tackle major industrial challenges, and the areas of focus were fleshed out in the 2017 Spring Budget.

These six areas of investment include electric vehicles, aerospace materials, and satellites.

Robotics and AI are the focus of two areas, with £93m on offer for systems that can be used in extreme environments for offshore energy, space, and deep mining, and £38m for AI and control systems for driverless cars.

The first three funding rounds to open are in the robotics and AI area, with the biggest chunk, £42m, going to work that will speed up the pace of fundamental research.

There is £10m available for R&D carried out with industry, which the government said must promise a "step-change in capabilities" for the use of robotics and AI in extreme environments.

A further £6m is for applicants who want to test the technical feasibility of specific technologies, systems or subsystems.

A second phase for experimental developments of fully integrated systems will run next year, but you have to apply to this round to be considered to lead a project in the second.

The basic research fund is being managed by the Engineering and Physical Sciences Research Council, while the more industry-focused ones are being run by innovation agency Innovate UK.


How AI will become omnipresent – VentureBeat

The resurgence of artificial intelligence in recent years has been fueled by both the advent of cheap, available mass processing capacity and breakthroughs in AI algorithms that allow them to scale and tackle more complex problems. Interestingly, this recent trend is reminiscent of the personal computing revolution of the '80s, when cheaper and more available computing became a catalyst for mass computerization of numerous industries. Much like AI today, computers and computerization felt cutting edge and new, so companies were setting up computing departments and computerization task forces. By the standards of those days, we are all computer specialists today.

Adoption of computers didn't come about overnight. Decades ago, there was high demand for computerization, but its implications for each industry were not clear. People sensed computers were important but weren't 100 percent sure in what way. We had to go through a whole process of development and discovery, and, as a result of computer experts working hand in hand with domain experts over the course of 15 to 20 years, computers and specialized software were developed to suit different needs.

We're following a similar path with AI. We're now at the point where AI is often siloed in specialized departments and where C-suite players intuit how important AI will be but might not be sure how to approach it. Common questions today include "What is AI?" and "How can it help my business?"

Let's look at online content first, specifically website optimization. Most people now are familiar with conversion rate optimization (CRO), where site operators try to maximize conversions by testing new ideas for design, messaging, user experience, and more. AI can make this process more effective by orders of magnitude.

We need to figure out how we're judging the AI's solutions and define the world in which it operates. For this example, we judge success by increased conversions (and we can choose whether that means leads or sales or whatever) and define the world as a particular website and the changes the AI can make to it (fonts, designs, colors, etc.). We can give the AI information like changes to try (dozens of messages and design ideas), as well as the ability to determine browser type or logged-in status so the AI can also start segmenting users.

What happens with this approach can be staggering. The AI can find compelling combinations of designs and the audiences those designs resonate with. It does this by leveraging genetic algorithms, effectively breeding fitter and fitter generations of designs whose children convert more effectively, and repeating the process as the AI converges toward a more optimal configuration.
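As a rough illustration of what "breeding" designs means in code, here is a toy genetic algorithm. The design traits, the fixed "target" design, and the fitness function standing in for measured conversion rate are all invented for this sketch; a real system would score designs against live traffic.

```python
import random

random.seed(0)  # reproducible runs for this sketch

# Each "design" is a choice of font, color, and headline.
FONTS = ["serif", "sans", "mono"]
COLORS = ["blue", "green", "red"]
HEADLINES = ["Buy now", "Learn more", "Get started"]

def random_design():
    return (random.choice(FONTS), random.choice(COLORS), random.choice(HEADLINES))

def fitness(design):
    # Stand-in for observed conversion rate: counts traits matching a
    # hypothetical best-converting design. A live system measures users.
    target = ("sans", "green", "Get started")
    return sum(trait == best for trait, best in zip(design, target))

def crossover(parent_a, parent_b):
    # Each child trait is inherited from one parent at random.
    return tuple(random.choice(pair) for pair in zip(parent_a, parent_b))

population = [random_design() for _ in range(20)]
for _ in range(30):  # breed fitter and fitter generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest half
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
```

Selection keeps the better-converting half each generation, so the best design never gets worse; production systems typically add mutation so new traits can still appear.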

It's worth highlighting an aspect of this approach that fits a general definition of AI: it's autonomous. The operator sets parameters and goals, but the AI decides the combination of ideas, always trying to find a better answer and better results against that goal.

There are many more such examples of successful AI-enablement in diverse industries, ranging from finance and trading to health care and even agriculture. In all cases, some form of the steps noted above needs to be taken, and these decisions cannot be made in isolation. This is a collaborative process that requires domain experts and AI practitioners working closely together.

But let's get back to the beginning here: What is the essence of this AI? Unfortunately, I do not have an easy answer, and I sincerely doubt that there is one. For one thing, the definition seems to have changed through time, and the expectation keeps exceeding the state of the art. Rather than coming up with a strict definition, I think it's actually more valuable to look at some examples to show how muddy the parameters can really be.

Is keyword search on Google considered AI? You might think that the technology behind web search is pretty straightforward, but even all the way back in the late '90s, search engines made use of the A* tree search algorithm, a technique that was taught in AI textbooks.
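For readers who haven't met it, A* is a best-first search that ranks candidate steps by cost-so-far plus an optimistic estimate of the remaining cost. Here is a compact, self-contained sketch on a toy grid (purely illustrative; it says nothing about how any search engine actually uses the technique):

```python
import heapq

# A* shortest path on a 4-connected grid; 0 = open cell, 1 = wall.
# Heuristic: Manhattan distance, which never overestimates here.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(node):
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g  # length of a shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None  # goal unreachable

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
steps = a_star(grid, (0, 0), (2, 0))  # must detour around the wall
```

Because the heuristic steers the search toward the goal, A* usually expands far fewer states than blind search, which is why it earned its place in the AI textbooks the article mentions.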

How about Siri? Well, surely a conversational system is an example of AI. Or is it? In the case of Siri, many attribute intelligence to its humor in answering questions like the meaning of life, or being able to tell a joke. The reality is that this aspect of Siri is based simply on a randomized look-up table. In other words, the aspects people find most lifelike are actually just engineers programming one-liners.

What about self-driving cars? Here too, most of what is being tested on the roads today, as well as the self-driving car that won the original DARPA challenge, was almost completely engineered, used sensors instead of AI, and did not have any learning capabilities.

In other words, it's hard to tell whether an algorithm itself can be defined as AI or not, and I think that's not truly all that important. What matters is whether the AI improves upon one or more measures as defined by domain experts; whether it models and learns the domain it operates in and is able to adapt to new circumstances and expectations. It needs to function autonomously and get better over time, no matter what we call it or how pure it is from a definitional perspective.

The real thing we need to envision is a world where AI, like computers and the internet, is omnipresent. Because that world is coming. It's a world where AIs design themselves through evolved neural networks (this is already underway and showing promise in achieving state-of-the-art results on benchmark problems). It is, in short, a vastly different world than the one we live in today. The definition of AI will continue to change. It will continue to become more ambitious. It will grow. And just like computerization, AI enablement will only be fully achieved once all of us can be considered AI experts by today's standards.

And that day is coming.

Babak Hodjat is the cofounder and CEO of Sentient Technologies, an AI platform.


What Happens When AI is Used to Set Grades? – Harvard Business Review

Executive Summary

In 2020, with high school exams canceled in many countries, the International Baccalaureate Organization (IBO) deployed an AI to determine final grades based on current and historical data. When the results came in, many scores did not correlate with predicted grades as they had in previous years, prompting many people to appeal their grades. Unfortunately, the appeals system had not been changed from previous years, when it was assumed that students would write examination papers. Since university place offers in many countries are contingent on students achieving predicted grades, many students have been denied places at their universities of choice, which has resulted in a great deal of anger. This experience highlights the risks of delegating life-altering decisions to AI without considering how apparently anomalous decisions can be appealed and, if necessary, changed.

How would you feel if an algorithm determined where your child went to college?

This year Covid-19 locked down millions of high school seniors and governments around the world canceled year-end graduation exams, forcing examining boards everywhere to consider other ways of setting the final grades that would largely determine the future of the class of 2020. One of these boards, the International Baccalaureate Organization (IBO), opted to use artificial intelligence (AI) to help set overall scores for high-school graduates based on students' past work and other historic data. (We use the term AI broadly to mean a computer program that uses data to execute a task that humans typically perform, in this case processing student scores.)

The experiment was not a success, and thousands of unhappy students and parents have since launched a furious protest campaign. So, what went wrong and what does the experience tell us about the challenges that come with AI-enabled solutions?

The IB is a rigorous and prestigious high-school certificate and diploma program taught by some of the world's best schools. It opens doors to the world's leading universities for talented and hard-working students in over 150 countries.

In a normal year, final grades are determined by coursework produced by the students and a final examination administered and corrected by the IBO directly. The coursework counts for some 20-30% of the overall final grade, and the exam accounts for the remainder. Prior to the exam, teachers provide predicted grades, which allow universities to offer places conditional on the candidate's final grades meeting the predictions. The IBO also arranges independent grading of samples of each student's coursework in order to discourage grade inflation by schools.
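As a back-of-the-envelope illustration, the weighting described above amounts to a simple blend of the two components. The weight and the scores below are hypothetical, chosen only to show the arithmetic, not the IBO's actual formula.

```python
# Illustrative weighted blend: coursework counts for 20-30% of the
# final grade, the exam for the rest. Weights and scores are invented.
def final_grade(coursework, exam, coursework_weight=0.25):
    assert 0.20 <= coursework_weight <= 0.30, "weight outside stated range"
    return coursework_weight * coursework + (1 - coursework_weight) * exam

# e.g. strong coursework (6) with a weaker exam (5) on a 1-7 subject scale
g = final_grade(coursework=6.0, exam=5.0)
```

Note how the exam dominates the blend at a 75-80% weight, which is why replacing it with a model's prediction changed outcomes so dramatically.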

The process is generally considered to be a rigorous and well-regarded assessment protocol. The IBO has collected a substantial amount of data about each subject and school: hundreds of thousands of data points, in some cases going back over 50 years. Significantly, the relationship between predicted and final grades has been tight. At leading IB schools, over 90% of grades have been equal to those predicted, and over 95% of total scores have been within a point of those predicted (total scores are set on a scale of one to 45).

In the spring of 2020, IBO had to decide whether to allow the exams to proceed or cancel them and award grades some other way. Allowing exams risked the safety of students and teachers, and could create fairness issues if, for instance, students in some countries were allowed to write the exams at home, while in others they had to sit exams at school.

Canceling the exams raised the question of how to assign grades, and that's when the IBO turned to AI. Using its trove of historical data about students' course work and predicted grades, as well as data about the actual grades obtained at exams in previous years, the IBO decided to build a model to calculate an overall score for each student, in a sense predicting what the 2020 students would have gotten at the exams. The model-building was outsourced to a subcontractor, undisclosed at the time of publishing this article.

A crisis erupted when the results came out in early July 2020. Tens of thousands of students all over the world received grades that not only deviated substantially from their predicted grades but did so in unexplainable ways. Some 24,000, or more than 15% of all 2020 IB diploma recipients, have since signed the protest. The IBO's social media pages are flooded with furious comments. Several governments have also launched formal investigations, and numerous lawsuits are in preparation, some for data abuse under the EU's GDPR. What's more, schools, students, and families involved in other high school programs that have also adopted AI solutions are raising very similar concerns, notably in the UK, where A-level results are due out on August 13th, 2020.

As the outrage has spread, one critical and very practical question has been consistently raised by frustrated students and parents: How can they appeal the grades?

In normal years, the appeals process was well-defined and consisted of several levels, from the re-marking of an individual student's exam to a review of marks for coursework by subject at a given school. The former means having another look at a student's work, a natural first step when the grades were based on such work. The latter refers to an adjustment that the IBO may apply to a school's grading of coursework should a sample of work independently assessed by the IBO produce substantially different grades, on average, from those awarded by the school. The appeal process was well-understood and produced consistent results, but was not used frequently, largely because, as noted, there were few surprises when the final grades came out.

This year, the IB schools initially treated appeals as requests for re-marks of student work. But this poses a fundamental challenge: the graded papers were not in dispute; it was the AI assessment that was called into question. The AI did not actually correct any papers; it only produced final grades based on the data it was fed, which included teacher-corrected coursework and the predicted grades. Since the specifics of the program are not disclosed, all people can see are the results, many of which were highly anomalous, with final scores in some cases well below the marks of the teacher-graded coursework of the students involved. Unsurprisingly, the IBO's appeals approach has not met with success; it is in no way aligned with the way in which the AI created the grades.

The main lesson coming out of this experience is that any organization that decides to use an AI to produce an outcome as critical and sensitive as a high-school grade marking 12 years of a student's work needs to be very clear about how the outcomes are produced and how they can be appealed in the event that they appear anomalous or unexpected. From the outside, it looks as though the IBO may have simply plugged the AI into the IB system to replace the exams and then assumed that the rest of the system, in particular the appeals process, could work as before.

So what sort of appeals process should the IBO have designed? First of all, the overall process of scoring and, more important, appealing the decision should be easy to explain, so that people understand what each next step will be. Note that this is not about explaining the AI black box, as current regulators do when arguing about the need for explainable AI. That would be almost impossible in many cases, since understanding the programming used in an AI generally requires a high level of technical sophistication. Rather, it is about making sure that people understand what information is used in assessing grades and what the steps are in the appeal process itself. So what the IBO could have done instead was offer appellants the right to a human-led re-evaluation of anomalous grades, specify what input data the appeal committee would focus on in reanalyzing the case, and explain how the problem would be fixed.

How the problem would be fixed would depend on whether the problem turned out to be student specific, school specific, or subject specific; a single students appeal might well affect other students depending on what components of the AI the appeal may relate to.

If, for example, a problem with an individual student's grade seems to be driven by school-level data (say, a number of students studying in that same school have had final grades that differed markedly from their predicted grades), then the appeal process would look at the grades of all students in that school. If needed, the AI algorithm itself would be adjusted for the school in question without affecting other schools, making sure the new scores provided by the AI are consistent across all schools while remaining the same for all but the one school. In contrast, if the problem is linked to factors specific to the student, then the analysis would focus on identifying why the AI produced an anomalous outcome for that student and, if needed, re-score that student and any other student whose grades were affected in the same way.

Of course, much of this would be true of any grading process: one student's anomaly might signal a more systematic failing, whether or not an AI is engaged. But the way in which the appeal process is designed needs to reflect the different ways in which humans and machines make decisions, the specific design of the AI used, and how the decisions can be corrected.

For example, because AI awards grades on the basis of its model of relationships between various input data, there should generally be no need to look at the actual work of the students concerned, and corrections could be made to all affected students (those with similar input data characteristics) all at once. In fact, in many ways appealing an AI grade could be an easier process than appealing a traditional exam-based grade.

What's more, with an AI system, an appeals process along these lines would enable continuous improvement of the AI. Had the IBO put such a system in place, the results of the appeals would have produced feedback data that could have updated the model for future use, in the event, say, that examinations are again cancelled next year.

***

The IBO's experience obviously has lessons for deploying AI in many contexts, from approving credit to job searches and policing. Decisions in all these cases can, as with the IB, have life-altering consequences for the people involved. Given the stakes, it is inevitable that disputes over the outcomes will occur. Including AI in the decision-making process without carefully thinking through an appeals process, and linking that process to the algorithm design itself, will likely end not only in new crises but potentially in a rejection of AI-enabled solutions in general. And that deprives us all of the potential for AI, when combined with humans, to dramatically improve the quality of decision-making.

Disclosure: One of the authors of this article is the parent of a student completing the IB program this year.

See more here:

What Happens When AI is Used to Set Grades? - Harvard Business Review

New Microsoft Report Claims U.K. Is Behind The Rest Of The World On AI – Forbes

Organizations currently using AI outperform those that don't by 11.5%. Despite this, only 24% have an AI strategy in place.

A new report, unveiled October 1 by Microsoft UK, claims that British organizations risk being overtaken by their global counterparts unless the use of artificial intelligence (AI) technology is accelerated.

The report, conducted by YouGov in partnership with Goldsmiths, University of London, surveyed more than 1,000 business leaders and 4,000 employees, and includes interviews with leading industry experts from organizations such as M&S, NatWest, Renault F1 Team, Lloyds Banking Group and the NHS. Its findings demonstrate that organizations currently using AI outperform those that don't by 11.5%, but despite this, only 24% have an AI strategy in place.

The U.K. is also at risk of falling further behind the likes of the U.S. and China if attitudes to AI remain the same, given that 74% of the nation's business leaders doubt the U.K. even has the socio-economic structures in place to lead in AI on the global stage.

Cindy Rose, CEO of Microsoft UK, had a clear message for organizations that might be slow on the uptake of AI:

U.K. businesses and public sector organisations that forgo or delay implementing AI solutions risk missing the boat on driving down costs, increasing their competitive advantage and empowering their workers. Given this moment, where both U.K. leadership and competitiveness on the global stage is more vital than ever, there is no doubt that fully embracing AI-led digital transformation is a critical success factor for U.K. businesses, government and society.

AI In Healthcare

Microsoft's report found that U.K. healthcare is actually at the forefront of AI innovation, with almost half (46%) of organizations reporting that they use AI. Last year saw an increase of 8%, with the biggest leaps made in research, robotic process automation (RPA) and other automation, as well as voice recognition and touchscreen technology. That said, AI is still primarily restricted to small, localized pilot projects rather than big contracts.

A robotic arm for brain surgery is seen at the 2019 World Robot Conference in Beijing on August 20, 2019. In healthcare, the biggest AI leaps have been made in research, robotic process automation (RPA) and other automation, as well as voice recognition and touchscreen technology.

Progressing this experimentation to full implementation will certainly require a culture shift, and the report identified some interesting challenges, namely:

Clearly, there are two main areas of significant improvement that organizations must focus on to increase the uptake and value of AI: communication between staff and universal understanding of AI amongst the workforce.

I spoke to Clare Barclay, chief operating officer at Microsoft UK, about how Microsoft intends to address these conclusions; plans are already underway for an education program called the AI Business School:

"We have developed the AI Business School, tailored for healthcare, to train healthcare professionals in a non-confrontational way. We're thinking about how we truly help leaders understand the technology, the culture, the strategy and the ethical implications. Some programs will be tailored for a specific customer, like a hospital, and outside of that we will be running a set of programmes in-store and at other locations across the U.K. Leaders will hear from other healthcare professionals, startups, technology providers etc. so they can understand and have meaningful conversations about AI. We've also committed to training 30,000 front-line staff."

Microsoft has committed to training 30,000 front-line staff and leaders at its AI Business School.

Microsoft's healthcare industry lead, Stephen Docherty, is focused on ensuring practical benefits arise from this report. He was previously Chief Information Officer (CIO) at South London and Maudsley NHS Trust (SLAM) and knows the issues at the coalface only too well, as well as the benefits AI could bring if implemented correctly.

On the AI Business School, he says it will be important in enabling all healthcare workers to have conversations that lead to change, "talking in the language of value propositions, culture, data and ethics. Being from the front line myself, I can see huge value in this if executed well."

Overall, for Docherty, the report was positive as it showed that people are beginning to use AI, but he's now keen to see the advantages at scale:

"The biggest thing for me is around clinician time. When I was a CIO, I saw people having to feed compliance information into multiple systems, using multiple logins, getting frustrated and burning out. Eric Topol talked about giving people in healthcare the gift of time, and AI can really make people's daily lives much better. But to make the most impact, everyone needs to be brought up to speed on AI; you need a clear digital strategy and then a focus on adoption."

Barclay and Docherty both describe how East Suffolk and North Essex NHS Foundation Trust started using AI to reduce its admin burden. There was a sense of fear among the workforce that the technology would displace jobs; however, it took a significant amount of work away from healthcare professionals, saving staff 4,500 hours in the past 12 months. Importantly, this meant eyes off paperwork and back onto patients for that time. Barclay's favorite part of the story is that the AI system is now embraced as part of the team and has even been humanized with a name. Quirky, perhaps, but this does point to the importance of creating the right culture whilst implementing technology.

Dr Yeshwanth Pulijala is the founder of Scalpel, an emerging healthtech startup in the U.K. that uses A.I. (computer vision and data analytics) to reduce preventable surgical errors and improve operating room efficiency. He agrees with the report, has first-hand experience of the disparity in knowledge and experience of AI and has a lesson for AI companies in the healthcare space:

"In my experience, the best way to achieve adoption of AI technology is to introduce frontline clinicians, patients and policymakers in the very early stages of product development. I've only found a few hospitals in the U.K. so far that really understand the potential of AI at its core. They are our torchbearers, and we're piloting at six such hospitals to demonstrate improved levels of patient safety." On the AI Business School, Pulijala says it would be "a great way to scale this model."

To be effective, reports need to lead to action. I've seen, read and even written recommendations that go unnoticed, doing little more than collecting dust on shelves. It's now up to the relevant teams to deliver, and it's refreshing to hear Docherty's front-line, execution-focussed attitude at Microsoft about seeing them through to action:

"We've talked about it a lot. It's time to get on with it now."

Follow this link:

New Microsoft Report Claims U.K. Is Behind The Rest Of The World On AI - Forbes

Google’s AI subsidiary DeepMind is partnering with another UK hospital – The Verge

Google's AI subsidiary DeepMind is continuing to partner with new UK hospitals, announcing today that its Streams app will be used by the Taunton and Somerset NHS Foundation Trust. This is the first time that Streams has been introduced outside of London. The app doesn't use artificial intelligence, but sifts data from patients' medical records to warn doctors and nurses about upcoming health problems.

According to a statement from the Taunton and Somerset NHS Foundation Trust, the Streams app will allow clinical staff to view the results of X-rays, scans or blood tests in one place, at the touch of a button. One doctor quoted by the Trust said the app was being used to improve early detection of seriously unwell patients and ensure a very rapid response.

DeepMind's activities in the UK have been criticized in recent months, with a government data advisor warning in May that the company's access to patient medical data had been conducted on an "inappropriate legal basis." These comments were made in reference to an earlier contract DeepMind had with the Royal Free Trust, a collection of hospitals in London. The contract has since been replaced, but the original (and the Royal Free) are still under investigation by the UK data watchdog, the Information Commissioner's Office (ICO).

DeepMind has worked hard to reassure the public that their data is being safely handled, and the company stresses that its parent company Alphabet will not have access to any medical information. However, it's possible that this latest deal will still come under scrutiny, with BBC News reporting that patients will not have the option to opt out of data sharing. This decision, though, is made by individual hospitals rather than DeepMind itself. The Verge has reached out to the Taunton and Somerset Trust to confirm the details of the contract.

Continue reading here:

Google's AI subsidiary DeepMind is partnering with another UK hospital - The Verge

Astro aims to fix your email mess with an AI chatbot – The Verge

Do you want a chatbot to help you manage email overload?

That's the question that Astro has to answer now that it's officially launching on iOS and the Mac. It's the simplest, quickest way to describe what Astro offers today. And when you put it that way, the answer is almost surely no. But Astro has bigger ambitions than just cramming a chat interface into your email. It aims to become the default AI system that can talk to all of your work software. It's starting with email, but your calendar, sales software, task management suite, and all the rest are meant to come next.

But to start, Astro simply presents itself as a straightforward email app that works well with both Exchange and Gmail, precisely the thing that umpteen, if not dozens, of startups have tried before. For Astro, getting a native mail client across Macs and iPhones is just table stakes. But it has to get those stakes right where so many others have not. So let's start there.

Astro is a native email app on Apple's two main computing platforms, and it does what most modern email apps ought to do: it separates your email into two groups (Priority and Other) and allows you to snooze emails until later. It has a unified inbox, reminders, and a swiping interface for triaging email. Astro will also allow attachments from multiple cloud storage services and let you set notifications just for the priority inbox.

Those features alone put it in the right league: too many apps only work on the phone but eschew the desktop, leaving Gmail and Exchange users with a confusing mess of folders.

But what the startup (whose employees have worked at companies like Zimbra, Asana, LinkedIn, and Google) hopes to do is nail the fundamentals and then add in a chatbot that can use machine learning to fix the sort of things other email apps try to handle with buttons and other user interface tricks.

Here's the basic idea: instead of crafting your own filters and carefully denoting VIP contacts, Astro will watch how you use your email, and its chatbot will try to anticipate your needs. Always snooze emails from the family until the evening? Astro will offer to do it automatically. You can tell it to clear out old emails, or remind you to email your boss tomorrow. It can even scan your contact list and map out who knows whom (with some privacy protections) so that when you want to get in touch with somebody, it can see who among your contacts is the best person to make an introduction.

That's the idea, anyway. The implementation isn't quite as elegant as the theory. Astro's bot appears in a separate space from your standard email tasks, gently offering suggestions when it has them, eminently ignorable if you're not interested.

It does attempt to call out specific emails that need your attention. If somebody important (like, say, your boss) is asking for something specific (like, say, that spreadsheet you promised), the chatbot will call it out to you. It will take note of emails you reflexively delete and ask if you just want to unsubscribe from the darn things.

It's gotten tiresome to hear about AI and machine learning from companies both large and small. Even big corporations like Google and Microsoft don't always get it right, so it's fair to take a skeptical stance when asking whether a startup like Astro can pull it off.

An AI for all your work software is a genuinely good idea

But the idea is sound: most modern workplaces have important stuff scattered across multiple software systems: email, calendar, Asana, Trello, FogBugz, SalesForce, Slack, you name it. Increasingly, everybody has to manage a whole bunch of this stuff and find a way to move conversations and ideas from one platform to the next.

So what Astro is promising isn't just a chatbot for email, but an assistant that can eventually handle the cognitive load of making sense of all of those different systems. It's the sort of thing Google is also talking about this week at its Google Cloud Next conference, actually. Whoever figures it out first is going to be a big winner, but nobody's really close at all yet.

That's why Astro is starting small with the easily mockable idea of a chatbot glommed onto an email client. Mock away, but it's a start. And Astro is slightly different from Google, Microsoft, and Apple in this game: it's willing to work across multiple software platforms and be the glue between them rather than try to win it all like G Suite, Office, and iCloud. That strategy worked pretty well for Slack.

See the rest here:

Astro aims to fix your email mess with an AI chatbot - The Verge

Beijing Wants AI to Be Made in China by 2030 – New York Times

A.I. is one of a growing number of disciplines in which experts say China is making quick progress.

Yet it was a foreign feat of A.I. prowess that provided one of the greatest impetuses for the new plan.

The two professors who consulted with the government on A.I. both said that the 2016 defeat of Lee Se-dol, a South Korean master of the board game Go, by Google's AlphaGo had a profound impact on politicians in China. Then in May, Google brought AlphaGo to China, where it defeated the world's top-ranked player, Ke Jie of China. Live video coverage of the event was blocked at the last minute in China.

As a sort of Sputnik moment for China, the professors said, the event paved the way for a new flow of funds into the discipline.

China's ambitions with A.I. range from the anodyne to the dystopian, according to the new plan. It calls for support for everything from agriculture and medicine to manufacturing.

Yet it also calls for the technology to work in concert with the country's homeland security and surveillance efforts. China wants to integrate A.I. into guided missiles, use it to track people on closed-circuit cameras, censor the internet and even predict crimes.

Beijing's interest in the technology has set off alarms within the United States defense establishment. The Defense Department found that Chinese money has been flowing into American A.I. companies, including some of the same ones it says are likely to help the United States military develop future weapons systems.

In a timeline laid out within the new policy, the government expects its companies and research facilities to be at the same level as leading countries like the United States by 2020. Five years later, it calls for breakthroughs in select disciplines within A.I. that will become a key impetus for economic transformation.

In the final stage, by 2030, China will become the world's premier artificial intelligence innovation center, which in turn will foster a new national leadership and establish the key fundamentals for an economic great power.

While the language in Chinese industrial policy can sound stodgy and the targets overly ambitious, Beijing takes its economic planning seriously. Experts say that even if major spending efforts ultimately waste resources, they can also produce results, bolstering technology capabilities with a flood of resources.

Top-level statements like this also work as a signal to local governments and companies across the country.

The new plan formalizes a focus that was widely known in China. Following those cues, a large number of local governments have created special plans and built out research centers to focus on A.I.

Many are spending hundreds of millions of dollars, but some have earmarked even more. In June, the government of Tianjin, an eastern city near Beijing, said it planned to set up a $5 billion fund to support the A.I. industry. It also set up an intelligence industry zone that will sit on more than 20 square kilometers of land.

The initiative is also likely to sweep up private Chinese companies. The country's internet search giant Baidu, which has run an A.I. research center out of Silicon Valley in recent years, announced this year that it would open a new lab in cooperation with the government. The two leaders of that lab have worked on Chinese government programs with military applications.

Follow Paul Mozur on Twitter @paulmozur.

Carolyn Zhang contributed research from Shanghai.

A version of this article appears in print on July 21, 2017, on Page B1 of the New York edition with the headline: China Sets Goal to Lead In Artificial Intelligence.

Excerpt from:

Beijing Wants AI to Be Made in China by 2030 - New York Times

AI analyses X-rays as well as doctors – Medical Xpress

July 4, 2017 Max Gordon. Credit: Stefan Zimmerman

Many jobs, medical and otherwise, might one day be performed using artificial intelligence. According to a new study in Acta Orthopaedica by researchers at Karolinska Institutet in collaboration with the Royal Institute of Technology and Danderyd Hospital in Sweden, self-learning programmes can already find fractures with the same accuracy as orthopaedists.

Assessing radiographs requires a great deal of expertise and time, with the results very much depending on the doctor. However, artificial intelligence (AI) can simplify and standardise the work considerably, according to Max Gordon, assistant consultant in orthopaedics at Danderyd Hospital and researcher at Karolinska Institutet in Sweden, who has now published a study on how radiographs can be read using computers trained in fracture recognition.

"Our study shows that AI networks can make assessments on a par with human specialists, and we hope that we'll be able to achieve even better results with high-res X-ray images," says Dr Gordon.

AI-facilitated image analysis had its major breakthrough in 2012, when the algorithm that astounded the computer world was still three times worse than a human at recognising objects in pictures from the internet. In only three years, it was at human level, and by 2016 it was twice as good. This made Dr Gordon think about how the technique could be used in the fields of orthopaedics and radiograph analysis.

In the present study, the researchers had existing AI image-recognition algorithms go through a total of 256,000 radiographs of hands, wrists and ankles from the Danderyd Hospital archives. The computer was trained to identify fractures on two thirds of the radiographs under the guidance of the researchers and then left to independently analyse the remaining images, which were thus completely new to the AI programme. Two consultants simultaneously analysed the same radiographs.
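The two-thirds training split described above can be sketched as follows. This is a simplified Python illustration under stated assumptions (image IDs standing in for radiographs, a fixed shuffle seed); the study's actual pipeline is not reproduced here.

```python
import random


def split_radiographs(image_ids, train_fraction=2 / 3, seed=0):
    """Shuffle the archive and split it into a training set (used to
    teach the network fracture recognition) and a held-out set of
    images the programme has never seen."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    cut = int(len(ids) * train_fraction)
    return ids[:cut], ids[cut:]


# 256,000 radiographs: roughly two thirds for training, the rest held out.
train_set, test_set = split_radiographs(range(256_000))
```

Holding the remaining third out entirely is what lets the comparison with the two consultants measure genuine generalisation rather than memorisation.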

The team found that the computer and the doctors made equally accurate analyses given the same image resolution, both finding the presence of a fracture in over 80 per cent of the cases.

The AI programme, which is inspired by the learning processes of the human brain, has the potential to be even better at its job if it has access to greater amounts of data. The researchers have therefore begun a follow-up study based on Danderyd Hospital's entire orthopaedic archive of over a million high-resolution radiographs.

"AI can lead to a more uniform classification and a common standard in radiograph analysis," says Dr Gordon. "If we can go back to our digital archives, we'll also be able to do extensive research on survival, the development of disease and work capacity studies that have been impossible to do owing to the amount of data to process."

Explore further: Spondylolisthesis linked to spinous process fractures

More information: "Artificial intelligence for analyzing orthopedic trauma radiographs: Deep learning algorithms - are they on par with humans for diagnosing fractures?", Jakub Olczak, Niklas Fahlberg, Atsuto Maki, Ali Sharif Razavian, Anthony Jilert, André Stark, Olof Sköldenberg, Max Gordon. Acta Orthopaedica, 3 July 2017.


Read this article:

AI analyses X-rays as well as doctors - Medical Xpress

Spending in Artificial Intelligence to accelerate across the public sector due to automation and social distancing compliance needs in response to…

April 9, 2020 - LONDON, UK: Prior to the COVID-19 pandemic, the IDC (International Data Corporation) Worldwide Artificial Intelligence Spending Guide had forecast European artificial intelligence (AI) spending of $10 billion for 2020, and healthy growth at a 33% CAGR through 2023. With the COVID-19 outbreak, IDC expects a variety of changes in spending in 2020. AI solutions deployed in the cloud will experience a strong uptake, showing that companies are looking to deploy intelligence in the cloud to be more efficient and agile.
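As a quick sanity check on those figures, compound annual growth from a $10 billion base at 33% works out as below. This is simple arithmetic on the numbers quoted, not IDC's forecasting model.

```python
def project_spend(base, cagr, years):
    """Compound annual growth: base * (1 + cagr) ** years."""
    return base * (1 + cagr) ** years


# $10B in 2020 growing at a 33% CAGR implies roughly $23.5B by 2023.
spend_2023 = project_spend(10.0, 0.33, 3)
```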

"Following the COVID-19 outbreak, many industries such as transportation and personal and consumer services will be forced to revise their technology investments downwards," said Andrea Minonne, senior research analyst at IDC Customer Insights & Analysis. "On the other hand, AI is a technology that can play a significant role in helping businesses and societies deal with and solve large-scale disruption caused by quarantines and lockdowns. Of all industries, the public sector will experience an acceleration of AI investments. Hospitals are looking at AI to speed up COVID-19 diagnosis and testing and to provide automated remote consultations to patients in self-isolation through chatbots. At the same time, governments will use AI to assess social distancing compliance."

In the IDC report, What is the Impact of COVID-19 on the European IT Market? (IDC #EUR146175020, April 2020) we assessed the impact of COVID-19 across 181 European companies and found that, as of March 23, 16% of European companies believe automation through AI and other emerging technologies can help them minimize the impact of COVID-19. With large scale lockdowns in place, a shortage of workers and supply chain disruptions will drive automation needs across manufacturing.

Applying intelligence to automate processes is a crucial response to the COVID-19 crisis. Automation not only allows European companies to digitally transform, but also lets them make prompt data-driven decisions and has a positive impact on business efficiency. IDC expects a surge in adoption of automated COVID-19 diagnosis in healthcare to speed up diagnosis and save time for both doctors and patients. As the virus spreads quickly, labor shortages in industries where product demand is surging can become a critical problem. For that reason, companies are renovating their hiring processes, applying a mix of intelligent automation and virtualization. Companies will also aim to automate their supply chains, maintain their agility and avoid production bottlenecks, especially in industries with vast supplier networks. With customer service centers becoming severely restricted, automation will be a crucial part of remote customer engagement, and chatbots will help customers in self-isolation get the support they need without having to wait a long time.

"As a short-term response to the COVID-19 crisis, AI can play a crucial part in automating processes and limiting human involvement to a necessary minimum," said Petr Vojtisek, research analyst at IDC Customer Insights & Analysis. "In the longer term, we might observe an increase in AI adoption for companies that otherwise wouldn't consider it, both for competitive and practical reasons."

IDC's Worldwide Semiannual Artificial Intelligence Spending Guide provides guidance on the expected technology opportunity around the AI market across nine regions. Segmented by 32 countries, 19 industries, 27 use cases, and 6 technologies, the guide provides IT vendors with insight into this rapidly growing market and how the market will develop over the coming years.

For IDC's European coverage of COVID-19, click here.

Follow this link:

Spending in Artificial Intelligence to accelerate across the public sector due to automation and social distancing compliance needs in response to...

Infosys eyes robotics, AI and driverless cars for next round of growth – Economic Times

NEW DELHI: Infosys CEO Vishal Sikka may have given a glimpse of his firm's future plans as it looks to score big on newer technologies to ramp up revenue.

Sikka arrived for the earnings briefing in a driverless car, completely developed by the firm's engineering services unit in Mysuru. "Who says we can't build transformative technologies," Sikka tweeted.

"The driverless car is the kind of technology we are strongly focussed on. If you go by our numbers, about 10% of our revenue has come from new technologies, services that did not exist 2 years ago. These are high-growth services and that's where our focus will be," Sikka said.

Sikka said the firm's attempt is to create a pool of thousands of engineers with capability to work on projects in artificial intelligence and tap business opportunities.

"Autonomous driving is something every automobile company will get into, and we are trying to build talent around this," Sikka said.

Read the original here:

Infosys eyes robotics, AI and driverless cars for next round of growth - Economic Times

AI streamlines acoustic ID of beluga whales – GCN.com

AI streamlines acoustic ID of beluga whales

Scientists at the National Oceanic and Atmospheric Administration who study endangered beluga whales in Alaska's Cook Inlet used artificial intelligence to reduce the time they spend on analysis by 93%.

Researchers have acoustically monitored beluga whales in the waterway since 2008, but acoustic data analysis is labor-intensive because automated detection tools are "relatively archaic in our field," Manuel Castellote, a NOAA affiliate scientist, told GCN. "By improving the analysis process, we would provide results sooner, and our research would become more efficient."

The analysis typically gets hung up in the process of validating the data, because detectors pick up any acoustic signal that is similar to a beluga whale's call or whistle. As a result, researchers get many false detections, including noise from vessel propellers, ice friction and even birds at the surface in shallow areas, Castellote said.

A machine learning model that could distinguish between actual whale calls and other sounds would provide highly accurate validation output and replace the effort of a human analyst going through thousands of detections to validate the ones corresponding to beluga, he said.

The researchers used Microsoft AI products to develop a model combining a deep neural network, a convolutional neural network, a deep residual network, and a densely connected convolutional neural network. The resulting detector, an ensemble of these four AI models, is more accurate than each of the independent models, Castellote said.
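The article doesn't say how the four networks' outputs are combined; probability averaging is one common ensembling choice, sketched here in Python with NumPy as an illustration rather than NOAA's actual code. The function name and threshold are assumptions.

```python
import numpy as np


def ensemble_validate(probs_per_model, threshold=0.5):
    """Average each model's probability that a detection is a true
    beluga call, then keep detections whose mean probability clears
    the threshold. Averaging several diverse networks typically beats
    any single one of them."""
    stacked = np.stack(probs_per_model)  # shape: (n_models, n_detections)
    return stacked.mean(axis=0) >= threshold
```

For example, four models scoring two detections at [0.9, 0.1], [0.8, 0.2], [0.7, 0.4] and [0.6, 0.3] yield mean probabilities of 0.75 and 0.25, so the first detection is kept and the second discarded.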

Here's how it works: twice a year, researchers recover acoustic recorders from the seafloor. A semi-automated detector extracts and processes the data, looking for tones in the recordings. It yields thousands, sometimes hundreds of thousands, of detections per dataset.

The team used the collection of recordings with annotated detections -- both actual beluga calls and false positives -- that it has amassed in the past 12 years to train the AI and ML tools.

Now, instead of having a data analyst sit in front of a computer for seven to 14 days to validate all these detections one by one, the unvalidated detection log is used by the ensemble model to check the recordings and validate all the detections in the log in four to five hours, Castellote said. The validated log is then used to generate plots of beluga seasonal presence in each monitored location. These results are useful to inform management decisions.

With the significant time they're saving, researchers can increase the number of recorders they send to the seafloor each season and focus on other aspects of data analysis, such as understanding where belugas feed based on the sounds they make when hunting prey, Castellote said. They can also study human-made noise to identify activity in the area that might harm the whales.

The team is now moving into the second phase of its collaboration with Microsoft, which involves cutting the semi-automated detector out of the process and instead applying ML directly to the sound recordings. The streamlined process will search for signals from raw data, rather than using a detection log to validate pre-detected signals.

"This allows widening the detection process from beluga only to all cetaceans inhabiting Cook Inlet," Castellote said. "Furthermore, it allows incorporating other target signals to be detected and classified, [such as] human-made noise. Once the detection and classification processes are implemented, this approach will allow covering multiple objectives at once in our data analysis."

Castellote's colleague, Erin Moreland, will use AI this spring to monitor other mammals, too, including ice seals and polar bears. A NOAA turboprop airplane outfitted with AI-enabled cameras will fly over the Beaufort Sea, scanning and classifying the imagery to produce a population count that will be ready in hours instead of months, according to a Microsoft blog post.

The work is in line with a larger NOAA push for more AI in research. On Feb. 18, the agency finalized the NOAA Artificial Intelligence Strategy. It lists five goals for using AI, including establishing organizational structures and processes to advance AI agencywide, using AI research in support of NOAA's mission, and accelerating the transition of AI research to applications.

Castellote said the ensemble deep learning model hes using could easily be applied to other acoustic signal research.

"A code module was built to allow retraining the ensemble," he said. "Thus, any other project focused on different species (and soon human-made noise) can adapt the machine learning model to detect and classify signals of interest in their data."
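The retraining idea, fitting the same ensemble to labeled examples of a new target signal, can be sketched with a toy majority-vote ensemble. The classes and threshold-based members below are hypothetical stand-ins for the actual deep learning ensemble; the point is only the retrain-then-vote pattern.

```python
# Toy retrainable ensemble: each member is a simple threshold classifier
# fit to labeled examples, and the ensemble predicts by majority vote.
# Purely illustrative; the real module retrains deep networks.

class ThresholdMember:
    def fit(self, values, labels):
        # Place the cut midway between the two class means.
        pos = [v for v, y in zip(values, labels) if y == 1]
        neg = [v for v, y in zip(values, labels) if y == 0]
        self.cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(self, v):
        return 1 if v > self.cut else 0

class Ensemble:
    def __init__(self, n=3):
        self.members = [ThresholdMember() for _ in range(n)]

    def retrain(self, values, labels):
        # Real code would train each member on a different data slice.
        for m in self.members:
            m.fit(values, labels)

    def predict(self, v):
        votes = sum(m.predict(v) for m in self.members)
        return 1 if votes > len(self.members) / 2 else 0

ens = Ensemble()
# Retrain on labeled examples of a new target signal (1) vs. background (0).
ens.retrain([0.2, 0.3, 0.8, 0.9], [0, 0, 1, 1])
print(ens.predict(0.85))
```

Swapping in a new species then only requires a new labeled training set, which is the portability Castellote describes.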

Specifics about the model are available on GitHub.

About the Author

Stephanie Kanowitz is a freelance writer based in northern Virginia.

Excerpt from:

AI streamlines acoustic ID of beluga whales - GCN.com

Can AI Ever Be as Curious as Humans? – Harvard Business Review

Executive Summary

Curiosity has been hailed as one of the most critical competencies for the modern workplace. As the workplace becomes more and more automated, it begs the question: Can artificial intelligence ever be as curious as human beings? AI's desire to learn a directed task cannot be overstated. Most AI problems comprise defining an objective or goal that becomes the computer's number one priority. At the same time, AI is also constrained in what it can learn. AI is increasingly becoming a substitute for tasks that once required a great deal of human curiosity, and when it comes to performance, AI will have an edge over humans in a growing number of tasks. But the capacity to remain capriciously curious about anything, including random things, and pursue one's interest with passion, may remain exclusively human.

Curiosity has been hailed as one of the most critical competencies for the modern workplace. It's been shown to boost people's employability. Countries with higher curiosity enjoy more economic and political freedom, as well as higher GDPs. It is therefore not surprising that, as future jobs become less predictable, a growing number of organizations will hire individuals based on what they could learn, rather than on what they already know.

Of course, people's careers are still largely dependent on their academic achievements, which are (at least partly) a result of their curiosity. Since no skill can be learned without a minimum level of interest, curiosity may be considered one of the critical foundations of talent. As Albert Einstein famously noted, "I have no special talent. I am only passionately curious."


Curiosity is only made more important for people's careers by the growing automation of jobs. At this year's World Economic Forum, ManpowerGroup predicted that learnability, the desire to adapt one's skill set to remain employable throughout one's working life, is a key antidote to automation. Those who are more willing and able to upskill and develop new expertise are less likely to see their jobs automated. In other words, the wider the range of skills and abilities you acquire, the more relevant you will remain in the workplace. Conversely, if you're focused on optimizing your performance, your job will eventually consist of repetitive and standardized actions that could be better executed by a machine.

But what if AI were capable of being curious?

As a matter of fact, AI's desire to learn a directed task cannot be overstated. Most AI problems comprise defining an objective or goal that becomes the computer's number one priority. To appreciate the force of this motivation, just imagine if your desire to learn something ranked highest among all your motivational priorities, above any social status or even your physiological needs. In that sense, AI is far more obsessed with learning than humans are.
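That single-mindedness can be illustrated with a toy training loop: the only thing the system "wants" is to shrink one loss number, and every update it makes serves that objective. The function and parameters below are illustrative, not taken from any particular AI system.

```python
# Toy illustration of an AI's one-dimensional "motivation": gradient
# descent exists only to reduce the loss (w - target)**2, so every
# update moves the parameter w toward the target that defines its goal.

def train(target, steps=100, lr=0.1):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - target)   # derivative of the loss (w - target)**2
        w -= lr * grad            # each step serves the single objective
    return w

print(round(train(5.0), 3))
```

Nothing in the loop can decide to pursue a different target; the objective is fixed from outside, which is exactly the contrast with human curiosity drawn below.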

At the same time, AI is constrained in what it can learn. Its focus and scope are very narrow compared to that of a human, and its insatiable learning appetite applies only to extrinsic directives: learn X, Y, or Z. This is in stark contrast to AI's inability to self-direct or be intrinsically curious. In that sense, artificial curiosity is the exact opposite of human curiosity; people are rarely curious about something because they are told to be. Yet this is arguably the biggest downside to human curiosity: It is free-flowing and capricious, so we cannot boost it at will, either in ourselves or in others.

To some degree, most of the complex tasks that AI has automated have exposed the limited potential of human curiosity vis-a-vis targeted learning. In fact, even if we don't like to describe AI learning in terms of curiosity, it is clear that AI is increasingly a substitute for tasks that once required a great deal of human curiosity. Consider the curiosity that went into automobile safety innovation, for example. Remember automobile crash tests? Thanks to the dramatic increase in computing power, a car crash can now be simulated by a computer. In the past, innovative ideas required curiosity, followed by design and testing in a lab. Today, computers can assist curiosity efforts by searching for design optimizations on their own. With this intelligent design process, the computer owns the entire life cycle of idea creation, testing, and validation. The final designs, if given enough flexibility, can often surpass what's humanly possible.

Similar AI design processes are becoming more common across many different industries. Google has used it to optimize cooling efficiency with its data centers. NASA engineers have used it to improve antenna quality for maximum sensitivity. With AI, the process of design-test-feedback can happen in milliseconds instead of weeks. In the future, the tunable design parameters and speed will only increase, thus broadening our possible applications for human-inspired design.

A more familiar example might be the face-to-face interview, since nearly every working adult has had to endure one. Improving the quality of hires is a constant goal for companies, but how do you do it? A human recruiter's curiosity could inspire them to vary future interviews by question or duration. In this case, the process for testing new questions and grading criteria is limited by the number of candidates and observations. In some cases, a company may lack the applicant volume to do any meaningful studies to perfect its interview process. But machine learning can be applied directly to recorded video interviews, and the learning-feedback process can be tested in seconds. Candidates can be compared based on features related to speech and social behavior. Microcompetencies that matter, such as attention, friendliness, and achievement-based language, can be tested and validated from video, audio, and language in minutes, while controlling for irrelevant variables and eliminating the effects of unconscious (and conscious) biases. In contrast, human interviewers are often not curious enough to ask candidates important questions, or they are curious about the wrong things, so they end up paying attention to irrelevant factors and making unfair decisions.

Lastly, consider a human playing a computer game. Many games start out with repeated trial and error, so humans must attempt new things and innovate to succeed in the game: If I try this, then what? What if I go here? Early versions of game robots were not very capable because they were using the full game state information; they knew where their human rivals were and what they were doing. But since 2015, something new has happened: Computers can beat us on equal grounds, without any game state information, thanks to deep learning. Both humans and computers can make real-time decisions about their next move. (As an example, see this video of a deep network learning to play the game Super Mario World.)

From the above examples, it may seem that computers have surpassed humans when it comes to specific (task-related) curiosity. It is clear that computers can constantly learn and test ideas faster than we can, so long as they have a clear set of instructions and a clearly defined goal. However, computers still lack the ability to venture into new problem domains and connect analogous problems, perhaps because of their inability to relate unrelated experiences. For instance, the hiring algorithms can't play checkers, and the car design algorithms can't play computer games. In short, when it comes to performance, AI will have an edge over humans in a growing number of tasks, but the capacity to remain capriciously curious about anything, including random things, and pursue one's interest with passion may remain exclusively human.


Patients aren’t being told about the AI systems advising their care – STAT

Since February of last year, tens of thousands of patients hospitalized at one of Minnesota's largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients have any idea about the AI involved in their care.

That's because frontline clinicians at M Health Fairview generally don't mention the AI whirring behind the scenes in their conversations with patients.

At a growing number of prominent hospitals and clinics around the country, clinicians are turning to AI-powered decision support tools, many of them unproven, to help predict whether hospitalized patients are likely to develop complications or deteriorate, whether they're at risk of readmission, and whether they're likely to die soon. But these patients and their family members are often not informed about or asked to consent to the use of these tools in their care, a STAT examination has found.


The result: Machines that are completely invisible to patients are increasingly guiding decision-making in the clinic.

"Hospitals and clinicians are operating under the assumption that you do not disclose, and that's not really something that has been defended or really thought about," Harvard Law School professor Glenn Cohen said. Cohen is the author of one of only a few articles examining the issue, which has received surprisingly scant attention in the medical literature even as research about AI and machine learning proliferates.


In some cases, there's little room for harm: Patients may not need to know about an AI system that's nudging their doctor to move up an MRI scan by a day, like the one deployed by M Health Fairview, or to be more thoughtful, such as with algorithms meant to encourage clinicians to broach end-of-life conversations. But in other cases, lack of disclosure means that patients may never know what happened if an AI model makes a faulty recommendation that is part of the reason they are denied needed care or undergo an unnecessary, costly, or even harmful intervention.

That's a real risk, because some of these AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven't yet been shown to improve patient outcomes. Some hospitals don't share data on how well the systems work, justifying the decision on the grounds that they are not conducting research. But that means that patients are not only being denied information about whether the tools are being used in their care, but also about whether the tools are actually helping them.

The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects, who see little value but plenty of downside in raising the subject.

They worry that bringing up AI will derail clinicians' conversations with patients, diverting time and attention away from actionable steps that patients can take to improve their health and quality of life. Doctors also emphasize that they, not the AI, make the decisions about care. An AI system's recommendation, after all, is just one of many factors that clinicians take into account before making a decision about a patient's care, and it would be absurd to detail every single guideline, protocol, and data source that gets considered, they say.

Internist Karyn Baum, who's leading M Health Fairview's rollout of the tool, said she doesn't bring up the AI to her patients "in the same way that I wouldn't say that the X-ray has decided that you're ready to go home." She said she would never tell a fellow clinician not to mention the model to a patient, but in practice, her colleagues generally don't bring it up either.

Four of the health system's 13 hospitals have now rolled out the hospital discharge planning tool, which was developed by the Silicon Valley AI company Qventus. The model is designed to identify hospitalized patients who are likely to be clinically ready to go home soon and flag steps that might be needed to make that happen, such as scheduling a necessary physical therapy appointment.

Clinicians consult the tool during their daily morning huddle, gathering around a computer to peer at a dashboard of hospitalized patients, estimated discharge dates, and barriers that could prevent that from occurring on schedule. A screenshot of the tool provided by Qventus lists a hypothetical 76-year-old patient, N. Griffin, who is scheduled to leave the hospital on a Tuesday, but the tool prompts clinicians to consider that he might be ready to go home Monday if he can be squeezed in for an MRI scan by Saturday.

Baum said she sees the system as "a tool to help me make a better decision just like a screening tool for sepsis, or a CT scan, or a lab value but it's not going to take the place of that decision." To her, it doesn't make sense to mention it to patients. If she did, Baum said, she could end up in a lengthy discussion with patients curious about how the algorithm was created.

That could take valuable time away from the medical and logistical specifics that Baum prefers to discuss with patients flagged by the Qventus tool. Among the questions she brings up with them: How are the patient's vital signs and lab test results looking? Does the patient have a ride home? How about a flight of stairs to climb when they get there, or a plan for getting help if they fall?

Some doctors worry that while well-intentioned, the decision to withhold mention of these AI systems could backfire.

"I think that patients will find out that we are using these approaches, in part because people are writing news stories like this one about the fact that people are using them," said Justin Sanders, a palliative care physician at Dana-Farber Cancer Institute and Brigham and Women's Hospital in Boston. "It has the potential to become an unnecessary distraction and undermine trust in what we're trying to do in ways that are probably avoidable."

Patients themselves are typically excluded from the decision-making process about disclosure. STAT asked four patients who have been hospitalized with serious medical conditions (kidney disease, metastatic cancer, and sepsis) whether they'd want to be told if an AI-powered decision support tool were used in their care. They expressed a range of views: Three said they wouldn't want to know if their doctor was being advised by such a tool. But a fourth patient spoke out forcefully in favor of disclosure.

"This issue of transparency and upfront communication must be insisted upon by patients," said Paul Conway, a 55-year-old policy professional who has been on dialysis and received a kidney transplant, both consequences of managing kidney disease since he was a teenager.

The AI-powered decision support tools being introduced in clinical care are often novel and unproven, but does their rollout constitute research?

Many hospitals believe the answer is no, and they're using that distinction as justification for the decision not to inform patients about the use of these tools in their care. As some health systems see it, these algorithms are tools being deployed as part of routine clinical care to make hospitals more efficient. In their view, patients consent to the use of the algorithms by virtue of being admitted to the hospital.

At UCLA Health, for example, clinicians use a neural network to pinpoint primary care patients at risk of being hospitalized or frequently visiting the emergency room in the next year. Patients are not made aware of the tool because it is considered a part of the health system's quality improvement efforts, according to Mohammed Mahbouba, who spoke to STAT in February when he was UCLA Health's chief data officer. (He has since left the health system.)

"This is in the context of clinical operations," Mahbouba said. "It's not a research project."

Oregon Health and Science University uses a regression-powered algorithm to monitor the majority of its adult hospital patients for signs of sepsis. The tool is not disclosed to patients because it is considered part of hospital operations.

"This is meant for operational care, it is not meant for research. So similar to how you'd have a patient aware of the fact that we're collecting their vital sign information, it's a part of clinical care. That's why it's considered appropriate," said Abhijit Pandit, OHSU's chief technology and data officer.

But there is no clear line that neatly separates medical research from hospital operations or quality control, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison. And researchers and bioethicists often disagree on what constitutes one or the other.

"This has been a huge issue: Where is that line between quality control, operational control, and research? There's no widespread agreement," Ossorio said.

To be sure, there are plenty of contexts in which hospitals deploying AI-powered decision support tools are getting patients explicit consent to use them. Some do so in the context of clinical trials, while others ask permission as part of routine clinical operations.

At Parkland Hospital in Dallas, where the orthopedics department has a tool designed to predict whether a patient will die in the next 48 hours, clinicians inform patients about the tool and ask them to sign onto its use.

"Based on the agreement we have, we have to have patient consent explaining why we're using this, how we're using it, how we'll use it to connect them to the right services, etc.," said Vikas Chowdhry, the chief analytics and information officer for a nonprofit innovation center incubated out of Parkland Health System in Dallas.

Hospitals often navigate those decisions internally, since manufacturers of AI systems sold to hospitals and clinics generally don't make recommendations to their customers about what, if anything, frontline clinicians should say to patients.

Jvion, a Georgia-based health care AI company that markets a tool that assesses readmission risk in hospitalized patients and suggests interventions to prevent another hospital stay, encourages the handful of hospitals deploying its model to exercise their own discretion about whether and how to discuss it with patients. But in practice, the AI system usually doesn't get brought up in these conversations, according to John Frownfelter, a physician who serves as Jvion's chief medical information officer.

"Since the judgment is left in the hands of the clinicians, it's almost irrelevant," Frownfelter said.

When patients are given an unproven drug, the protocol is straightforward: They must explicitly consent to enroll in a clinical study authorized by the Food and Drug Administration and monitored by an institutional review board. And a researcher must inform them about the potential risks and benefits of taking the medication.

That's not how it works with AI systems being used for decision support in the clinic. These tools aren't treatments or fully automated diagnostic tools, nor do they directly determine what kind of therapy a patient may receive. Any of those functions would make them subject to more stringent regulatory oversight.

Developers of AI-powered decision support tools generally don't seek approval from the FDA, in part because the 21st Century Cures Act, which was signed into law in 2016, was interpreted as taking most medical advisory tools out of the FDA's jurisdiction. (That could change: In guidelines released last fall, the agency said it intends to focus its oversight powers on AI decision-support products meant to guide treatment of serious or critical conditions but whose rationale cannot be independently evaluated by doctors, a definition that lines up with many of the AI models that patients aren't being informed about.)

The result, for now, is that disclosure around AI-powered decision support tools falls into a regulatory gray zone, which means the hospitals rolling them out often lack incentive to seek informed consent from patients.

"A lot of people justifiably think there are many quality-control activities that health care systems should be doing that involve gathering data," Wisconsin's Ossorio said. "And they say it would be burdensome and confusing to patients to get consent for every one of those activities that touch on their data."

In contrast to the AI-powered decision support tools, there are a few commonly used algorithms subject to the regulation laid out by the Cures Act, such as the type behind the genetic tests that clinicians use to chart a course of treatment for a cancer patient. But in those cases, the genetic test is extremely influential in determining what kind of therapy or drug a patient may receive. Conversely, there's no similarly clear link between an algorithm designed to predict whether a patient may be readmitted to the hospital and the way they'll be treated if and when that occurs.

"If it were me, I'd say just file for institutional review board approval and either get consent or justify why you could waive it."

Pilar Ossorio, professor of law and bioethics, University of Wisconsin-Madison

Still, Ossorio would support an ultra-cautious approach: "I do think people throw a lot of things into the operations bucket, and if it were me, I'd say just file for institutional review board approval and either get consent or justify why you could waive it."

Further complicating matters is the lack of publicly disclosed data showing whether and how well some of the algorithms work, as well as their overall impact on patients. The public doesn't know whether OHSU's sepsis-prediction algorithm actually predicts sepsis, nor whether UCLA's admissions tool actually predicts admissions.

Some AI-powered decision support tools are supported by early data presented at conferences and published in journals, and several developers say they're in the process of sharing results: Jvion, for example, has submitted to a journal a study that showed a 26% reduction in readmissions when its readmissions risk tool was deployed; that paper is currently in review, according to Jvion's Frownfelter.

But asked by STAT for data on their tools' impact on patient care, several hospital executives declined or said they hadn't completed their evaluations.

A spokesperson from UCLA said it had yet to complete an assessment of the performance of its admissions algorithm.

Before you use a tool to do medical decision-making, you should do the research.

Pilar Ossorio, professor of law and bioethics, University of Wisconsin-Madison

A spokesperson from OHSU said that according to its latest report, run before the Covid-19 pandemic began in March, its sepsis algorithm had been used on 18,000 patients, of whom it had flagged 1,659 as at-risk, with nurses indicating concern for 210 of them. He added that the tool's impact on patients, as measured by hospital death rates and length of time spent in the facility, was inconclusive.

"It's disturbing that they're deploying these tools without having the kind of information that they should have," said Wisconsin's Ossorio. "Before you use a tool to do medical decision-making, you should do the research."

Ossorio said it may be the case that these tools are merely being used as an additional data point and not to make decisions. But if health systems don't disclose data showing how the tools are being used, there's no way to know how heavily clinicians may be leaning on them.

"They always say these tools are meant to be used in combination with clinical data and it's up to the clinician to make the final decision. But what happens if we learn the algorithm is relied upon over and above all other kinds of information?" she said.

There are countless advocacy groups representing a wide range of patients, but no organization exists to speak for those who've unknowingly had AI systems involved in their care. They have no way, after all, of even identifying themselves as part of a common community.

STAT was unable to identify any patients who learned after the fact that their care had been guided by an undisclosed AI model, but asked several patients how they'd feel, hypothetically, about an AI system being used in their care without their knowledge.

Conway, the patient with kidney disease, maintained that he would want to know. He also dismissed the concern raised by some physicians that mentioning AI would derail a conversation. "Woe to the professional that as you introduce a topic, a patient might actually ask questions and you have to answer them," he said.

Other patients, however, said that while they welcomed the use of AI and other innovations in their care, they wouldn't expect or even want their doctor to mention it. They likened it to not wanting to be privy to numbers around their prognosis, such as how much time they might expect to have left, or how many patients with their disease are still alive after five years.

"Any of those statistics or algorithms are not going to change how you confront your disease, so why burden yourself with them, is my philosophy," said Stacy Hurt, a patient advocate from Pittsburgh who received a diagnosis of metastatic colorectal cancer in 2014, on her 44th birthday, when she was working as an executive at a pharmaceutical company. (She is now doing well and is approaching five years with no evidence of disease.)

Katy Grainger, who lost the lower half of both legs and seven fingertips to sepsis, said she would have supported her care team using an algorithm like OHSU's sepsis model, so long as her clinicians didn't rely on it too heavily. She said she also would not have wanted to be informed that the tool was being used.

I don't monitor how doctors do their jobs. I just trust that they're doing it well.

Katy Grainger, patient who developed sepsis

"I don't monitor how doctors do their jobs. I just trust that they're doing it well," she said. "I have to believe that I'm not a doctor and I can't control what they do."

Still, Grainger expressed some reservations about the tool, including the idea that it may have failed to identify her. At 52, Grainger was healthy and fairly young when she developed sepsis. She had been sick for days and visited an urgent care clinic, which gave her antibiotics for what they thought was a basic bacterial infection, but which quickly progressed to a serious case of sepsis.

"I would be worried that [the algorithm] could have missed me. I was young (well, 52), healthy, in some of the best shape of my life, eating really well, and then boom," Grainger said.

Dana Deighton, a marketing professional from Virginia, suspects that if an algorithm had scanned her data back in 2013, it would have made a dire prediction about her life expectancy: She had just been diagnosed with metastatic esophageal cancer at age 43, after all. But she probably wouldn't have wanted to hear about an AI's forecast at such a tender and sensitive time.

"If a physician brought up AI when you are looking for a warmer, more personal touch, it might actually have the opposite and worse effect," Deighton said. (She's doing well now; her scans have turned up no evidence of disease since 2015.)

Harvard's Cohen said he wants to see hospital systems, clinicians, and AI manufacturers come together for a thoughtful discussion around whether they should be disclosing the use of these tools to patients. "And if we're not doing that, then the question is why aren't we telling them about this when we tell them about a lot of other things," he said.

Cohen said he worries that uptake of and trust in AI and machine learning could plummet if patients were to find out, after the fact, that there's "a rash of this being used without anyone ever telling them."

"That's a scary thing," he said, "if you think this is the way the future is going to go."

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.


Juniper’s AI-Powered SD-WAN And How It’s ‘Putting Its Money Where Its Mouth Is’ With Mist – CRN: Technology news for channel partners and solution…

Juniper Networks, a networking giant that has historically counted service providers as its biggest customer segment, has been making an enterprise push over the course of the last year. But then the COVID-19 pandemic hit, which affected IT spending and changed networking needs dramatically and for the long term.

Juniper on Thursday released its Q2 2020 financials, and while revenue declined 1 percent year over year, which the vendor largely attributed to supply constraints related to the COVID-19 pandemic, Juniper's software orders grew 7 percent compared to the same quarter last year. The Sunnyvale, Calif.-based firm attributed that growth to a combination of strong security sales and subscriptions to Mist Systems technology. Juniper acquired Mist Systems in August 2019, and the addition of the startup's AI technology couldn't have come at a better time as enterprises grapple with a barrage of home workers and changing campus networking needs, Jeff Aaron, vice president of enterprise marketing for Juniper Networks, told CRN.

Juniper this week also introduced its new WAN Assurance Service and a virtual network assistant (VNA) conversational interface powered by Marvis, Mist's VNA technology. The enhancements make up the fourth generation of Juniper's AI-Driven Enterprise strategy, which revolves around automating and optimizing IT management across wired, wireless, and the WAN, and enhancing end user satisfaction. Aaron talked about Juniper's enterprise push and its latest offerings, Juniper's approach to SD-WAN compared to the competition, and how the networking giant's AI strategy will be key in helping enterprises address post-COVID-19 networking challenges.

Here are excerpts from the conversation.

How critical has the Mist acquisition been in helping enterprises adjust to new networking and IT requirements that came about due to COVID-19?

Last quarter, there was a reorganization at Juniper where we created what we called an AI-Driven Enterprise business unit with marketing, sales, products and engineering, led by Sujai Hajela, who was the CEO of Mist Systems. Under that business unit is now wired and wireless access, and WAN. You can see that the Mist part of the equation has become very core to Juniper's enterprise strategy. Campus, branch, and edge has now moved under the Mist umbrella. Juniper is basically putting its money where its mouth is and aligning all the product portfolios under that team to take that vision forward.

Nothing changes with the partner program. We've been tightly aligning Mist partners with the Juniper partners and making it very easy for both to become partners of each vendor -- it's basically a click that says: would you like to add Mist to your portfolio? The channel program for Mist has grown by a triple-digit percentage since combining the partner programs, and we've been really excited about that.

Has the COVID-19 pandemic changed Juniper's networking strategy?

We've been [talking about] our vision since the Mist acquisition last year. We've always planned to go from wireless, to wired, to security, WAN, to data center -- that's not new. But what is new is that we are seeing new COVID-19-led use cases. One is what we are calling the AI-Driven Enterprise at Home. The ability to say that we have low-cost [access points], for example, that are being announced soon, that can be shipped to your home. We can combine that with a firewall that we've offered for a while, and then you'll get all this management so it's easier to manage your employees at home.

The other area we're seeing a strong uptick in is when people do go back to work, we have specific contact tracing [technology] we are selling and it's doing really well. We just won a major university on the East Coast that spent over $5 million to rip out their existing wireless to get the contact tracing. When people come back on campus, we have proximity tracing. If someone identifies themselves as someone who is COVID-positive or symptomatic, we can use our BLE technology to show everyone they came in contact with without having to use an app -- just based on BLE signals or a BLE badge. We can look on the floorplan to see where they spent their time to see where they should send a cleaning crew -- that's called journey-mapping. We can also do real-time congestion alerting. If there are too many people in a conference room, we can alert IT and facilities to send someone to do social distancing so you don't have to hire a boatload of security guards. We're getting really strong traction because our BLE solution is the only one in the industry that is integrated with our Wi-Fi that doesn't require battery-powered beacons.
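Juniper has not published the internals of its BLE proximity tracing, but the idea described above can be sketched in a few lines: estimate distance from received signal strength (RSSI) and flag badges that were repeatedly within a contact radius. Everything in this sketch -- the log-distance model parameters, the thresholds, and the function names -- is an illustrative assumption, not Mist's actual implementation.

```python
from collections import defaultdict

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance (meters) from RSSI using a log-distance model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def close_contacts(readings, max_meters=2.0, min_samples=3):
    """readings: list of (badge_id, rssi_dbm) observed near an infected badge.
    A badge counts as a contact if enough samples put it within max_meters."""
    hits = defaultdict(int)
    for badge, rssi in readings:
        if rssi_to_distance(rssi) <= max_meters:
            hits[badge] += 1
    return sorted(b for b, n in hits.items() if n >= min_samples)

readings = [("b1", -55), ("b1", -50), ("b1", -58),
            ("b2", -80), ("b2", -82), ("b1", -90)]
print(close_contacts(readings))  # ['b1'] -- only b1 has 3 close samples
```

Requiring several close samples, rather than a single strong reading, is one simple way to damp the noise inherent in RSSI-based distance estimates.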

How important is the addition of location services and the contact tracing feature to Juniper's enterprise portfolio?

Last week, we announced a partnership with ServiceNow. We can detect all the things I mentioned [with contact tracing], and we have an API with ServiceNow where they can flow that information into their system for case management. Once a case is entered, they can pull in all our user data for managers, HR, or facilities staff -- even the employees themselves. That's two industry leaders coming together with two different contact tracing solutions and, based on what customers are asking for, we've automated the integration.

As much as I hate to say it, despite campus spending being down and showing negative growth, in general, we are seeing strong demand for what we are doing. Mist had double-digit growth last quarter and part of that was due to contact tracing. The whole beauty of location services -- and contact tracing is one of them -- is that it takes an ecosystem. You need mapping software, apps, and a whole bunch of different things. It's up to our partners to put those together. From Day One, location services have always been very interesting to partners.

How does Juniper's SD-WAN approach differentiate in the crowded market that is seeing a lot of consolidation?

SD-WAN is still a very static and reactive model. You have to create a static policy on what you want to do and how applications should react, and to be honest, SD-WAN is not new, but the market is still only one-third penetrated. The reason is that it's very complicated to deploy and it's still very much focused on network and application policies. What we want to deliver is the AI-driven WAN that is truly self-driving and adapts in real time. It's focused on user experiences. Just because the link is up and passing traffic doesn't mean you're having a good experience. That was always our philosophy in wireless -- up is not the same as good. We've now taken that same philosophy from wireless and applied it to the WAN.
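The "up is not the same as good" philosophy can be made concrete with a toy contrast between a link-status check and an experience-based check. The metric names and thresholds below are illustrative assumptions, not Juniper's actual telemetry.

```python
def link_is_up(link):
    """The traditional check: is the link passing traffic at all?"""
    return link["status"] == "up"

def experience_is_good(link, max_latency_ms=150, max_loss_pct=2.0):
    """The experience-based check: is the link actually usable?"""
    return (link_is_up(link)
            and link["latency_ms"] <= max_latency_ms
            and link["loss_pct"] <= max_loss_pct)

# A link that is "up" but delivering a poor user experience:
link = {"status": "up", "latency_ms": 480, "loss_pct": 6.5}
print(link_is_up(link), experience_is_good(link))  # True False
```

A static SD-WAN policy keyed only on `link_is_up` would never fail over here; an experience-driven policy would.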

We are focused on the user experience, combining WAN automation with wired, wireless, and security, and it's all built on a modern cloud -- most SD-WAN solutions are 10-15 years old and are not -- and ours has true AI support with [Mist's virtual network assistant technology] Marvis. Every trouble ticket is passed through our AI engine, so it learns and keeps getting better. No one in the industry is doing that for WAN. Aruba is cutting and pasting these things on their slides now that they bought Silver Peak, but it's going to take years to come close to this, and their AI doesn't have anywhere near the capabilities that ours has to do this.

How is Juniper's enterprise push progressing during the recent economic downturn that has impacted IT spending?

The key takeaway is the market itself for enterprise, campus, and branch has declined -- it has seen negative growth -- whereas our enterprise revenue increased along with campus and branch [revenue] last quarter. We also had a record quarter for Mist in terms of logos. In the midst of a pandemic, we are seeing record numbers, which benefits us as well as our partners. It's really due to the fact that now more than ever, people need cloud and AI because they can't get on-site. Now, there are more branch office sites and it's tougher to troubleshoot when people are at home. It's all really driving more and more towards the Mist model, and for that we've been extremely fortunate.

Continue reading here:

Juniper's AI-Powered SD-WAN And How It's 'Putting Its Money Where Its Mouth Is' With Mist - CRN: Technology news for channel partners and solution...

Will AI-as-a-Service Be the Next Evolution of AI? – Madison.com

With all the excitement surrounding the advent of artificial intelligence (AI), there are still a great many things we don't know. Could it lead to the frightening futures depicted in films like Ex Machina, Terminator, and 2001: A Space Odyssey, or might we see less threatening iterations like Data on Star Trek: The Next Generation, Samantha in Her, or TARS from Interstellar?

The current reality of AI is much less cinematic -- it possesses the learned ability to sift through reams of data in short order and recognize patterns. This has led to breakthroughs in the areas of image recognition, language translation, and beating humans at the age-old game of Go. Some of the biggest advances are ongoing in the areas of medical imaging, cancer research, and self-driving cars.

Still, with plenty of developments thus far, it's hard to know what will be the next groundbreaking application of the technology.

Will AI-as-a-Service be the next killer app? Image source: Getty Images.

Small Canadian start-up Element AI believes it has the answer: It wants to democratize AI by offering "AI-as-a-Service" to businesses that can't afford to develop the systems themselves. Tech giants Microsoft Corp. (NASDAQ: MSFT), Intel Corp. (NASDAQ: INTC), and NVIDIA Corp. (NASDAQ: NVDA) believe that Element is on the right track and have invested millions to back up that belief.

Currently, AI requires massive quantities of data in order to train the system. Element AI wants to improve on this by reducing the size of the data sets required, which would make the technology accessible to a wider range of businesses, not just those with massive budgets. Element is building on the AI technique of transfer learning: by starting from a previously trained system and then introducing smaller data sets, the system applies what it learned previously to the new data.
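The transfer-learning idea described above can be illustrated with a minimal sketch: reuse a representation learned elsewhere, then fit only a tiny "head" on a handful of new labeled examples. The feature extractor, data, and class names here are all invented stand-ins, not Element AI's system.

```python
def pretrained_features(x):
    """Stand-in for a feature extractor trained on a large data set.
    In practice this would be the frozen layers of a big model."""
    return [x, x * x]

def fit_head(samples):
    """Fit per-class mean feature vectors from a tiny labeled set."""
    sums, counts = {}, {}
    for x, label in samples:
        f = pretrained_features(x)
        acc = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(head, x):
    """Assign x to the class whose mean features are nearest."""
    f = pretrained_features(x)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, head[c]))
    return min(head, key=dist)

# Only four new labeled examples are needed on top of the reused features.
head = fit_head([(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")])
print(predict(head, 1.5), predict(head, 8.5))  # low high
```

The point of the sketch is the division of labor: the expensive learning happened once, upstream, and the new client supplies only a small data set for the head.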

Element is currently working on a consulting basis with a very small group of large companies that want to leverage AI without developing the systems in-house. In this way, the company can strategically choose its initial customers and train its systems on the larger data sets, which it will later leverage for smaller clients.

The major players investing in AI have primarily been applying the tech to augment their principal businesses. Microsoft has used the technology to improve its Bing search and to power its Cortana virtual assistant, and has built AI into its Azure cloud computing services. Intel has been working to develop an AI-based CPU and has made numerous acquisitions in the field, hoping to get a leg up.

NVIDIA is the only one to date that has been able to quantify the value of AI to its business, as its GPUs have been used to accelerate the training of AI systems. In its most recent quarter, NVIDIA saw revenue of $1.9 billion, which grew 48% year over year, on the back of a 186% increase in its AI-centric data-center revenue.

Element is providing a novel approach to the AI trend. Image source: Pixabay.

Still, none has emerged as a pure play, selling AI-as-a-Service. Element hopes to change that by being the first company of its kind to provide predictive modeling, recommendation systems, and consumer engagement optimization, available to any business without having to start its AI efforts from scratch. Providing access to experts in the field who can analyze a business and determine how best to apply AI to solve specific problems will prove beneficial to a wide range of companies without their own AI resources. By filling that void, Element AI hopes to make its mark.

International Business Machines Corp. (NYSE: IBM) provides the closest example, pivoting from its legacy hardware and consulting businesses to selling cloud and cognitive computing solutions via its AI-based Watson supercomputer. Thus far, these newer growth technologies haven't been able to compensate for the shortfall in its legacy business, though the company is applying AI to a wide variety of business processes and has assembled an impressive array of big-name partners. By casting its net into cybersecurity, tax preparation, and a variety of healthcare-related applications, IBM hopes to capitalize on this emerging trend.

It is still early days in AI research and technology, and how the future plays out is yet to be determined. Element AI is taking a unique approach -- and the backing of these three godfathers of tech shows that it might be on the right track.


Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors; LinkedIn is owned by Microsoft. Danny Vena has the following options: long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

Read more from the original source:

Will AI-as-a-Service Be the Next Evolution of AI? - Madison.com

How AI is Changing the Way We Invest – Techzone360

Artificial intelligence is rapidly evolving. Unprecedented advances in machine and deep learning have even caused some concern. Elon Musk, futurist billionaire and CEO of SpaceX and Tesla Motors, has dubbed it mankind's greatest existential threat. Indeed, driverless cars, a technology Musk himself is developing, would displace up to 15 percent of the world's workers, a figure the Tesla CEO provided himself. The world of finance is by no means immune to the disruption AI will cause. In fact, artificial intelligence is already changing the way we invest.

According to Investopedia, algorithmic trading already comprises 70 percent of daily trading. As trading becomes more automated, the need for human analysts has sharply decreased. Traders are already being replaced by AI and, as each day goes by, the technology only grows more sophisticated. Quantitative analysis, a strategy that involves crunching numbers and analyzing data, is a task much better suited for advanced software systems as they are much less prone to error and have the ability to absorb a greater amount of data at a much faster speed.

When AI traders do make mistakes, they are able to learn from them at an exceptionally fast rate. What takes traders months to learn, an artificial intelligence program can learn in mere moments. Additionally, quantitative analysis, when performed by machines, is not marred by emotional or wishful thinking. It relies purely on data.

Some hedge funds are taking the technology a step further by allowing intelligent machines to make their managerial decisions. Almost alarmingly, AI-led hedge funds have been consistently outperforming firms led by humans. "Humans have bias and sensitivities, conscious and unconscious," Babak Hodjat, co-founder of Sentient Technologies, an AI company aimed at improving various sectors with smart software, told Bloomberg. "It's well documented we humans make mistakes. For me, it's scarier to be relying on those human-based intuitions and justifications than relying on purely what the data and statistics are telling you," he continued.

Hodjat's claim that the cold logic of computers will supersede the hapless decision-making processes of humans is not unfounded. On average, AI funds have experienced annual returns of approximately 8.44 percent, which is significantly higher than many other indices. To put this number in context, the Eurekahedge hedge fund index indicates an annual return of 2.29 percent. There are many factors that shape this figure, however, which may be unrelated to AI's superior reasoning capabilities. It is possible, for example, that we are inundated with quantitative analysts (as there has been an influx of funds being poured into quantitative investing strategies as of late), and this surge has caused a marked dip in quality. Still, AI's outperformance of traditional quantitative firms cannot be ignored.

AI could affect more than the firms themselves. The proliferation of robo-advisors has the potential to vastly reduce the fees associated with consulting an advisor. Charles Schwab recently launched Schwab Intelligent Portfolios, which provides investors with the ability to get portfolio recommendations from a few hundred lines of code. Instead of consulting a professional, customers rely on an algorithm to create a portfolio tailored to their level of aversion to risk and their long-term investment goals. Instead of employing a stock broker to carefully curate a portfolio, customers can utilize intelligent systems and software to accomplish their goals. Still, many are leery of entrusting such an important service to a program. Even more remain concerned about robo-advisors eliminating jobs, which would result in a further displacement of financial professionals.
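To make the "few hundred lines of code" claim concrete, here is a deliberately tiny sketch of the core of a robo-advisor: mapping a client's risk tolerance and time horizon to an asset allocation. The thresholds, asset classes, and formula are invented for illustration and bear no relation to Schwab's actual model.

```python
def recommend_portfolio(risk_tolerance, years_to_goal):
    """risk_tolerance: 0 (very averse) .. 10 (aggressive).
    Returns a percentage allocation that always sums to 100."""
    # More risk appetite and a longer horizon push the equity share up.
    stocks = min(90, 20 + 6 * risk_tolerance + min(years_to_goal, 10))
    bonds = max(5, 90 - stocks)
    cash = 100 - stocks - bonds
    return {"stocks": stocks, "bonds": bonds, "cash": cash}

print(recommend_portfolio(risk_tolerance=8, years_to_goal=20))
# {'stocks': 78, 'bonds': 12, 'cash': 10}
print(recommend_portfolio(risk_tolerance=1, years_to_goal=3))
# {'stocks': 29, 'bonds': 61, 'cash': 10}
```

Real robo-advisors add tax-loss harvesting, rebalancing, and fund selection on top, but the questionnaire-to-allocation mapping is the part that replaces the initial advisor conversation.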

Furthermore, although data points to artificial intelligence being far more efficient and effective as advisors, traders and financial decision-makers, investors are still hesitant to leave important decisions entirely up to the discretion of an AI system. Even the most technical skills, such as financial modeling, demand a great deal of human intuition to be done expertly.

Fundamental investors and followers of Warren Buffett's investing philosophy may still believe they have the upper hand. "Much of the information and data that humans try to process when thinking about markets is largely meaningless when applied to the fortunes of individual companies," Miles Johnson writes in his piece about AI and finance for the Financial Times. Computers will have an edge in processing large amounts of economic data, but may struggle with the more qualitative judgments Mr Buffett has excelled in, such as judging the character of a chief executive or the durability of a brand.

Despite our manifold fears and reservations, artificial intelligence is already reshaping finance. Trading is largely automated. Portfolios can now be generated by programs. Computers have the ability to supplant and surpass hedge fund managers at their own game. It is no longer a question of whether or not AI will change investing. It seems fairly obvious to even the most casual of observers that AI will dominate financial markets if the current trend of rapid advancement continues. Rather, we are now faced with the question of how we plan to integrate humans in the process. AI certainly has the capability of phasing out stock brokers and financial analysts, but it also has the ability to bolster the existing skills of humans, if we are willing to learn how to interact with the powerful technology.

About the Author

Paul Sciglar is a journalist interested in international policies and economic affairs. He is also a certified accountant with broad experience in strategic analysis, FP&A, investment banking, and investment management. You may connect with him on Twitter.

Edited by Alicia Young

Continued here:

How AI is Changing the Way We Invest - Techzone360

AIOps uses AI, automation to boost security – MIT Technology Review

Siemens USA, a manufacturer of industrial and health-care equipment, uses AIOps through its endpoint detection and response system that incorporates machine learning, the subset of AI that enables systems to learn and improve. The system gathers data from endpoints -- hardware devices such as laptops and PCs -- and then analyzes the data to reveal potential threats. The organization's overall cybersecurity approach also uses data analytics, which allows it to quickly and efficiently parse through numerous log sources. "The technology provides our security analysts with actionable outputs and enables us to remain current with threats and indicators of compromise," Mahmood says.

AIOps is a broad category of tools and components that uses AI and analytics to automate common IT operational processes, detect and resolve problems, and prevent costly outages. Machine-learning algorithms monitor across systems, learning as they go how systems perform, and detect problems and anomalies. Now, as adoption of AIOps platforms gains momentum, industry observers say IT decision-makers will increasingly use the technology to bolster cybersecurity -- like Siemens, in integration with other security tools -- and guard against a multitude of threats. This is happening against a backdrop of mounting complexity in organizations' application environments, spanning public and private cloud deployments, and their perennial need to scale up or down in response to business demand. Further, the massive migration of employees to their home offices in an effort to curb the deadly pandemic amounts to an exponential increase in the number of edge-computing devices, all of which require protection.

A May report from Global Industry Analysts predicts the AIOps platform market worldwide will grow by an estimated $18 billion this year, driven by a compounded growth rate of 37%.[1] It also projects that AIOps initiatives -- particularly among big corporations -- will span the entire corporate ecosystem, from on-premises to public, private, and hybrid clouds to the network edge, where resources and IT staff are scarce. Most recently, a well-documented rise in data breaches, particularly during the pandemic, has underscored the need to deliver strong, embedded security with AIOps platforms.

Cybersecurity affects every aspect of business and IT operations. The sheer number of near-daily breaches makes it difficult, if not impossible, for organizations, IT departments, and security professionals to cope. In the last year, 43% of companies worldwide reported multiple successful or attempted data breaches, according to an October 2019 survey conducted by KnowBe4, a security awareness training company.[2] Nearly two-thirds of respondents worry their organizations may fall victim to a targeted attack in the next 12 months, and today concern is further fueled by the growing number of cybercrimes amid disarray caused by the pandemic. Organizations need to use every technological means at their disposal to thwart hackers.

The strongest AIOps platforms can help organizations proactively identify, isolate, and respond to security issues, helping teams assess the relative impact on the business. They can determine, for example, whether a potential problem is ransomware, which infiltrates computer systems and shuts down access to critical data. Or they can ferret out threats with longer-term effects, such as leaking customer data and in turn causing massive reputational damage. That's because AIOps platforms have full visibility into an organization's data, spanning traditional departmental silos. They apply analytics and AI to the data to determine the typical behavior of an organization's systems. Once they have that baseline state, the platforms do continual reassessments of the network -- and all wired and wireless devices communicating on it -- and zero in on outlier signals. If they're suspicious -- exceeding a threshold defined by AI -- an alert is sent to IT security staffers detailing the threat, the degree to which it could disrupt the business, and the steps they need to take to eliminate it.
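The baseline-then-threshold pattern described above can be sketched in a few lines. Real AIOps platforms learn far richer models, but the skeleton is the same: learn "typical" behavior from history, then flag outliers. The z-score cutoff here stands in for the AI-defined threshold mentioned in the text; the data and names are invented.

```python
import statistics

def build_baseline(history):
    """Learn the typical behavior of a metric from historical samples."""
    return statistics.mean(history), statistics.pstdev(history)

def alerts(baseline, window, z_threshold=3.0):
    """Return (timestamp, value, z-score) for samples far from baseline."""
    mean, stdev = baseline
    out = []
    for t, value in window:
        z = (value - mean) / stdev if stdev else 0.0
        if abs(z) > z_threshold:
            out.append((t, value, round(z, 1)))
    return out

# e.g. requests per minute from one device; the burst stands out
history = [100, 102, 98, 101, 99, 100, 103, 97]
live = [("09:00", 101), ("09:01", 99), ("09:02", 160)]
print(alerts(build_baseline(history), live))
# [('09:02', 160, 32.1)]
```

A production system would also attach the impact assessment and remediation steps the article describes; this sketch covers only the detection step.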

Download the full report.

Read more from the original source:

AIOps uses AI, automation to boost security - MIT Technology Review

Blueshift’s AI helps platform focus on individuals and continuous journeys – MarTech Today

Personalization platform Blueshift is today launching AI-powered customer journeys that move its targeting from user segments to individuals, and its focus from single campaign responses to continuous customer journeys.

Blueshift provides personalized marketing through content recommendations, email marketing, and, for mobile devices, push notifications and SMS.

The company's AI has previously been employed to provide capabilities like Predictive Scores for evaluating such things as which customers are likely to bolt, or to make the most appropriate product or content recommendations to site visitors. The Score might look at data showing, for instance, that certain telco customers are rarely using their data services.

Now, the AI is being used to continually optimize customer journeys. While the Predictive Scores were previously point-in-time snapshots, resulting in a specific campaign effort to a group of users, like sending a discount offer via email, now the scores are continually read so that users can be placed into a customer journey as soon as the individual Score exceeds a threshold.

The AI determines at what point in the customer journey -- a continuous series of marketing responses -- to place the particular individual. A journey can also be triggered by a specific event or user behavior.

Co-founder and CEO Vijay Chittoor told me the big takeaway is that marketers plan customer journeys, but the solutions have [largely] been manual, such as when to start customers on a specific journey. Now, he says, AI is helping Blueshift automatically place a customer on the journey as soon as predictive scoring shows a flag.
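Blueshift's internals are not public, but the threshold-triggered placement Chittoor describes reduces to a simple pattern, sketched below with invented names, scores, and a hypothetical "win-back" journey.

```python
def place_on_journeys(customers, churn_threshold=0.7):
    """Continually-read predictive scores: any customer whose churn
    score crosses the threshold enters the win-back journey at once,
    rather than waiting for a scheduled, segment-wide campaign."""
    placements = []
    for customer_id, churn_score in customers:
        if churn_score >= churn_threshold:
            placements.append((customer_id, "win_back_journey"))
    return placements

scores = [("u1", 0.91), ("u2", 0.30), ("u3", 0.72)]
print(place_on_journeys(scores))
# [('u1', 'win_back_journey'), ('u3', 'win_back_journey')]
```

Run on every score refresh, this replaces the manual decision of when to start a customer on a journey with an automatic, per-individual trigger.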

The platform's AI is also being summoned so that A/B testing of content recommendations can look at recommendation logic. While there was A/B testing of content recommendations before, Chittoor said, it wasn't tuned to determine if, say, recommendation logic based on previous content you chose was better than logic based on recommending content because of what others like you liked.
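Testing recommendation logic, rather than individual pieces of content, means randomizing users between whole strategies. The sketch below contrasts a "more of what you chose" arm with a "what others like you liked" arm; the hashing split and both strategies are simplified stand-ins, not Blueshift's implementation.

```python
import hashlib

def assign_arm(user_id):
    """Deterministic 50/50 split by hashing the user id."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "history_based" if int(digest, 16) % 2 == 0 else "collaborative"

def recommend(user_id, own_history, peer_favorites):
    if assign_arm(user_id) == "history_based":
        return own_history[-3:]   # more of what this user already chose
    return peer_favorites[:3]     # what similar users liked

# Sanity check: the split over many users should be roughly 50/50.
arm_counts = {"history_based": 0, "collaborative": 0}
for i in range(1000):
    arm_counts[assign_arm("user-%d" % i)] += 1
print(arm_counts)
```

Hashing the user id (instead of random assignment per request) keeps each user in one arm for the life of the test, so engagement differences can be attributed to the logic rather than to users bouncing between strategies.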

Blueshift is also adding an ability to determine which step in a journey had the biggest impact, compared to a prior ability to only evaluate an entire journey. Chittoor said that, although AI is not powering this enhancement, AI can be used to optimize the journey once this step-by-step attribution is completed.

Here's Blueshift's visualization of these enhancements:

Read the original here:

Blueshift's AI helps platform focus on individuals and continuous journeys - MarTech Today