3 common jobs AI will augment or displace – VentureBeat

It's clear artificial intelligence (AI) and automation will dramatically affect the job market, but there are conflicting ideas on just how soon this will happen. Some believe it's imminent, possibly fueled by developments like the Japanese insurance company replacing over 30 employees with robots, but it's not that cut and dried. Many of the jobs that will be automated are the same jobs companies have been outsourcing for years: customer support, data entry, accounting, etc. Others are jobs they simply cannot fill due to decreases in headcount.

Either way, as transactions and expectations for real-time output increase, businesses are struggling to meet this demand and must digitize their operations to remain competitive. It's the future of human labor. It's not black and white, or good and evil; it's simply the natural cycle of automation, just as we saw in the industrial revolution and will see again after AI becomes commonplace.

Adoption of AI and automation will be highest in regulated industries and those that must process thousands of transactions and customer requests daily. These are industries like banking, financial services, insurance, and health care: those with repetitive processes, like copying and pasting, that do not really require human intelligence. It's the types of jobs and tasks within an organization that are repeatable and admin-heavy that will be automated first. In fact, in three examples in particular, we're already seeing automation play a big role.

The cost of fraudulent claims across all lines of insurance amounts to $80 billion a year, and well over half of insurers predict an increase in such fraud. Yet, despite the well-known pressures insurers face to correctly verify claims, they also get a bad rap for not doing so fast enough. That's why the insurance industry is looking to advances in AI to both reduce fraudulent claims and improve customer service by speeding up the process.

Using machine learning, a subfield of AI, insurers can auto-validate policies, matching key facts from the claim to the policy and using cognitive analysis to determine whether the claim should be paid. These technologies can even transmit data into the system for downstream payment automatically and in a fraction of the time it would take a human to complete the same task. Humans are then elevated to tasks that really require their human intelligence and their customer service expertise.
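To make the pattern concrete, here is a minimal sketch of what auto-validation might look like: hard policy checks first, then a model-produced fraud score deciding between straight-through payment and human review. The field names, score, and threshold are illustrative assumptions, not any insurer's actual system.

```python
# Minimal sketch of claim auto-validation: rule checks against the policy,
# then a trained model's fraud score routes the claim. All field names and
# the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    policy_id: str
    claim_type: str
    amount: float
    fraud_score: float  # produced upstream by a trained ML model

def auto_validate(claim: Claim, policy: dict, pay_threshold: float = 0.2) -> str:
    # Match key facts from the claim to the policy.
    if claim.policy_id != policy["policy_id"]:
        return "reject: policy mismatch"
    if claim.claim_type not in policy["covered_types"]:
        return "reject: peril not covered"
    if claim.amount > policy["coverage_limit"]:
        return "refer: exceeds coverage limit"
    # "Cognitive" step: low estimated fraud risk goes straight to payment.
    if claim.fraud_score < pay_threshold:
        return "pay: transmitted for downstream payment"
    return "refer: human review"

print(auto_validate(
    Claim("P-100", "water damage", 1200.0, 0.05),
    {"policy_id": "P-100", "covered_types": ["water damage"], "coverage_limit": 5000},
))  # -> pay: transmitted for downstream payment
```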

Consumers have become accustomed to talking to bots, whether it's asking Siri to find the closest dry cleaner or asking Amazon's Alexa to add bananas to the grocery list. And the financial services and banking industries are no exception. More banks are reducing manual service efforts by offloading repetitive inquiries to AI-powered chatbots.

They're training these bots on historical conversations so they can perform the same tasks as a human agent, conversing with customers to determine their needs and then, in the best scenarios, actually executing a business process to deliver against their intent. More complex conversations are escalated to a human agent, who now has the time to handle them with care; meanwhile, the chatbots work in the background to learn from the outcome. Customers are happy because their needs are taken care of seamlessly and quickly, and banks are able to reduce the backlog of customer service requests.
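As a toy sketch of that triage pattern, the snippet below classifies a message's intent and escalates when confidence is low. The intents, example phrases, and cutoff are invented for illustration; production systems would use models trained on historical conversations rather than simple word overlap.

```python
# Toy chatbot triage: guess the customer's intent, execute a business
# process when confident, escalate to a human otherwise. Intents, phrases,
# and the confidence cutoff are invented for illustration.
INTENT_EXAMPLES = {
    "check_balance": ["what is my balance", "how much money do i have"],
    "block_card": ["my card was stolen", "block my card"],
}

def classify(message: str) -> tuple[str, float]:
    # Crude word-overlap scoring stands in for a trained intent model.
    words = set(message.lower().split())
    best, score = "unknown", 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            example_words = set(example.split())
            overlap = len(words & example_words) / len(example_words)
            if overlap > score:
                best, score = intent, overlap
    return best, score

def handle(message: str, cutoff: float = 0.6) -> str:
    intent, confidence = classify(message)
    if confidence < cutoff:
        return "escalate: routed to a human agent"
    return f"execute business process: {intent}"

print(handle("please block my card"))              # confident -> bot acts
print(handle("i want to dispute a weird charge"))  # unsure -> human agent
```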

The ultimate goal of every health care plan administrator is to ensure claims are received and processed accurately and on time. The sheer volume of claims makes this a difficult task. Making it harder, claims are submitted in various formats (fax, email, handwritten, etc.) and must be put in a standardized format before they're processed. In fact, billing and insurance-related paperwork costs an estimated $375 billion annually.

Using advanced AI/machine learning technologies, health care providers can reduce the amount of time it takes to process a claim and respond to patients and providers. Not only does this improve patient satisfaction, it also lessens errors that can result in hefty financial losses and regulatory fines. While it won't replace all health care administrators, it will help them redirect their resources toward critical, customer-facing activities.

According to a recent report by the McKinsey Global Institute, almost every job has the potential to be automated, but more often than not, these jobs will require a combination of automation and human intelligence. There will be a tsunami of job loss relating to certain tasks, but this will push people into higher-value work. Data entry may be automated, but creative thinking won't be replaced by bots. AI is creating new efficiencies that will ultimately change the types of jobs that are in demand. This reality is happening more quickly in some industries than others, but it is unequivocally transforming the way work gets done.

Read more from the original source:

3 common jobs AI will augment or displace - VentureBeat

China Wants to Lead the World on AI. What Does That Mean for America? – The National Interest

Years ago, the thought of using software to fight a deadly pathogen might have seemed far-fetched. Today, it's a reality. The coronavirus pandemic has caused monumental shifts in the use and deployment of artificial intelligence (AI) around the world.

Of those now using AI to fight the coronavirus, none are more prominent than China. From software that diagnoses the symptoms of the coronavirus to algorithms that identify and compile data on individuals with high temperatures vis-à-vis infrared cameras, China is showcasing the potential applications of AI. But Beijing is also demonstrating its willingness to leverage the technology to solve many of its problems.

To understand the potential benefits and perils, we need to delve a bit deeper into the subject of AI itself. Artificial intelligence essentially falls into two categories: narrow and general. Narrow AI is a type of machine learning that is limited to specifically defined tasks, while general AI refers to totally autonomous intelligence akin to human cognition. General AI remains a distant dream for many, but the real-world implications of narrow AI exist in the present, and China is working diligently to become a world leader in it.

In his book AI Superpowers: China, Silicon Valley, and the New World Order, former Microsoft executive and Google China president Kai-Fu Lee describes how the country began rapid development of AI as a response to AlphaGo, a software program that successfully bested the world's top player in the ancient game of Go back in 2017. That victory, Lee explains, showcased for China's Communist Party (CCP) a field of research and technology with infinite potential.

The revelation was a sea change. In its 2019 Annual Report, the U.S.-China Economic and Security Review Commission noted that the Next Generation AI Development Plan released in 2017 by China's State Council marked a shift in China's approach to AI, from pursuing specific applications to prioritizing AI as foundational to overall economic competitiveness.

The results have been rapid and pronounced. China is still considered to be second in the race to AI (behind the U.S.), but it is quickly gaining traction. As the United Nations World Intellectual Property Organization (WIPO) noted last year, China leads in AI-related publications and patent applications originating from public research institutions, and the gap is shrinking between the U.S. and China in patent requests originating from the private sector.

And because the aggregation of vast swathes of data is what drives the most effective artificial intelligence, China is in a unique position to persevere. With the world's largest population and close to no data privacy protections, the PRC has the potential to develop the world's best AI products.

Beijing is also working hard to maintain its freedom of action in this domain. Back in March, China tried, and nearly succeeded, in installing its candidate as head of the WIPO, a move that would essentially have assured that its lengthy track record of violating intellectual property rights, theft and espionage would not come with any consequences.

Those practices are already raising international hackles. In April of 2020, Bloomberg reported that electric carmaker Tesla is now seeking further legal action to analyze the source code of a competitor's product in China after a former Tesla employee allegedly left the company in 2018 for the Chinese startup, carrying with him secrets from Tesla's self-driving AI, AutoPilot.

But the CCP is also harnessing AI to strengthen its authoritarian state. Against the backdrop of the coronavirus pandemic, the Chinese government has stepped up its repressive domestic practices, including its persecution and detention of Uyghur Muslims in Western China and a broad crackdown on Hong Kong. Worryingly, Chinese advances in AI seem to be empowering these practices, as well as making them more effective.

These dynamics should matter a great deal to the United States, which has stepped up its strategic competition with China in earnest in recent months. China's activism on the AI front, and its attention to this emerging technology, has made abundantly clear that the PRC places tremendous value on dominating the field of AI. Washington should think deeply about what that would mean, in both a political and a technological sense. And then it should get just as serious in this sphere as well.

Ryan Christensen is a researcher at the American Foreign Policy Council in Washington, DC.

See the original post:

China Wants to Lead the World on AI. What Does That Mean for America? - The National Interest

In the AI Age, Being Smart Will Mean Something Completely Different – Harvard Business Review

Executive Summary

To date, many of us have achieved success by being smarter than other people as measured by grades and test scores, beginning from our early days in school. The smart people were those who received the highest scores by making the fewest mistakes.

AI will change that because there is no way any human being can outsmart, for example, IBM's Watson, at least without augmentation. What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement.

Andrew Ng has likened artificial intelligence (AI) to electricity in that it will be as transformative for us as electricity was for our ancestors. I can only guess that electricity was mystifying, scary, and even shocking to them, just as AI will be to many of us. Credible scientists and research firms have predicted that the likely automation of service sectors and professional jobs in the United States will be more than 10 times as large as the number of manufacturing jobs automated to date. That possibility is mind-boggling.

So, what can we do to prepare for the new world of work? Because AI will be a far more formidable competitor than any human, we will be in a frantic race to stay relevant. That will require us to take our cognitive and emotional skills to a much higher level.

Many experts believe that human beings will still be needed to do the jobs that require higher-order critical, creative, and innovative thinking and the jobs that require high emotional engagement to meet the needs of other human beings. The challenge for many of us is that we do not excel at those skills because of our natural cognitive and emotional proclivities: We are confirmation-seeking thinkers and ego-affirmation-seeking defensive reasoners. We will need to overcome those proclivities in order to take our thinking, listening, relating, and collaborating skills to a much higher level.

I believe that this process of upgrading begins with changing our definition of what it means to be smart. To date, many of us have achieved success by being smarter than other people as measured by grades and test scores, beginning in our early days in school. The smart people were those who received the highest scores by making the fewest mistakes.

AI will change that because there is no way any human being can outsmart, for example, IBM's Watson, at least without augmentation. Smart machines can process, store, and recall information faster and better than we humans can. Additionally, AI can pattern-match faster and produce a wider array of alternatives than we can. AI can even learn faster. In an age of smart machines, our old definition of what makes a person smart doesn't make sense.

What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement. The new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning. Quantity is replaced by quality. And that shift will enable us to focus on the hard work of taking our cognitive and emotional skills to a much higher level.

We will spend more time training to be open-minded and learning to update our beliefs in response to new data. We will practice adjusting after our mistakes, and we will invest more in the skills traditionally associated with emotional intelligence. The new smart will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears. Doing so will make it easier to perceive reality as it is, rather than as we wish it to be. In short, we will embrace humility. That is how we humans will add value in a world of smart technology.

Continue reading here:

In the AI Age, Being Smart Will Mean Something Completely Different - Harvard Business Review

Hoffman-Yee research grants focus on AI | Stanford News – Stanford University News

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) today announced six inaugural recipients of the Hoffman-Yee Research Grant Program, a multiyear initiative to invest in research that leverages artificial intelligence (AI) to address real-world problems.

Computer Science Associate Professor Karen Liu and collaborators will research robotic devices to aid in human locomotion using their Hoffman-Yee Grant. (Image credit: Christophe Wu)

The projects were selected for their boldness, ingenuity and potential for transformative impact. The grantees comprise interdisciplinary teams of faculty members, postdoctoral scholars and graduate students spanning the Schools of Business, Education, Engineering, Humanities and Sciences, Law and Medicine.

Philanthropists Reid Hoffman and Michelle Yee are providing foundational support for the grants.

"The Hoffman-Yee Research Grant Program is helping to drive new collaborations across campus, harnessing AI to benefit humanity," said Stanford President Marc Tessier-Lavigne. "Technological advancements must be inextricably linked to research about their potential societal impacts. I am very grateful to Reid and Michelle for their vision and extraordinary generosity in creating this program."

HAI received submissions from 22 different departments and all of Stanford's seven schools. Each of the six teams selected will receive significant funding to enable ambitious research by assisting with hiring students and postdocs, procuring data and equipment, and accessing computational and other resources.

"These projects will initiate and sustain exciting new collaborations across the university," said John Etchemendy, Denning Co-Director of HAI and the Patrick Suppes Family Professor in the School of Humanities and Sciences. "The interdisciplinary teams each apply AI in a novel context to address challenges whose solutions could bring significant benefits to human wellbeing."

The six projects, which were submitted for review before the COVID-19 pandemic, will push the boundaries of how AI can advance education, health care and government. Project goals range from advancing AI technology through a better understanding of human learning, to creating more adaptable, collaborative AI agents for a wide range of assistive tasks, to applying AI to facilitate and improve student learning, elder care and government operations, to creating tools for understanding the history and evolution of concepts.

Reid Hoffman (Image credit: David Yellen)

"Michelle and I are delighted to help enable Stanford HAI to diversify and scale the research community applying artificial intelligence toward a range of major societal issues," said Reid Hoffman. "Extraordinary opportunities for discovery and innovation will result from uniting technologists, humanists and educators together to take on pressing challenges that bridge their respective fields."

An entrepreneur, executive and investor, Reid Hoffman plays an integral role in building many of today's leading consumer technology businesses and is chair of the HAI Advisory Council. In 2003 he co-founded LinkedIn, the world's largest professional networking service. In 2009 he joined Greylock Partners. Reid serves on the boards of multiple companies and nonprofits, including Kiva, Endeavor, CZI Biohub, Do Something and the MacArthur Foundation's 100&Change. Michelle Yee earned her undergraduate degree from Stanford and her doctorate in education from the University of San Francisco.

The Hoffman-Yee Research Grant Program provides each award recipient an initial year of research funding, which can potentially be extended to three years. Each of the six research projects was reviewed carefully for ethical risks and benefits to society and subgroups within society as well as the global community.

"While the algorithms that drive artificial intelligence may appear to be neutral, the data and applications that shape the outcomes of those algorithms are not. What matters are the people building it, why they're building it and for whom. AI research must take into account its impact on people," said Fei-Fei Li, Sequoia Professor of Computer Science, Stanford and Denning Co-Director of Stanford HAI. "That's why these research projects are so promising. Each of them can make a significant difference in the lives of ordinary people, supporting HAI's purpose to improve the human condition."

The projects and principal investigators are:

Intelligent Wearable Robotic Devices for Augmenting Human Locomotion

PI: Karen Liu, Associate Professor of Computer Science. Faculty, postdoctoral scholars and graduate students from Mechanical Engineering, Bioengineering, Orthopedic Surgery and Medicine

Falling injuries among the elderly cost the U.S. health system $50 billion (2015) while causing immeasurable suffering and loss of independence. This research team seeks to develop wearable robotic devices using an AI system that both aids in human locomotion and predicts and prevents falls among older people.

AI Tutors to Help Prepare Students for the 21st Century Workforce

PI: Christopher Piech, Assistant Professor of Computer Science Education. Faculty and postdoctoral scholars from Education, Psychology and Computer Science

The project aims to demonstrate a path to effective, inspiring education that is accessible and scalable. The team will create new AI systems that model and support learners as they work through open-ended activities like writing, drawing, working on a science lab, or coding. The research will monitor learners' motivation, identity and competency to improve student learning. Tested solutions will be implemented in code.org, brick-and-mortar schools, virtual science labs and beyond.

Toward Grounded, Adaptive Communication Agents

PI: Christopher Potts, Professor of Linguistics and, by courtesy, Computer Science. Faculty and postdoctoral scholars from Electrical Engineering, Philosophy, Psychology, Linguistics, Law

This project aims to develop next-generation, language-based virtual agents capable of collaborating with humans on meaningful, challenging tasks such as caring for patients. The research could be particularly impactful for assistive technologies, where a human's behavior and language use will change over repeated interactions with their personal agent.

Curious, Self-aware AI Agents to Build Cognitive Models and Understand Developmental Disorders

PI: Daniel Yamins, Assistant Professor of Psychology and Computer Science. Faculty, postdoctoral scholars and graduate students affiliated with Psychology, Graduate School of Education, Computer Science, School of Medicine

Human children learn about their world and other people as they explore. This project will bring together tools from AI and cognitive and clinical sciences, creating playful, socially interactive artificial agents and improving the understanding and diagnosis of developmental variability, including Autism Spectrum Disorder. In the process, the team hopes to gain insights into building robots that can handle new environments and interact naturally in social settings.

Reinventing Government with AI: Modern Tax Administration

PI: Jacob Goldin, Associate Professor of Law. Faculty, postdoctoral scholars and graduate students from Law, Business, Engineering and Economics

This team seeks to demonstrate how AI-driven, evidence-based learning can benefit U.S. government agencies by driving efficiencies and improving the delivery of services. The team proposes an active-learning system that uses an AI algorithm to decide which tax returns should be prioritized for auditing for a more effective and fairer tax collection system, as sketched below. This research will have implications for a wide range of other governmental contexts, including environmental and health compliance.
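As a toy illustration of that idea (the fields, scores, and selection rule below are invented, not the team's actual method), an active-learning loop audits the returns the current model is least certain about, so each audit yields the most informative new training label:

```python
# Toy active-learning audit selection: prioritize the returns the current
# model is least certain about; audit outcomes become fresh training labels.
# All numbers are invented for illustration.
returns = [
    {"id": "R1", "p_noncompliant": 0.95},  # model confident: likely issue
    {"id": "R2", "p_noncompliant": 0.50},  # model maximally uncertain
    {"id": "R3", "p_noncompliant": 0.10},  # model confident: likely fine
]

def uncertainty(p: float) -> float:
    # Distance from 0.5: smaller means the model is less sure.
    return abs(p - 0.5)

to_audit = sorted(returns, key=lambda r: uncertainty(r["p_noncompliant"]))
print([r["id"] for r in to_audit])  # ['R2', 'R3', 'R1'] -> audit R2 first
```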

An AI Time Machine for Investigating the History of Concepts

PI: Dan Jurafsky, Professor of Humanities, Linguistics and Computer Science. Faculty from English and Digital Humanities, Philosophy, Economics, French, Political Science, History of Science, Sociology, Psychology and Biomedical Data Science

This research will develop new AI technology to examine historical texts in multiple languages to help humanists and social scientists better interpret history and society. Researchers will investigate key questions on morality, immigration, bias, aesthetics and more. Using AI to help analyze how ideas change over time and how thought shapes society could be a breakthrough contribution not only to AI but to the humanities as well.

Read the rest here:

Hoffman-Yee research grants focus on AI | Stanford News - Stanford University News

Why emotion recognition AI can’t reveal how we feel – The Next Web

The growing use of emotion recognition AI is causing alarm among ethicists. They warn that the tech is prone to racial biases, doesn't account for cultural differences, and is used for mass surveillance. Some argue that AI isn't even capable of accurately detecting emotions.

A new study published in Nature Communications has shone further light on these shortcomings.

The researchers analyzed photos of actors to examine whether facial movements reliably express emotional states.

They found that people use different facial movements to communicate similar emotions. One individual may frown when they're angry, for example, but another would widen their eyes or even laugh.

The research also showed that people use similar gestures to convey different emotions, such as scowling to express both concentration and anger.

Study co-author Lisa Feldman Barrett, a neuroscientist at Northeastern University, said the findings challenge common claims around emotion AI:

"Certain companies claim they have algorithms that can detect anger, for example, when what they really have, under optimal circumstances, are algorithms that can probably detect scowling, which may or may not be an expression of anger. It's important not to confuse the description of a facial configuration with inferences about its emotional meaning."

The researchers used professional actors because they have a functional expertise in emotion: their success depends on them authentically portraying a character's feelings.

The actors were photographed performing detailed, emotion-evoking scenarios. For example: "He is a motorcycle dude coming out of a biker bar just as a guy in a Porsche backs into his gleaming Harley" and "She is confronting her lover, who has rejected her, and his wife as they come out of a restaurant."

The scenarios were evaluated in two separate studies. In the first, 839 volunteers rated the extent to which the scenario descriptions alone evoked one of 13 emotions: amusement, anger, awe, contempt, disgust, embarrassment, fear, happiness, interest, pride, sadness, shame, and surprise.

Next, the researchers used the median rating of each scenario to classify them into 13 categories of emotion.
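As a rough sketch of that classification step (the scenarios, emotions, and ratings below are invented; the study's actual rating scale and data live in the paper), one can take the median volunteer rating per emotion and assign each scenario to the emotion with the highest median:

```python
# Sketch of classifying scenarios by the median of volunteer ratings.
# Scenario names, the emotions shown, and all ratings are invented;
# the study used 13 emotion categories.
import statistics

ratings = {
    "biker bar scenario": {"amusement": [2, 1, 2], "anger": [6, 7, 6], "awe": [1, 2, 1]},
    "surprise party scenario": {"amusement": [6, 7, 5], "anger": [1, 1, 2], "awe": [3, 2, 4]},
}

def classify(scenario_ratings: dict) -> str:
    # Median rating per emotion, then the emotion with the highest median.
    medians = {emotion: statistics.median(r) for emotion, r in scenario_ratings.items()}
    return max(medians, key=medians.get)

for name, r in ratings.items():
    print(name, "->", classify(r))
# biker bar scenario -> anger
# surprise party scenario -> amusement
```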

The team then used machine learning to analyze how the actors portrayed these emotions in the photos.

This revealed that the actors used different facial gestures to portray the same categories of emotions. It also showed that similar facial poses didn't reliably express the same emotional category.

The team then asked additional groups of volunteers to assess the emotional meaning of each facial pose alone.

They found that the judgments of the poses alone didn't reliably match the ratings of the facial expressions when they were viewed alongside the scenarios.

Barrett said this shows the importance of context in our assessments of facial expressions:

"When it comes to expressing emotion, a face does not speak for itself."

The study illustrates the enormous variability in how we express our emotions. It also further justifies the concerns around emotion recognition AI, which is already used in recruitment, law enforcement, and education.


Go here to read the rest:

Why emotion recognition AI can't reveal how we feel - The Next Web

Artificially inflated: It’s time to call BS on AI – InfoWorld

First there was "open washing," the marketing strategy for dressing up proprietary software as open source. Next came "cloud washing," whereby datacenter-bound software products masqueraded as cloud offerings. The same happened to big data, with petabyte-deprived enterprises pretending to be awash in data science.

Now we're into AI-washing -- an attempt to make dumb products sound smart.

Judging by the number of companies talking up their amazing AI projects, the entire Fortune 500 went from bozo status to the Mensa society. Not to rain on this parade, but it's worth remembering that virtually all so-called AI offerings today should be defined as "artificially inflated" rather than "artificially intelligent."

As tweeted by Michael McDonough, global director of economic research and chief economist, Bloomberg Intelligence, the number of mentions of artificial intelligence on earnings calls has exploded since mid-2014.

It's possible that in the last three years, the state of AI has accelerated incredibly fast so that nearly every enterprise now has something worthwhile to say on the subject. More likely, everyone wants on the AI bandwagon, and in the absence of mastery, they're marketing.

AI is, after all, incredibly difficult. Yann LeCun, director of AI research at Facebook, said at a recent O'Reilly conference that "machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan."

Most companies have neither the expertise on staff nor the scale to pull this off. Or, at least, not to an extent worthy of talking about AI initiatives on earnings calls.

Developers recognize this even if their earnings-touting executives don't. For example, as an extensive, roughly 8,500-strong developer survey from VisionMobile uncovers, less than one quarter of developers think AI-driven chatbots are currently worthwhile. While chatbots aren't the only expression of AI, they're one of the most visible examples of hype getting out in front of reality.

I witnessed the sound and fury of AI hype firsthand at Mobile World Congress in Barcelona, where I participated in a panel ("The Future of Messaging: Engagement, eCommerce and Bots") that explored the current and future state of AI as applied to messaging and chatbots. Executives from Google, PayPal, and Sprint joined me, and it quickly became clear that the promise of AI has yet to be realized and won't be for some time. Instead of overpromising a near-term AI future, the session seemed to conclude, it would be best for enterprises to focus on small-scale AI projects that deliver simple but effective consumer value.

For example, machine learning/AI can be used to interpret patterns in X-rays, as Dr. Ziad Obermeyer of Harvard Medical School and Brigham and Women's Hospital and Ezekiel Emanuel, Ph.D., of the University of Pennsylvania, posit in a New England Journal of Medicine article. Deep, mind-blowing AI? Nope. Effective (and likely to render a big chunk of the radiologist population under-employed)? Likely.

The trick to making AI work well is data: lots and lots of data. Most companies simply aren't in a position to gather, create, or harness that data. Google, Apple, Amazon, and Facebook, by contrast, can and do, and yet anyone who has used Amazon's Echo or Apple's Siri knows that the output of their mountains of data is still relatively basic. Each of these companies sees the potential, however, and is ramping up efforts to collect and annotate data. Amazon, for example, has 15,000 to 20,000 low-paid people working behind the scenes on labeling snippets of data. Those people are building toward an AI-driven future, but it's still the future.

So let's not get ahead of ourselves. Everyone may be talking about AI, but it's mostly artificial with precious little intelligence. That's OK, so long as we recognize it as such and build simple services that deliver on their promise.

In sum, we don't need an AI revolution. Evolution will do nicely.

The rest is here:

Artificially inflated: It's time to call BS on AI - InfoWorld

AI, AI, Captain! How the Mayflower Autonomous Ship will cross the Atlantic – VentureBeat

While self-driving cars have hogged the headlines for the past few years, other forms of autonomous transport are starting to heat up.

This month, IBM and Promare, a U.K.-based marine research and exploration charity, will trial a prototype of an artificial intelligence (AI)-powered maritime navigation system ahead of a September 16th venture to send a crewless ship across the Atlantic Ocean on the very same route the original Mayflower traversed 400 years ago.

The original Mayflower ship, which in 1620 carried the first English settlers to the U.S., traveled from Plymouth in the U.K. to what is today known as Plymouth, Massachusetts. Mayflower version 1.0 was a square-rigged sail ship, like many merchant vessels of the era, and relied purely on wind and human navigation techniques to find its way to the New World. The Mayflower Autonomous Ship (MAS), on the other hand, will be propelled by a combination of solar- and wind-generated power, with a diesel generator on board as backup.

Moreover, while the first Mayflower traveled at a maximum speed of around 2.5 knots and took some two months to reach its destination, the upgraded version moves at a giddy 20 knots and should arrive in less than two weeks.

The mission, first announced back in October, aims to tackle all the usual obstacles that come with navigating a ship through treacherous waters, except without human intervention.

The onboard AI Captain, as it's called, can't always rely on GPS and satellite connectivity, and speed is integral to processing real-time data. This is why all the AI and navigational smarts must be available locally, making edge computing pivotal to the venture's success.

"Edge computing is critical to making an autonomous ship like the Mayflower possible," noted Rob High, IBM's CTO for edge computing. "The ship needs to sense its environment, make smart decisions about the situation, and then act on these insights in the minimum amount of time, even in the presence of intermittent connectivity, and all while keeping data secure from cyberthreats."

The team behind the new Mayflower has been training the ship's AI models for the past few years, using millions of maritime images collected from cameras in the Plymouth Sound, in addition to other open source data sets.

For machine learning prowess, the ship is using an IBM Power AC922 system, which is used in some of the world's biggest AI supercomputers. Alongside IBM's PowerAI Vision, the Mayflower's AI Captain is built to detect and identify ships, buoys, and other hazards, including debris, and to make decisions about what to do next.

For example, if the MAS encounters a cargo ship that has shed some of its load after colliding with another vessel, the AI Captain will be called into action and can use any combination of onboard sensors and software to circumvent the obstacles. The radar can detect hazards in the water ahead, with cameras providing additional visual data on objects in the water.

Moreover, an automatic identification system (AIS) can tap into specific information about any vessels ahead, including their class, weight, speed, cargo type, and so on. Radio broadcast warnings from the cargo ship can also be accepted and interpreted, with the AI Captain ready to decide on a change of course.

Other data the AI Captain can tap into includes the navigation system and nautical chart server, which provide the current location, speed, course, and route of the ship, as well as attitude sensors for monitoring the state of the sea and a fathometer for water depth.

The onboard vehicle management system also provides crucial data, such as the battery charge level and power consumption, that can be used to determine the best route around a hazardous patch of ocean, with weather forecasts informing the final decision.
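A greatly simplified sketch of how such inputs might be fused into a course decision appears below; every field name and threshold is an invented assumption for illustration, since IBM has not published the AI Captain's internals in this form.

```python
# Greatly simplified sketch of fusing ship-sensor inputs into a course
# decision. Field names and thresholds are invented; this is not IBM's
# actual AI Captain logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorState:
    radar_contact_m: Optional[float]  # distance to nearest radar contact
    camera_sees_debris: bool          # vision-model output
    water_depth_m: float              # fathometer
    battery_pct: float                # vehicle management system
    wave_height_m: float              # attitude sensors / weather forecast

def decide(state: SensorState) -> str:
    if state.radar_contact_m is not None and state.radar_contact_m < 500:
        return "alter course: radar contact ahead"
    if state.camera_sees_debris:
        return "alter course: debris sighted"
    if state.water_depth_m < 10:
        return "alter course: shallow water"
    if state.battery_pct < 20 or state.wave_height_m > 4:
        # Low power or heavy seas: replan for a gentler, shorter route.
        return "replan route: conserve power / avoid weather"
    return "hold course"

print(decide(SensorState(radar_contact_m=None, camera_sees_debris=False,
                         water_depth_m=80.0, battery_pct=85.0,
                         wave_height_m=1.2)))  # -> hold course
```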

Crucially, the AI Captain can communicate vocally with other ships in the vicinity to convey any change in plans.

The MAS ship itself is still being constructed in Gdansk, Poland, and the AI Captain will be tested this month in a manned research ship called the Plymouth Quest, which is owned by the U.K.'s Plymouth Marine Laboratory. The test will essentially determine how the AI Captain performs in real-world scenarios, and feedback will be used to refine the main vessel's machine learning smarts before the September launch.

Maritime transport constitutes around 90% of global trade, as it's the most cost-effective way of transporting goods in bulk. But shipping is widely regarded as a major source of pollution for the planet. Like self-driving cars, a major benefit of electrified autonomous ships is that they reduce emissions while also promising fewer accidents; at least three-quarters of maritime accidents are thought to be caused by human error.

Moreover, crewless ships open the doors to longer research missions, as food and salaries are no longer logistical or budgetary considerations.

There has been a push toward fully automating sea-faring transport in recent years. Back in 2016, news emerged that an unmanned warship called Sea Hunter was being developed by research agency DARPA, which passed the Sea Hunter prototype on to the Office of Naval Research two years later for further iteration. In Norway, a crewless cargo ship called the Yara Birkeland has also been in development for the past few years and is expected to go into commercial operation later in 2020. The Norwegian University of Science and Technology (NTNU) has also carried out trials of a tiny electric driverless passenger ferry.

Elsewhere, Rolls-Royce previously demonstrated a fully autonomous passenger ferry in Finland and announced a partnership with Intel as part of a grand plan to bring self-guided cargo ships to the world's seas by 2025.

So plenty is happening in the self-navigating ship sphere: a recent report from Allied Research pegged the industry at $88 billion today, and it could hit $130 billion within a decade. But while others seek to automate various aspects of shipping, the new Mayflower is designed to be completely self-sufficient and operate without any direct human intervention.

"Many of today's autonomous ships are really just automated robots [that] do not dynamically adapt to new situations and rely heavily on operator override," said Don Scott, CTO of the Mayflower Autonomous Ship. "Using an integrated suite of IBM's AI, cloud, and edge technologies, we are aiming to give the Mayflower full autonomy and are pushing the boundaries of what's currently possible."

Four centuries after the Mayflower carried the Pilgrims across the Atlantic, we could be entering a new era of maritime adventures.

More here:

AI, AI, Captain! How the Mayflower Autonomous Ship will cross the Atlantic - VentureBeat

Too much AI leaves a longing for the human touch – Frederick News Post (subscription)

This quirky 1950s advertising message, posted line-by-line on a series of small roadside signs, isn't what Denis Sverdlov has in mind.

Sverdlov is CEO of Roborace, a company on the verge of putting driverless electric cars, very fast and very smart electric cars, on the world racing circuit. He doesn't plan on watching them go around corners and collide.

Sverdlov says his ultimate aim is to develop Artificial Intelligence technology for ordinary cars for ordinary people who just want to relax and read eBooks on the way to the supermarket. His goal is no crunch, no crash: routine rides without harm to passenger or vehicle.

Not to cast doubt on his stated motive, but he seems to be having a lot of high-speed fun on the way. He and his team of designers, programmers and engineers have already tested driverless racing Robocars that can do 200 mph and avoid bumping into each other.

Their plan is to put 10 of these full-size, electric-powered machines in Roboraces on the same city street and road race courses being used today in piloted Formula E events around the world. And they aim to do it this year.

High-powered electric racers with cockpits occupied by humans have been dueling ever since the first big Formula E race in Beijing in September 2014. So it's possible that the super-slick, futuristic Roborace machines, without superstar drivers like Sébastien Olivier Buemi behind the wheel, may, indeed, be burning up the course before Santa's old-fashioned December 2017 sleigh ride.

And the racing will rapidly get better, Sverdlov says, because the Artificial Intelligence cars will begin to learn on their own, without prompting or help from people.

One example: In a crucial Roborace test, two vehicles were put on a track and allowed to race without human intervention. The competition eventually ended in a crash, but for a 20-lap competition it was a huge success. "What's really, extremely important," Sverdlov told an interviewer, "is that those two cars started to understand each other and change their online path planner." In other words, their electronic control systems started behaving like human drivers.

But will these unmanned e-racers erase traditional auto sports? Will they mean the end of the Indianapolis 500 as civilization has come to know it? Will they put the brakes on the Formula One Grand Prix races of Monaco, Spain, Belgium and Malaysia? Will they drop the finish flag on NASCAR?

Robotics is advancing everywhere, driven by ever-improving Artificial Intelligence. It seems to be taking over more and more pieces of our lives, especially in the workplace.

AI, as it's known, is posing a really big question: Are we going to outrace ourselves? Are we creating machines that ultimately will leave us behind?

Some philosophers and futurists believe this is the most fundamental challenge facing humanity today. They think it's possible we'll lose control of our lives to machines, systems and automated networks that will take over nearly everything. They say artificial intelligence, while it can bring about dramatic improvement in the ways we do things, could also be our ultimate undoing.

And AI is coming on fast. Not quite 50 years ago, back in the 1970s, I took a tour of a Mack Trucks plant in Hagerstown and listened as our guide marveled at a stork-like machine jerking back and forth, spray-painting engine blocks. It did the job so much better than humans, he said, because a mindless, uncomplaining computer was running the show.

That contraption was primitive by today's standards. Entire manufacturing processes, from front to back, will soon be robotic, and soon be common.

We're seeing some resistance to all this in the growing popularity of maker and artisanal effort. Things made by artisans, that is, human beings with their inconsistent, peculiar and even flawed natures, are finding a foothold in the marketplace. But they're often more expensive and harder to get. Will we put up with this inconvenience, or opt for the easy, from-the-automated-factory stuff?

I'm not ready to join the Luddites, but I'm thinking maybe it's time to give horses and horse racing another look. As far as I know, you can't program a jockey and a thoroughbred. They don't go as fast as robocars, but I can understand them.

See original here:

Too much AI leaves a longing for the human touch - Frederick News Post (subscription)

Implementing Illinois AI Video Interview Act: Five Steps Employers Can Take to Address Hidden Questions and Integrate Policies with Existing…

Updated: May 25, 2018

JD Supra is a legal publishing service that connects experts and their content with broader audiences of professionals, journalists and associations.

This Privacy Policy describes how JD Supra, LLC ("JD Supra" or "we," "us," or "our") collects, uses and shares personal data collected from visitors to our website (located at http://www.jdsupra.com) (our "Website") who view only publicly-available content as well as subscribers to our services (such as our email digests or author tools)(our "Services"). By using our Website and registering for one of our Services, you are agreeing to the terms of this Privacy Policy.

Please note that if you subscribe to one of our Services, you can make choices about how we collect, use and share your information through our Privacy Center under the "My Account" dashboard (available if you are logged into your JD Supra account).

Registration Information. When you register with JD Supra for our Website and Services, either as an author or as a subscriber, you will be asked to provide identifying information to create your JD Supra account ("Registration Data"), such as your:

Other Information: We also collect other information you may voluntarily provide. This may include content you provide for publication. We may also receive your communications with others through our Website and Services (such as contacting an author through our Website) or communications directly with us (such as through email, feedback or other forms or social media). If you are a subscribed user, we will also collect your user preferences, such as the types of articles you would like to read.

Information from third parties (such as, from your employer or LinkedIn): We may also receive information about you from third party sources. For example, your employer may provide your information to us, such as in connection with an article submitted by your employer for publication. If you choose to use LinkedIn to subscribe to our Website and Services, we also collect information related to your LinkedIn account and profile.

Your interactions with our Website and Services: As is true of most websites, we gather certain information automatically. This information includes IP addresses, browser type, Internet service provider (ISP), referring/exit pages, operating system, date/time stamp and clickstream data. We use this information to analyze trends, to administer the Website and our Services, to improve the content and performance of our Website and Services, and to track users' movements around the site. We may also link this automatically-collected data to personal information, for example, to inform authors about who has read their articles. Some of this data is collected through information sent by your web browser. We also use cookies and other tracking technologies to collect this information. To learn more about cookies and other tracking technologies that JD Supra may use on our Website and Services please see our "Cookies Guide" page.

We use the information and data we collect principally in order to provide our Website and Services. More specifically, we may use your personal information to:

JD Supra takes reasonable and appropriate precautions to ensure that user information is protected from loss, misuse and unauthorized access, disclosure, alteration and destruction. We restrict access to user information to those individuals who reasonably need access to perform their job functions, such as our third party email service, customer service personnel and technical staff. You should keep in mind that no Internet transmission is ever 100% secure or error-free. Where you use log-in credentials (usernames, passwords) on our Website, please remember that it is your responsibility to safeguard them. If you believe that your log-in credentials have been compromised, please contact us at privacy@jdsupra.com.

Our Website and Services are not directed at children under the age of 16 and we do not knowingly collect personal information from children under the age of 16 through our Website and/or Services. If you have reason to believe that a child under the age of 16 has provided personal information to us, please contact us, and we will endeavor to delete that information from our databases.

Our Website and Services may contain links to other websites. The operators of such other websites may collect information about you, including through cookies or other technologies. If you are using our Website or Services and click a link to another site, you will leave our Website and this Policy will not apply to your use of and activity on those other sites. We encourage you to read the legal notices posted on those sites, including their privacy policies. We are not responsible for the data collection and use practices of such other sites. This Policy applies solely to the information collected in connection with your use of our Website and Services and does not apply to any practices conducted offline or in connection with any other websites.

JD Supra's principal place of business is in the United States. By subscribing to our website, you expressly consent to your information being processed in the United States.

You can make a request to exercise any of these rights by emailing us at privacy@jdsupra.com or by writing to us at:

You can also manage your profile and subscriptions through our Privacy Center under the "My Account" dashboard.

We will make all practical efforts to respect your wishes. There may be times, however, where we are not able to fulfill your request, for example, if applicable law prohibits our compliance. Please note that JD Supra does not use "automatic decision making" or "profiling" as those terms are defined in the GDPR.

Pursuant to Section 1798.83 of the California Civil Code, our customers who are California residents have the right to request certain information regarding our disclosure of personal information to third parties for their direct marketing purposes.

You can make a request for this information by emailing us at privacy@jdsupra.com or by writing to us at:

Some browsers have incorporated a Do Not Track (DNT) feature. These features, when turned on, send a signal that you prefer that the website you are visiting not collect and use data regarding your online searching and browsing activities. As there is not yet a common understanding on how to interpret the DNT signal, we currently do not respond to DNT signals on our site.

For non-EU/Swiss residents, if you would like to know what personal information we have about you, you can send an e-mail to privacy@jdsupra.com. We will be in contact with you (by mail or otherwise) to verify your identity and provide you the information you request. We will respond within 30 days to your request for access to your personal information. In some cases, we may not be able to remove your personal information, in which case we will let you know if we are unable to do so and why. If you would like to correct or update your personal information, you can manage your profile and subscriptions through our Privacy Center under the "My Account" dashboard. If you would like to delete your account or remove your information from our Website and Services, send an e-mail to privacy@jdsupra.com.

We reserve the right to change this Privacy Policy at any time. Please refer to the date at the top of this page to determine when this Policy was last revised. Any changes to our Privacy Policy will become effective upon posting of the revised policy on the Website. By continuing to use our Website and Services following such changes, you will be deemed to have agreed to such changes.

If you have any questions about this Privacy Policy, the practices of this site, your dealings with our Website or Services, or if you would like to change any of the information you have provided to us, please contact us at: privacy@jdsupra.com.

As with many websites, JD Supra's website (located at http://www.jdsupra.com) (our "Website") and our services (such as our email article digests)(our "Services") use a standard technology called a "cookie" and other similar technologies (such as, pixels and web beacons), which are small data files that are transferred to your computer when you use our Website and Services. These technologies automatically identify your browser whenever you interact with our Website and Services.

We use cookies and other tracking technologies to:

There are different types of cookies and other technologies used on our Website, notably:

JD Supra Cookies. We place our own cookies on your computer to track certain information about you while you are using our Website and Services. For example, we place a session cookie on your computer each time you visit our Website. We use these cookies to allow you to log-in to your subscriber account. In addition, through these cookies we are able to collect information about how you use the Website, including what browser you may be using, your IP address, and the URL address you came from upon visiting our Website and the URL you next visit (even if those URLs are not on our Website). We also utilize email web beacons to monitor whether our emails are being delivered and read. We also use these tools to help deliver reader analytics to our authors to give them insight into their readership and help them to improve their content, so that it is most useful for our users.

Analytics/Performance Cookies. JD Supra also uses the following analytic tools to help us analyze the performance of our Website and Services as well as how visitors use our Website and Services:

Facebook, Twitter and other Social Network Cookies. Our content pages allow you to share content appearing on our Website and Services to your social media accounts through the "Like," "Tweet," or similar buttons displayed on such pages. To accomplish this Service, we embed code that such third party social networks provide and that we do not control. These buttons know that you are logged in to your social network account and therefore such social networks could also know that you are viewing the JD Supra Website.

If you would like to change how a browser uses cookies, including blocking or deleting cookies from the JD Supra Website and Services you can do so by changing the settings in your web browser. To control cookies, most browsers allow you to either accept or reject all cookies, only accept certain types of cookies, or prompt you every time a site wishes to save a cookie. It's also easy to delete cookies that are already saved on your device by a browser.

The processes for controlling and deleting cookies vary depending on which browser you use. To find out how to do so with a particular browser, you can use your browser's "Help" function or alternatively, you can visit http://www.aboutcookies.org which explains, step-by-step, how to control and delete cookies in most browsers.

We may update this cookie policy and our Privacy Policy from time-to-time, particularly as technology changes. You can always check this page for the latest version. We may also notify you of changes to our privacy policy by email.

If you have any questions about how we use cookies and other tracking technologies, please contact us at: privacy@jdsupra.com.

See the original post:

Implementing Illinois AI Video Interview Act: Five Steps Employers Can Take to Address Hidden Questions and Integrate Policies with Existing...

AI-Driven Technology to Protect Privacy of Health Data – Analytics Insight

New research derives an AI-based method to protect the privacy of medical images.

On May 24th, researchers from the Technical University of Munich (TUM), Imperial College London, and OpenMined, a non-profit organization, published a paper titled "End-to-end privacy-preserving deep learning on multi-institutional medical imaging."

The research unveiled PriMIA (Privacy-Preserving Medical Image Analysis), which employs securely aggregated federated learning and an encrypted approach toward the data obtained from medical imaging. As the paper states, this technology is a free, open-source software framework. The researchers conducted the experiment on pediatric chest X-rays and used an advanced deep convolutional neural network to classify them.

Although there exist conventional methods to safeguard medical data, they often fail or are easily breakable. For example, centralized data sharing methods have proved inadequate to protect sensitive data from attacks. This nascent technology protects data by using federated learning, wherein only the deep learning algorithm is shared and not the actual medical data. The researchers also applied secure aggregation, which prevents external entities from identifying where the algorithm was trained; nobody can identify the institution where it originated, keeping privacy intact. They also used another technique to ensure that statistical correlations are derived from the data records and not from the individuals contributing the data.
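A minimal sketch of the federated-averaging idea is below, in plain NumPy: each hospital computes a local update on its own data, and only the model weights are averaged. PriMIA itself is far more involved (it encrypts the securely aggregated updates); the toy data and in-the-clear averaging here are illustrative assumptions.

```python
# Toy federated averaging: each hospital trains locally; only weights
# travel, never the raw images. PriMIA additionally encrypts the
# aggregation step, which is omitted here for clarity.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    # One gradient step of logistic regression on a hospital's own data.
    preds = 1 / (1 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
global_w = np.zeros(5)
# Three hospitals, each with private (features, labels) that never leave.
hospitals = [(rng.normal(size=(40, 5)), rng.integers(0, 2, 40)) for _ in range(3)]

for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    # Secure aggregation would average these under encryption; here the
    # server averages in the clear for illustration.
    global_w = np.mean(local_ws, axis=0)

print("global model weights:", np.round(global_w, 3))
```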

According to the paper, this framework is compatible with a wide variety of medical imaging data formats, easily user-configurable, and introduces functional improvements to FL training. It increases flexibility, usability, security, and performance. "PriMIA's SMPC protocol guarantees the cryptographic security of both the model and the data in the inference phase," states the report.

A report by Imperial College London quotes professor Daniel Rueckert, who co-authored the paper: "Our methods have been applied in other studies, but we are yet to see large-scale studies using real clinical data. Through the targeted development of technologies and the cooperation between specialists in informatics and radiology, we have successfully trained models that deliver precise results while meeting high standards of data protection and privacy."

With the advent of technology and the rapid adoption of AI, the healthcare sector has been witnessing a digital boom. With electronic health records and the proliferation of telemedicine, there is an abundance of medical data and images generated each day. To enable better patient monitoring, diagnostics, and availability of data, these medical data are often shared across different points and institutions. This AI-driven privacy-preserving technology has a potential role to play here, as it does not compromise data privacy when sharing happens. And data cannot be traced back to individuals, thus protecting their privacy.


View post:

AI-Driven Technology to Protect Privacy of Health Data - Analytics Insight

Samsung to unveil NEON at CES 2020, teased to be a human-like AI assistant with support for Hindi – India Today

The race for supremacy in the field of Artificial Intelligence (AI) is heating up, with the biggest players in the industry coming up with their own products one after the other. And now Samsung appears to have hinted that it may have a mysterious new product in the pipeline that could be quite special.

Called Neon, the new AI-based product is currently in the works at Samsung Technology & Advanced Research Labs (STAR Labs), an independent entity of Samsung Electronics. While little has been revealed about Neon till now, it will, however, be the second AI assistant from Samsung after it introduced Bixby back in 2017.

As of now, Bixby is supported across a range of products, with a presence not only on Samsung's smartphones but also on many of its IoT-enabled home appliances. Interestingly, the new AI-based assistant will also have an India connection: its creator, STAR Labs, is currently led by India-born Pranav Mistry, who will unveil Neon at CES 2020 in Las Vegas next month.

For Samsung's part, it has remained tight-lipped about Neon, teasing the product only via social media channels. It has also created a website at the domain neon.life that reveals no details beyond a tagline asking, "Have you ever met an 'Artificial'?"

Samsung has also teased that Neon will be a human-level AI, one which may depend heavily on access to a working 5G network. While the teasers do not reveal much, a look at STAR Labs' goals shows the project aims "to secure cutting-edge AI core technologies and platforms (human-level AI with the ability to speak, recognize, and think) to provide new AI-driven experiences and value to its customers."

If this indeed ends up being true, then with Neon we may well have a highly advanced AI assistant on our hands, one that could think and act like a human being and prove very difficult to differentiate from the real thing in the way it interacts with people.

Interestingly, Samsung is also using a number of celebrities to drum up interest in the new product. One of them is Shekhar Kapur, who recently tweeted: "Finally, Artificial Intelligence that will make you wonder which one of you is real. Coming soon from the brilliant mind of @pranavmistry the amazing @neondotlife .. where artificial intelligence ceases to be artificial .. http://neon.life."

More here:

Samsung to unveil NEON at CES 2020, teased to be a human-like AI assistant with support for Hindi - India Today

Real life CSI: Google’s new AI system unscrambles pixelated faces – The Guardian

On the left, 8x8 images; in the middle, the images generated by Google; and on the right, the original 32x32 faces. Photograph: Google

Google's neural networks have achieved the dream of CSI viewers everywhere: the company has revealed a new AI system capable of enhancing an eight-by-eight-pixel image, increasing the resolution 16-fold and effectively restoring lost data.

The neural network could be used to increase the resolution of blurred or pixelated faces, in a way previously thought impossible; a similar system was demonstrated for enhancing images of bedrooms, again creating a 32x32 pixel image from an 8x8 one.

Google's researchers describe the neural network as "hallucinating" the extra information. The system was trained on innumerable images of faces so that it learns typical facial features. A second portion of the system, meanwhile, focuses on comparing 8x8 pixel images with all the possible 32x32 pixel images they could be shrunken versions of.

The two networks working in harmony effectively redraw their best guess of what the original facial image would be. This allows for a huge improvement over old-fashioned up-sampling: where an older system might simply take a block of red in the middle of a face, make it 16 times bigger and blur the edges, Google's system can recognise that the block is likely to be a pair of lips, and draw the image accordingly.
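
To make the "conditioning" half of that design concrete, here is a minimal PyTorch sketch, a hypothetical model for illustration rather than Google's published architecture. It maps an 8x8 input to a 32x32 output with learned transposed convolutions; the real system pairs such a network with a prior network over plausible 32x32 faces.

    import torch
    import torch.nn as nn

    class ConditioningNet(nn.Module):
        """Maps an 8x8 RGB input to a 32x32 best-guess reconstruction."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),  # 8 -> 16
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            )

        def forward(self, x):
            return self.net(x)

    lowres = torch.rand(1, 3, 8, 8)    # stand-in for an 8x8 face crop
    highres = ConditioningNet()(lowres)
    print(highres.shape)               # torch.Size([1, 3, 32, 32])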

Of course, the system isn't capable of magic. While it can make educated guesses based on knowledge of what faces generally look like, it sometimes won't have enough information to redraw a face that is recognisably the same person as in the original image. And sometimes it just plain screws up, creating inhuman monstrosities. Nonetheless, the system works well enough to fool people around 10% of the time for images of faces.

Running the same system on pictures of bedrooms fools people even more often: test subjects were unable to correctly pick the original image almost 30% of the time. A score of 50% would indicate the system was creating images indistinguishable from reality.

Although this system exists at the extreme end of image manipulation, neural networks have also presented promising results for more conventional compression purposes. In January, Google announced it would use a machine learning-based approach to compress images on Google+ four-fold, saving users bandwidth by limiting the amount of information that needs to be sent. The system then makes the same sort of educated guesses about what information lies between the pixels to increase the resolution of the final picture.

Excerpt from:

Real life CSI: Google's new AI system unscrambles pixelated faces - The Guardian

How AI will automate cybersecurity in the post-COVID world – VentureBeat

By now, it is obvious to everyone that widespread remote working is accelerating the trend of digitization in society that has been happening for decades.

What takes longer for most people to identify are the derivative trends. One such trend is that increased reliance on online applications means cybercrime is becoming even more lucrative. For many years now, online theft has vastly outstripped physical bank robberies. Willie Sutton said he robbed banks "because that's where the money is." If he applied that maxim even 10 years ago, he would certainly have become a cybercriminal, targeting the websites of banks, federal agencies, airlines, and retailers. According to the 2020 Verizon Data Breach Investigations Report, 86% of all data breaches were financially motivated. Today, with so much of society's operations online, cybercrime is the most common type of crime.

Unfortunately, society isn't evolving as quickly as cybercriminals are. Most people think they are only at risk of being targeted if there is something special about them. This couldn't be further from the truth: cybercriminals today target everyone. What are people missing? Simply put, the scale of cybercrime is difficult to fathom. The Herjavec Group estimates cybercrime will cost the world over $6 trillion annually by 2021, up from $3 trillion in 2015, but numbers that large can be a bit abstract.

A better way to understand the issue is this: In the future, nearly every piece of technology we use will be under constant attack, and this is already the case for every major website and mobile app we rely on.

Understanding this requires a Matrix-like radical shift in our thinking. It requires us to embrace the physics of the virtual world, which break the laws of the physical world. For example, in the physical world, it is simply not possible to try to rob every house in a city on the same day. In the virtual world, it's not only possible, it's being attempted on every house in the entire country. I'm not referring to a diffuse threat of cybercriminals always plotting the next big hacks. I'm describing constant activity that we see on every major website: the largest banks and retailers receive millions of attacks on their users' accounts every day. Just as Google can crawl most of the web in a few days, cybercriminals attack nearly every website on the planet in that time.

The most common type of web attack today is called credential stuffing. This is when cybercriminals take stolen passwords from data breaches and use tools to automatically log in to every matching account on other websites, taking over those accounts and stealing the funds or data inside them. These account takeover (ATO) events are possible because people frequently reuse their passwords across websites. The spate of gigantic data breaches in the last decade has been a boon for cybercriminals, reducing cybercrime success to a matter of reliable probability: in rough terms, for every 100 stolen passwords you try on a given website, one will unlock someone's account. And data breaches have given cybercriminals billions of users' passwords.
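
That probability turns directly into expected yield for attackers. A back-of-the-envelope calculation, using a hypothetical breach size and the ~1% hit rate from the article's rough figure:

    # Hypothetical breach haul; the ~1% hit rate is the article's rough figure.
    stolen_credentials = 1_000_000
    reuse_success_rate = 1 / 100

    expected_takeovers = stolen_credentials * reuse_success_rate
    print(f"Expected account takeovers per target site: {expected_takeovers:,.0f}")
    # -> Expected account takeovers per target site: 10,000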

Above: Attacks against financial services, 2017-2019. Source: F5 Security Incident Response Team

What's going on here is that cybercrime is a business, and growing a business is all about scale and efficiency. Credential stuffing is only a viable attack because of the large-scale automation that technology makes possible.

This is where artificial intelligence comes in.

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing: CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. A Google study a few years ago found that machine learning-based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, along with other CAPTCHA-solving technology, is weaponized by cybercriminals, who include it in their credential stuffing tools.
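
For a sense of how accessible such OCR has become, here is a minimal sketch using the open-source pytesseract wrapper. It is an illustrative stand-in, not the models from Google's study, and it requires the Tesseract engine to be installed locally:

    from PIL import Image, ImageDraw
    import pytesseract

    # Render a simple text image, a (very weak) stand-in for a CAPTCHA.
    img = Image.new("RGB", (240, 60), "white")
    ImageDraw.Draw(img).text((10, 20), "n0t a r0bot", fill="black")

    # The OCR engine returns its best guess at the text in the image.
    print(pytesseract.image_to_string(img))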

Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes. They do this with a fully automated attack infrastructure, using the same DevOps techniques that are popular in the legitimate business world. This is no surprise, since running such a criminal system is similar to operating a major commercial website, and cybercrime-as-a-service is now a common business model. AI will be further infused throughout these applications over time to help them achieve greater scale and to make them harder to defend against.

So how can we protect against such automated attacks? The only viable answer is automated defenses on the other side. Here's what that evolution will look like as a progression:

Right now, the long tail of organizations is at level 1, but sophisticated organizations are typically somewhere between levels 3 and 4. In the future, most organizations will need to be at level 5. Getting there successfully across the industry requires companies to evolve past old thinking. Companies with the "war for talent" mindset of hiring huge security teams have started pivoting to also hire data scientists to build their own AI defenses. This might be a temporary phenomenon: while corporate anti-fraud teams have been using machine learning for more than a decade, the traditional information security industry has only flipped in the past five years from curmudgeonly cynicism about AI to excitement, so it might be over-correcting.

But hiring a large AI team is unlikely to be the right answer, just as you wouldn't hire a team of cryptographers. Such approaches will never reach the efficacy, scale, and reliability required to defend against constantly evolving cybercriminal attacks. Instead, the best answer is to insist that the security products you use integrate with your organizational data so they can do more with AI. Then you can hold vendors accountable for false positives, false negatives, and the other challenges of getting value from AI. After all, AI is not a silver bullet; it is not sufficient simply to be using AI for defense, it has to be effective.

The best way to hold vendors accountable for efficacy is by judging them based on ROI. One of the beneficial side effects of cybersecurity becoming more of an analytics and automation problem is that the performance of all parties can be more granularly measured. When defensive AI systems create false positives, customer complaints rise. When there are false negatives, ATOs increase. And there are many other intermediate metrics companies can track as cybercriminals iterate with their own AI-based tactics.

If you're surprised that the post-COVID internet sounds like it's going to be a Terminator-style battle of good AI vs. evil AI, I have good news and bad news. The bad news is, we're already there to a large extent. For example, among major retail sites today, around 90% of login attempts typically come from cybercriminal tools.

But maybe that's the good news, too, since the world obviously hasn't fallen apart yet. The industry is moving in the right direction, learning quickly, and many organizations already have effective AI-based defenses in place. But more work is required in terms of technology development, industry education, and practice. And we shouldn't forget that sheltering in place has given cybercriminals more time in front of their computers, too.

Shuman Ghosemajumder is Global Head of AI at F5. He was previously CTO of Shape Security, which was acquired by F5 in 2020, and was Global Head of Product for Trust & Safety at Google.

Read more:

How AI will automate cybersecurity in the post-COVID world - VentureBeat

A voice-over artist asks: Will AI take her job? – WHYY

This story is from The Pulse, a weekly health and science podcast.

Subscribe on Apple Podcasts, Stitcher or wherever you get your podcasts.

My name is Nikki Thomas, and I am a voice-over artist. I speak into a microphone, and my voice is captured. I can change my accent. My pitch. My mood.

But it's still me, right? Until it's not. Because I am being replaced by my own voice: an AI version of my voice.

It starts with TTS, or text-to-speech. That's the same technology used to create Siri or Alexa. It captures a human voice and then artificially replicates that sound to read any digital text out loud.
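
Basic TTS is a few lines of code these days. This sketch uses the open-source pyttsx3 library, which drives the operating system's built-in voices; it illustrates plain TTS, not the voice-cloning pipeline described here:

    import pyttsx3

    engine = pyttsx3.init()
    engine.setProperty("rate", 150)   # speaking speed in words per minute
    engine.say("Hello, this is a synthetic voice reading digital text out loud.")
    engine.runAndWait()               # blocks until the speech finishes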

I got hired for a TTS job. I delivered my spoken words to the client. Then a few weeks later, I could type words into a text box, and my voice clone said them back to me.

I asked longtime client and audio engineer Daren Lake to compare the two. And while concluding that the AI voice actually sounded pretty good, he could still hear that a robot made it.

"It's got these warbling artifacts. I call it the zoom effect or the matrix sound," he said. Despite thinking I might be able to get away with it, the engineer in him didn't like it.

So listeners can tell the difference now. But when this technology gets better, could this become my new way of working? I record just a few voice samples and, before I know it, an 11-hour audiobook is produced with a voice that sounds just like mine, in the time it takes me to copy and paste a document? It would be much more accurate and reliable. An AI voice never fatigues or needs a week to recover from the flu.

Could I still consider myself a voice-over artist? If there's even a role for me. How will artificial intelligence affect creativity and artistry?

I took the question to Sarah Rose Siskind, one of the creators of a robot named Sophia. Sarah laughed when I asked if she was threatened by a robot taking her job. She told me about an 11-hour day spent getting Sophia to wink, reason enough for her to believe her job was not at risk.

Sophia the Robot is an interviewer, guest speaker and host with over 16,000 YouTube subscribers. Siskind was on the writing team and worked with a group to shape Sophias personality.

"An artist is a major component of her personality because we wanted her personality to be fascinated with areas not traditionally considered the domain of robots," Siskind said. However, it is hard to describe Sophia outside of her relationship to the humans who came up with the idea of creating her.

Visit link:

A voice-over artist asks: Will AI take her job? - WHYY

AI is our best weapon against terrorist propaganda – The Next Web – TNW

In the past four months alone, there have been three separate terrorist attacks across the UK (and possibly a fourth reported just today), and that's after implementing efforts that the Defense Secretary claims helped thwart 12 other incidents there in the previous year.

That spells a massive challenge for companies investing in curbing the spread of terrorist propaganda on the web. And although it would most certainly be impossible to stamp out the threat across the globe, it's clear that we can do a lot more to tackle it right now.

Last week, we looked at some steps Facebook is taking to wipe out content promoting or sympathizing with terrorists' causes, which involve the use of AI and reports from users, as well as the skills of a team of 150 experts, to identify and take down hate-filled posts before they spread across the social network.

Now, Google has detailed the measures it's implementing in this regard as well. Similar to Facebook, it's targeting hateful content with machine learning-based systems that can sniff it out, and also working with human reviewers and NGOs in an attempt to bring a nuanced approach to censoring extremist media.

The trouble is, battling terrorism isn't solely what these companies are about; they're concerned with growing their user bases and increasing revenue. The measures they presently implement will help sanitize their platforms so they're more easily marketable as safe places to consume content, socialize, and shop.

Meanwhile, the people who spread propaganda online dedicate their waking hours to finding ways to get their message out to the world. They can, and will continue to innovate so as to stay ahead of the curve.

Ultimately, what's needed is a way to reduce the effectiveness of this propaganda. There are a host of reasons why people are susceptible to radicalization, and those may be far beyond the scope of the likes of Facebook to tackle.

AI is already being used to identify content that human response teams review and take down. But I believe that its greater purpose could be to identify people who are exposed to terrorist propaganda and at risk of being radicalized. To that end, there's hope in the form of measures Google is working on. In the case of its video platform YouTube, the company explained in a blog post:

"Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the Redirect Method more broadly across Europe.

"This promising approach harnesses the power of targeted online advertising to reach potential ISIS recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages."

In March, Facebook began testing algorithms that can detect warning signs of US users suffering from depression and possibly contemplating self-harm or suicide. To do this, it looks at whether people frequently post messages describing personal pain and sorrow, or whether several responses from their friends read along the lines of, "Are you okay?" The company then contacts at-risk users to suggest channels they can seek out for help with their condition.

I imagine similar tools could be developed to identify people who might be vulnerable to becoming radicalized, perhaps by analyzing the content of the posts they share and consume, as well as the networks of people and groups they engage with.
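
As a purely illustrative sketch of what such content analysis might look like, the snippet below trains a TF-IDF text classifier on hypothetical, human-labeled posts and produces a risk score for triage by human reviewers. Everything in it, the data, the labels, and the framing, is assumed; it is not a description of any deployed system.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical posts with hypothetical labels from human reviewers
    # (1 = concerning, 0 = benign).
    posts = [
        "post about a football match",
        "post echoing extremist propaganda",
        "post sharing a cooking recipe",
        "post linking to recruitment material",
    ]
    labels = [0, 1, 0, 1]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    # Scores are for routing to human reviewers, not for automated action.
    print(model.predict_proba(["another new post"])[:, 1])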

The ideas spread by terrorists are only as powerful as they are widely accepted. It looks like we'll constantly find ourselves trying to outpace new methods of spreading propaganda; what might help more is a way to reach the people who are processing these ideas, accepting them as truth, and altering the course of their lives. With enough data, it's possible that AI could help, but in the end we'll need humans to talk to humans in order to fix what's broken in our society.

Naturally, the question of privacy will crop up at this point, and it's one we'll have to ponder before giving up our rights. But it's certainly worth exploring our options if we're indeed serious about quelling the spread of terrorism across the globe.

Here is the original post:

AI is our best weapon against terrorist propaganda - The Next Web - TNW

Nuclear Fusion and Artificial Intelligence: the Dream of Limitless Energy – AI Daily

Ever since the 1930s, when scientists (namely Hans Bethe) discovered that nuclear fusion was possible, researchers have striven to initiate and control fusion reactions to produce useful energy on Earth. The best example of a fusion reaction is at the core of stars like the Sun, where hydrogen atoms fuse into helium, releasing the enormous energy that powers the star's heat and light. On Earth, scientists must heat and confine plasma, an ionised state of matter similar to a gas, to make particles fuse and release their energy. Unfortunately, fusion reactions are very difficult to start on Earth because they require conditions similar to those in the Sun, namely very high temperature and pressure, and scientists have been searching for a solution for decades.

In May 2019, a workshop on how fusion research could be advanced using machine learning was held, jointly supported by the Department of Energy's Office of Fusion Energy Sciences (FES) and Office of Advanced Scientific Computing Research (ASCR). In their report, the participants discuss seven 'priority research opportunities':

'Science Discovery with Machine Learning' involves bridging gaps in theoretical understanding by identifying missing effects in large datasets, accelerating hypothesis generation and testing, and optimising experimental planning. Essentially, machine learning is used to support and accelerate the scientific process itself.

'Machine Learning Boosted Diagnostics' is where machine learning methods are used to maximise the information extracted from measurements, systematically fuse multiple data sources, and infer quantities that are not directly measured. Classification techniques, such as supervised learning, could be applied to the data extracted from diagnostic measurements.

'Model Extraction and Reduction' covers the construction of reduced models of fusion systems and the acceleration of computational algorithms. Effective model reduction can shorten computation times and allow simulations (of a tokamak fusion reactor, for example) to run faster than real time (see the sketch after this list).

'Control Augmentation with Machine Learning'. Three broad areas of plasma control research would benefit significantly from machine learning: control-level models, real-time data analysis algorithms, and the optimisation of plasma discharge trajectories for control scenarios. Using AI to improve the control mathematics could manage uncertainty in the calculations and ensure better operational performance.

'Extreme Data Algorithms' involves finding methods to manage the volume and velocity of data that will be generated by fusion experiments and simulations.

'Data-Enhanced Prediction' will help monitor the health of the plant system and predict faults, such as disruptions, which it is essential to mitigate.

'Fusion Data Machine Learning Platform' is a system that can manage, format, curate, and enable access to experimental and simulation data from fusion models, for optimal usability by machine learning algorithms.
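
To illustrate the model-reduction idea mentioned above, here is a minimal sketch (with synthetic stand-in data) of proper orthogonal decomposition via the SVD: simulation "snapshots" are compressed onto a few dominant modes, the kind of reduction that lets surrogate models run faster than real time.

    import numpy as np

    rng = np.random.default_rng(0)
    # 200 hypothetical plasma-state snapshots, each with 500 degrees of freedom
    snapshots = rng.standard_normal((500, 200))

    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    k = 10                                   # keep the 10 dominant modes
    reduced = U[:, :k].T @ snapshots         # each state now lives in 10 dims
    reconstructed = U[:, :k] @ reduced

    # Random data has no low-rank structure, so the error here is large;
    # real simulation snapshots compress far better with few modes.
    err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
    print(f"Relative reconstruction error with {k} modes: {err:.3f}")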

Read more:

Nuclear Fusion and Artificial Intelligence: the Dream of Limitless Energy - AI Daily

Why The Future of Cybersecurity Needs Both Humans and AI Working Together – Security Boulevard

As we look to the future of cybersecurity, we must consider the recent past and understand what the pandemic has taught us about our security needs.

Many cybersecurity platforms proved inadequate when a large percentage of the world's workforce abruptly shifted to remote work in the spring of 2020. Companies found themselves fighting against the limitations of their own cybersecurity platforms.

Modern systems enhanced with self-learning AI capabilities have fared best in the face of the pandemic's impact on networking.

For others, immediate, manual interventions were the only thing standing between enterprise security and the bad actors who had been standing by waiting for a global event of this scale.

They swooped in almost immediately, targeting governments, hospital systems, and a wide swath of commercial enterprises. Everything from ransomware to DDoS to phishing schemes ramped up right alongside the upheaval so many companies were experiencing in the early days of the pandemic.

Many inadequate systems were enhanced with some form of AI but relied on what employees had taught them. No one could have predicted such a dramatic shift in behavior, and systems trained to alert on unexpected behavior, like a sudden rush of remote connections, floundered.

Security analysts were unable to keep up with the constant stream of false positives. Threat hunting is time-consuming for teams under typical network conditions. The pandemic exacerbated this challenge.

As companies examine their security systems, the question they'll need to answer isn't "Should we bring AI on board?" but rather "What kind, and how much, AI do we need?"

A recent WhiteHat Security survey revealed that more than 70 percent of respondents cited AI-based tools as contributing to more efficiency. More than 55 percent of mundane tasks have been replaced by AI, freeing up analysts for other departmental tasks.

Still, not all enterprises or employees are excited by the prospect of bringing more AI on board, especially AI that requires less intervention. This is an understandable response: employees worry that AI will replace their jobs.

Multitalented human employees are not only part of the self-learning AI solution; they are integral to it. Respondents to the WhiteHat survey cited creativity and experience as critical for adequate security.

A combined approach looks like the most reliable path forward for cybersecurity. Security teams that incorporate AI to handle mundane tasks and reduce overarching issues like false positives, while keeping focus on the human element, will fare better.

Third-wave, self-supervised AI platforms handle unusual network activity with more nuance. When the shift to remote work hit their networks, self-learning AI quickly established a new normal. Instead of triggering hundreds or thousands of false positives, these systems rapidly adjusted and started looking for behavior that didn't match the new frame of reference.
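
A toy sketch of this kind of self-adjusting baseline (an assumption for illustration, not MixMode's algorithm): an exponentially weighted estimate of "normal" keeps adapting, so a sustained shift such as a rush of remote connections alerts briefly and then becomes the new normal.

    def make_detector(alpha=0.05, threshold=3.0):
        state = {"mean": None, "var": 1.0}

        def observe(x):
            if state["mean"] is None:
                state["mean"] = x
                return False
            # Score the observation against the current baseline...
            deviation = abs(x - state["mean"]) / (state["var"] ** 0.5 + 1e-9)
            # ...then adapt the baseline toward it either way.
            diff = x - state["mean"]
            state["var"] = (1 - alpha) * (state["var"] + alpha * diff * diff)
            state["mean"] += alpha * diff
            return deviation > threshold

        return observe

    detect = make_detector()
    traffic = [100] * 50 + [400] * 50   # sudden jump to remote-work levels
    alerts = [i for i, x in enumerate(traffic) if detect(x)]
    print(alerts)                        # alerts cluster at the shift, then stop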

In the meantime, security analysts could focus on shoring up vulnerabilities created by the pandemic in other ways.

Creative problem solving has never been more crucial for teams facing the unprecedented challenges of today. Qualities like intuition and experience-based decision-making are invaluable, and even the most advanced AI cannot replace them.

What machines can do is augment the important, nuanced work that human security professionals do. Talented security analysts waste time sifting through false positives and handling many other mundane tasks while keeping a constant eye on the network.

Tools that reduce manual interventions also reduce errors and improve employee satisfaction.

Machines will never be able to entirely replicate or take over the work security professionals do, so its essential for companies to look for security platforms that underscore the talents of human security analysts. Security teams that view AI as one part of a complete, multi-faceted approach will benefit the most from these improvements.

Future-facing companies must evaluate their ability to weather the cybersecurity emergencies of tomorrow. Typical AI-enhanced platforms can help but are fundamentally limited. Without a complete understanding of your network's baseline and how it can change in response to unexpected events, no security platform can detect every threat.

MixMode's third-wave AI solution develops an accurate, evolving baseline of network behavior and then responds intelligently to aberrations and unexpected network behavior.

Read more from the original source:

Why The Future of Cybersecurity Needs Both Humans and AI Working Together - Security Boulevard

DeepMind compares the way children and AI explore – VentureBeat

In a preprint paper, researchers at Alphabet's DeepMind and the University of California, Berkeley propose a framework for comparing the ways children and AI agents learn about the world. The work, motivated by research suggesting that children's early learning supports behaviors later in life, could help close the gap between AI and humans when it comes to acquiring new abilities. For instance, it might lead to robots that can pick and pack millions of different kinds of products while avoiding various obstacles.

Exploration is a key feature of human behavior, and recent evidence suggests children explore their surroundings more than adults do. This is thought to translate into more learning, enabling powerful, abstract task generalization, a capability AI agents could tangibly benefit from. For instance, in one study, preschoolers who played with a toy developed a theory about how it functioned, such as determining whether its blocks worked based on their color, and used this theory to make inferences about a new toy or block they hadn't seen before. AI can approximate this kind of domain and task adaptation, but it struggles without a degree of human oversight and intervention.

The DeepMind approach builds an experimental setup atop DeepMind Lab, DeepMind's Quake-based learning environment comprising navigation and puzzle-solving tasks for learning agents. The tasks require physical or spatial navigation skills and are modeled after games children play. In the setup, children interact with DeepMind Lab through a custom Arduino-based controller, which exposes the same four actions agents would use: move forward, move back, move left, and turn right.

During experiments approved by UC Berkeleys institutional review board, the researchers attempted to determine two things:

In one test, children were told to complete two mazes one after another, each with the same layout. They explored freely in the first maze, but in the second they were told to look for a gummy.

The researchers say that in the no-goal condition (the first maze), the children's strategies closely resembled that of a depth-first search (DFS) AI agent, which pursues an unexplored path until it reaches a dead end and then backtracks to explore the last path it saw. The children made choices consistent with DFS 89.61% of the time, compared with 96.04% of the time in the goal condition (the second maze). Moreover, children who explored less than their peers took the longest to reach the goal (95 steps on average), while those who explored more found the gummy in the least amount of time (66 steps).
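
For reference, depth-first search itself is only a few lines. This sketch (a generic version on a toy maze graph, not DeepMind's code) commits to one path until it dead-ends, then backtracks to the most recent unexplored branch, the pattern the children's choices resembled.

    def dfs_explore(maze, start):
        """maze: dict mapping each cell to its list of open neighboring cells."""
        visited, order, stack = set(), [], [start]
        while stack:
            cell = stack.pop()
            if cell in visited:
                continue
            visited.add(cell)
            order.append(cell)
            # Push unvisited neighbors; the last one pushed is explored next,
            # so one branch is followed to a dead end before backtracking.
            stack.extend(n for n in maze[cell] if n not in visited)
        return order

    # A tiny corridor with one side branch.
    maze = {
        "A": ["B"],
        "B": ["A", "C", "D"],
        "C": ["B"],
        "D": ["B", "E"],
        "E": ["D"],
    }
    print(dfs_explore(maze, "A"))   # ['A', 'B', 'D', 'E', 'C']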

The team notes that these behaviors contrast with the techniques used to train AI agents, which often depend on the agent stumbling upon an interesting area by chance and then being encouraged to revisit it until it is no longer interesting. Unlike humans, who are prospective explorers, AI agents are retrospective.

In another test, children aged four to six were told to complete two mazes in three phases. In the first phase, they explored the maze in a no-goal condition, a sparse condition with a goal and no immediate rewards, and a dense condition with both a goal and rewards leading up to it. In the second phase, the children were tasked with once again finding the goal item, which was in the same location as during exploration. In the final phase, they were asked to find the goal item but with the optimal route to it blocked.

Initial data suggests that children are less likely to explore an area in the dense-rewards condition, according to the researchers. However, the lack of exploration doesn't hurt the children's performance in the final phase. This isn't true of AI agents: typically, dense rewards make agents less incentivized to explore and lead to poor generalization.

"Our proposed paradigm [allows] us to identify the areas where agents and children already act similarly and those in which they do not," concluded the coauthors. "This work only begins to touch on a number of deep questions regarding how children and agents explore ... In asking [new] questions, we will be able to acquire a deeper understanding of the way that children and agents explore novel environments, and how to close the gap between them."

More here:

DeepMind compares the way children and AI explore - VentureBeat

IBM, Salesforce Strike Global Partnership on Cloud, AI – Fortune

Are two clouds better than one?

How about two nerdily-named artificial intelligence platforms?

According to IBM and Salesforce, the answer to both of those questions is yes.

The two Fortune 500 companies on Monday afternoon revealed a sweeping global strategic partnership that aligns one iconic company's multiyear turnaround effort with another's staggering growth ambitions. Under the terms of the deal, IBM and Salesforce will integrate their artificial intelligence platforms (Watson and Einstein, respectively) and some of their software and services (e.g., a Salesforce component to ingest The Weather Company's meteorological data). IBM will also deploy Salesforce Service Cloud internally in a sign of goodwill.

Why not go it alone? Fortune spoke on the phone with IBM CEO Ginni Rometty and Salesforce CEO Marc Benioff to get a better understanding of the motives behind the deal. What follows is a transcript of that conversation, edited and condensed.

Fortune: Hi, guys. So what's this all about?

Benioff: It's great to connect with you again. Artificial intelligence is really accelerating our customers' success, and they're finding tremendous value in this new technology. The spring release of Salesforce Einstein has opened our eyes to what's possible. We now have thousands of customers who have deployed this next-generation artificial intelligence capability. I'll tell you, specifically with our Sales Cloud customers, it creates this incredible level of productivity. Sales executives are way more productive than ever before; the ability to do everything from lead scoring to opportunity insights really opened my eyes to what is possible. So the more value in artificial intelligence we can provide our customers, the more successful they'll be, which is why we're doing this relationship with IBM.

We're able to give our customers the incredible capabilities of not only Einstein but Watson. When you look at the industries we cater to (retail, financial services, healthcare), the data and insights that Watson can provide our customers are really incredible. And we're also thrilled that IBM has agreed to use Salesforce products internally as well. This is really taking our relationship to a whole new level.

Rometty: Andrew, thank you for taking the time. This announcement is both strategic and significant. I do think it's really going to take AI further into the enterprise. I think of 2017 as the year when we're going to see AI hit the world at scale. It's the beginning of an era that's going to run a decade or decades in front of us. Marc's got thousands of clients; by the end of this year we'll have a billion people touched by Watson. We both share that vision. An important part of it is the idea that every professional can be aided by this kind of technology. It takes all the noise and turns it into something on which they can take action. It isn't just a sales process; we're going to link other processes across a company. We're talking about being able to augment what everyone does: augment human intelligence. Together, this will give us the most complete understanding of a customer anywhere.

For our joint customers, to me, this is a really big deal. Take an insurance company; Marc's got plenty of them as clients. You link to insights around weather, hook that into a particular region, tell people to move their cars inside because of hail. You might even change a policy. These two things together really do allow clients to be differentiated.

This is the beginning of a journey together.

I thought this was the brainiest deal I've ever heard of, with Watson and Einstein together.

Rometty: It's good comedy.

Like any two large tech companies, you compete in some areas and collaborate in others; you're frenemies. Why did you engage in this partnership? Any executive asks themselves: build, buy, or partner. Why partner this time?

Benioff: I'll give you my honest answer here, which is that I've always been a huge fan of IBM; Ginni knows that. When I look at pioneering values in business, at companies that have done it right and really stuck to their principles over generations, I really look to IBM as a company that has deeply inspired me personally as I built Salesforce over the last 18 years. We're going to be 18 years old on March 8th. When I look at what we've gone through in the last two decades, I really think that it's our values that have guided us, and those values have been inspired by many of the things at IBM.

Number two is, Ginni made a strategic decision to acquire Bluewolf, a company that we had worked very hard to nurture and incubate over a very long period of time. It really demonstrated to me that the opportunity to form a strategic relationship with IBM was possible. We both have this incredible vision for artificial intelligence, but we're coming at it from very different areas. [Salesforce is] coming at it from a declarative standpoint, expressed through our platform, for our customer relationship management system. IBM's approach, which is pioneering, especially when it comes to key verticals like retail or finance or healthcare, is a complement to ours. These are the best of both worlds for artificial intelligence. These are the two best players coming together. We have almost no overlap in our businesses. We really have a desire to make our customers more successful.

Rometty: Beautifully said. And I'll only add a couple of points. We share values not only as companies but in terms of how we look at our customers: we share over 5,000 joint clients. But more importantly, think about this era of AI. There are different approaches you can take. What Marc's done with Einstein, think of it as CRM as a process. What we've done with Watson, think of it as industry in depth. We do have very little overlap. The reason we talk about Watson as a platform is for it to be integrated with things like what Marc's doing.

Let me ask you about AI. It's been in development for decades, but the current wave is nascent. How do you each see AI as part of the success of your companies? It's a capability; no one goes to the store to buy AI. Hopefully it solves their problems. But AI can be anything.

Rometty: I view AI as fundamental to IBM. Watson on the IBM Cloud is a fundamental and enduring platform. We've built platforms for ourselves before: mainframe, middleware, managed services. This is now the era of AI. It will be a silver thread through all of what IBM does.

Is it fair to say that you guys aren't trying to compete on AI? I don't mean between you; I mean within the greater industry.

Rometty: We're absolutely complementary. Clients will make some architectural decisions here. Everyone's going to pick some platforms to use. They will pick them around AI. By the way, there are stages: the most basic is machine learning, then AI, then cognitive [computing]. What we're doing with Marc goes all the way into cognitive. Just to be clear.

Benioff: I could not agree more. We brought our customers into the cloud, then into the social world, then into the mobile world. Now we're bringing them into the AI world.

This is really beyond my wildest dreams in terms of what's possible today. And by the way, the fact that we're able to replace Microsoft's products [at IBM] is a bonus for us. (laughs)

Read more from the original source:

IBM, Salesforce Strike Global Partnership on Cloud, AI - Fortune

5G, AI & IoT : IBM and Verizon Business Close to Edge of "Virtually Mobile" – AiThority

IBM and Verizon Business are collaborating on a global scale to bring AI capabilities to the enterprise edge for 5G networking. There has been a tectonic shift in the enterprise edge industry, which is leveraging AI and Internet of Things (IoT) capabilities to push the bar higher for 5G networking. With Verizon Business, IBM intends to put AI and IoT capabilities at the center of every cloud-driven enterprise digital transformation.

5G technology is the mother of all emerging technologies.

We are already witnessing a rise in the number of edge devices used in fintech, healthcare and manufacturing services, and smart city architecture built around 5G networking platforms. These are all part of a larger global picture: the future of Industry 4.0. AI alone couldn't have helped companies achieve all the promises made in the first two decades of the century. It's a different industry altogether, with exploding Big Data pushing hard on AI, machine learning, telecom, and internet connectivity.

IBM's partnership with Verizon will bring AI, IoT, and 5G technologies together to co-invent Multi-access Edge Computing (MEC) capabilities for a wide range of applications. These are tested and analyzed for efficiency, accuracy, quality, and availability at an industrial level.

Industrial customers would benefit from Verizon's 5G MEC expertise and IBM's cloud and AI capabilities. Next-gen high-speed internet connectivity with low latency underpins the future of Industrial Revolution 4.0.

Many industrial enterprises are today seeking ways to use edge computing to accelerate access to near real-time, actionable insights into operations to improve productivity and reduce costs.

Edge computing ensures low latency along with trustworthy computing and storage capabilities. Edge virtualization is one of the fastest-growing data center infrastructure management trends, with the power to take out traditional IT challenges entirely. With AI and sensors, these data centers can be managed better, with real-time analytics and predictive intelligence becoming the new gold standards of future IT businesses.

Edge computing's decentralized architecture brings technology resources closer to where data is generated, i.e., where devices are located in an industrial site.

Edge computing provides these benefits:

We see the role of 5G networking in edge computing very clearly. 5G is the most advanced standard in global wireless mobile networking. 5G holds the key to connecting every device (mobiles, cars, everyday objects; some say even digital pacemakers and nano-brains) with a secured mobile network. To understand each of these interactions with the edge device on 5G in real time, we would need AI and the power of automation.

With 5G's low latency, high download speeds, and capacity, the number of devices that can be supported within the same geographic area increases. IoT devices on the 5G network could amplify the pace at which organizations interact with them in near real time, with computing power in the proximity of the device. This proximity of data storage and computing is what makes the edge-and-5G combination so powerful.
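
Back-of-the-envelope numbers show why proximity matters so much. The distances below are hypothetical, but even ignoring processing time, the speed of light in fiber puts a hard floor under round-trip latency:

    SPEED_IN_FIBER_KM_PER_S = 200_000    # roughly two-thirds the speed of light

    def min_round_trip_ms(distance_km):
        # Minimum possible round-trip time, propagation delay only.
        return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

    print(f"Edge node 10 km away:       {min_round_trip_ms(10):.2f} ms")    # 0.10 ms
    print(f"Cloud region 2,000 km away: {min_round_trip_ms(2000):.2f} ms")  # 20.00 ms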

This could mean that innovative new applications such as remote-control robotics, near real-time cognitive video analysis, and plant automation may now be possible.

IBM and Verizon Business are already eyeing the market with advanced mobile asset tracking and management solutions. These solutions would help enterprises improve operations, optimize production quality, and enhance worker safety from any location, an advantage that could play out very well in a COVID-19-like pandemic or catastrophe.

At the time of this announcement, Tami Erwin, CEO of Verizon Business, said:

"This collaboration (with IBM) is all about enabling the future of industry in the Fourth Industrial Revolution. Combining the high speed and low latency of Verizon's 5G UWB Network and MEC capabilities with IBM's expertise in enterprise-grade AI and production automation can provide industrial innovation on a massive scale and can help companies increase automation, minimize waste, lower costs, and offer their own clients a better response time and customer experience."

Virtually Mobile All the Way: The AI + IoT Roadmap for a 5G Future

IBM would leverage Verizon's wireless networks, including Verizon's 5G Ultra-Wideband (UWB) network and Multi-access Edge Computing (MEC) capabilities. Verizon would also provide its ThingSpace IoT platform and Critical Asset Sensor (CAS) solution to jointly develop new 5G-based IoT edge networks with IBM.

These will be jointly offered with IBM's market-leading Maximo Monitor with IBM Watson and advanced analytics. The combined solutions could help clients detect, locate, diagnose, and respond to system anomalies, monitor asset health, and predict failures in near real time.

IBM and Verizon are also working on potential combined solutions for 5G and MEC-enabled use cases, such as near real-time cognitive automation for industrial environments.

Bob Lord, Senior Vice President, Cognitive Applications, Blockchain and Ecosystems, IBM, says:

"The industrial sector is undergoing an unprecedented transformation as companies begin to return to full-scale operations, aided by new technology to help reduce costs and increase productivity. Through this collaboration, we plan to build upon our longstanding relationship with Verizon to help industrial enterprises capitalize on joint solutions that are designed to be multi-cloud ready, secured and scalable, from the data center all the way out to the enterprise edge."

Verizon and IBM also plan to collaborate on potential joint solutions addressing worker safety, predictive maintenance, product quality, and production automation.

Visit link:

5G, AI & IoT : IBM and Verizon Business Close to Edge of "Virtually Mobile" - AiThority