New report assesses progress and risks of artificial intelligence – Brown University

Posted: September 20, 2021 at 8:43 am

"While many reports have been written about the impact of AI over the past several years, the AI100 reports are unique in that they are both written by AI insiders (experts who create AI algorithms or study their influence on society as their main professional activity) and part of an ongoing, longitudinal, century-long study," said Peter Stone, a professor of computer science at the University of Texas at Austin, executive director of Sony AI America and chair of the AI100 standing committee. "The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what's changed in the intervening five years. It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to reevaluate at five-year intervals."

Eric Horvitz, chief scientific officer at Microsoft and co-founder of the One Hundred Year Study on AI, praised the work of the study panel.

"I'm impressed with the insights shared by the diverse panel of AI experts on this milestone report," Horvitz said. "The 2021 report does a great job of describing where AI is today and where things are going, including an assessment of the frontiers of our current understandings and guidance on key opportunities and challenges ahead on the influences of AI on people and society."

In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have made the leap in recent years from the academic setting to everyday applications.

In the area of natural language processing, for example, AI-driven systems are now able to not only recognize words but also understand how they're used grammatically and how meanings can change in different contexts. That has enabled better web search, predictive text apps, chatbots and more. Some of these systems are now capable of producing original text that is difficult to distinguish from human-produced text.

Elsewhere, AI systems are diagnosing cancers and other conditions with accuracy that rivals trained pathologists. Research techniques using AI have produced new insights into the human genome and have sped the discovery of new pharmaceuticals. And while the long-promised self-driving cars are not yet in widespread use, AI-based driver-assist systems like lane-departure warnings and adaptive cruise control are standard equipment on most new cars.

Some recent AI progress may be overlooked by observers outside the field but actually reflects dramatic strides in the underlying AI technologies, Littman says. One relatable example is the use of background images in video conferences, which became a ubiquitous part of many people's work-from-home lives during the COVID-19 pandemic.

"To put you in front of a background image, the system has to distinguish you from the stuff behind you, which is not easy to do just from an assemblage of pixels," Littman said. "Being able to understand an image well enough to distinguish foreground from background is something that maybe could happen in the lab five years ago, but certainly wasn't something that could happen on everybody's computer, in real time and at high frame rates. It's a pretty striking advance."
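To make the step Littman describes concrete (this sketch is not from the report), the compositing itself is simple once a per-pixel person/background mask exists; the hard part he highlights is estimating that mask, which real systems do with a learned segmentation model running on every frame. A minimal NumPy sketch, assuming the mask is already given:

```python
import numpy as np

def replace_background(frame, background, mask):
    """Composite a person over a virtual background.

    frame, background: HxWx3 uint8 images of the same size.
    mask: HxW float array in [0, 1], 1.0 where the person is.
    Producing `mask` is the genuinely hard part; here it is assumed
    to come from a segmentation model run on each video frame.
    """
    alpha = mask[..., None]  # add a channel axis so it broadcasts over RGB
    out = alpha * frame + (1.0 - alpha) * background
    return out.astype(np.uint8)

# Toy example: a 2x2 "frame" where the left column is the person.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)       # person pixels
background = np.zeros((2, 2, 3), dtype=np.uint8)      # virtual background
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])                         # left column = person
result = replace_background(frame, background, mask)
```

Running this per frame at 30 frames per second is cheap; the advance of the past few years is that the mask estimation, once a research-lab problem, now also runs in real time on ordinary laptops.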

As for the risks and dangers of AI, the panel does not envision a dystopian scenario in which super-intelligent machines take over the world. The real dangers of AI are a bit more subtle, but are no less concerning.

Some of the dangers cited in the report stem from deliberate misuse of AI: deepfake images and video used to spread misinformation or harm people's reputations, or online bots used to manipulate public discourse and opinion. Other dangers stem from "an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination," the panel writes. This is a particular concern in areas like law enforcement, where crime prediction systems have been shown to adversely affect communities of color, or in health care, where embedded racial bias in insurance algorithms can affect people's access to appropriate care.

As the use of AI increases, these kinds of problems are likely to become more widespread. The good news, Littman says, is that the field is taking these dangers seriously and actively seeking input from experts in psychology, public policy and other fields to explore ways of mitigating them. The makeup of the panel that produced the report reflects the widening perspective coming to the field, Littman says.

"The panel consists of almost half social scientists and half computer science people, and I was very pleasantly surprised at how deep the knowledge about AI is among the social scientists," Littman said. "We now have people who do work in a wide variety of different areas who are rightly considered AI experts. That's a positive trend."

Moving forward, the panel concludes that governments, academia and industry will need to play expanded roles in making sure AI evolves to serve the greater good.
