Robots are not immune to bias and injustice

Abstract

Human-human social constructs drive human-robot interactions; robotics is thus intertwined with issues surrounding inequity and racial injustices.

Most roboticists focus on the design of intelligent machines with the goal of positively affecting the world, i.e., building robots in service to humanity. To this end, roboticists should embrace the principle that our robot systems be explicitly designed to perform uniformly well across the diversity of users. Unfortunately, researchers have shown that this is not always the case. Object detection systems, of the kind used in autonomous vehicles, perform worse at detecting pedestrians with darker skin tones (1). Researchers have also shown that racial bias exists in commercial facial recognition application programming interfaces (APIs) (2).

It is not just the responsibility of society or governing bodies to take on the challenge of fixing racial bias and inequity. Roboticists also need to take responsibility for making sure we do not cause equivalent harm in developing new technologies. And if the harm we create affects one group or groups more than others, it is our responsibility to fix that. After all, roboticists are pretty skilled at finding solutions to hard, seemingly unsolvable problems. It is time to apply those skills to fix this one.

We propose that developers should consider the ethical implications of robotic usage, namely ethical use and equity in performance, especially when robot use could result in harm to any group. We define ethical use as the process of weighing the potential benefits against the possible risk of harm to all affected groups; only when this weighing is positive and sufficiently mitigates harm should deployment of the technology be considered. We define equity in performance as a metric of the extent to which a deployed technology's performance is uncorrelated with a group's protected characteristics (race, ethnicity, age, gender, sex, etc.). If equity in performance is lacking, then the implications of deploying such technology, as well as the degree of reliance placed on it, should be carefully considered.
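To make the equity-in-performance definition above concrete, here is a minimal sketch of what a per-group performance audit might look like. It is illustrative only: the records, group labels, and the 0.05 tolerance are hypothetical assumptions, not part of the original article or of any particular deployed system.

```python
# Minimal sketch (hypothetical data and threshold): check whether a model's
# performance appears correlated with a protected characteristic, per the
# "equity in performance" definition above.
from collections import defaultdict

# Each record: (protected_group_label, prediction_was_correct)
# These records are illustrative placeholders, not real evaluation data.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def per_group_accuracy(records):
    """Return accuracy broken down by protected group."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        correct[group] += int(ok)
    return {g: correct[g] / totals[g] for g in totals}

def equity_gap(records):
    """Largest accuracy difference between any two groups (0 = fully equitable)."""
    acc = per_group_accuracy(records)
    return max(acc.values()) - min(acc.values())

if __name__ == "__main__":
    print("Per-group accuracy:", per_group_accuracy(records))
    gap = equity_gap(records)
    print("Equity gap:", gap)
    # 0.05 is a hypothetical tolerance; any acceptable gap must be justified
    # case by case as part of the ethical-use weighing described above.
    if gap > 0.05:
        print("Warning: performance appears correlated with group membership.")
```

Such an audit is only one input to the ethical-use weighing described above; a small measured gap does not by itself justify deployment.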

We believe that an important step in addressing equity in performance, as well as ethical use, is to ensure that more diverse teams are the creators of these technologies and to understand how to draw on their diverse backgrounds for team success (3); the very practical consequence of this is that diverse backgrounds allow the use and implications of a technology to be seen from unique perspectives, increasing the chances for equity and ethics. Diverse teams can also lead to better performance; this has been shown time and time again (4). Thus, to begin the process of addressing this problem, a new organization, Black in Robotics (BiR) (www.blackinrobotics.org), was founded to address the systemic inequities in our robotics community by focusing on three primary pillars: community, advocacy, and accountability.

Community can be defined as a sense of fellowship with those who share similar characteristics and goals, and it has been shown to correlate directly with success in STEM higher education for underrepresented minorities (5). As discussed in (6), although no U.S. statistics are collected specifically about the demographics of the robotics workforce, we can examine engineering workforce statistics as an indicative metric. In 2018, 12.7% of the U.S. population was Black or African American (7), but only 4.2% of bachelor's degrees in engineering went to Black scholars (8). This lack of diversity is also found in the tightly integrated field of artificial intelligence (AI), particularly when it comes to algorithm design and testing for AI systems that affect diverse populations (9). BiR plans to build community through networking and mentorship. We believe that establishing community is the first step toward increasing the presence of Black and other underrepresented groups in the field of robotics.

For robotics, advocacy is defined as explicit action that supports or defends equity in performance as well as ethical use on behalf of all, with a focus on ensuring equal outcomes across diverse communities. BiR's contribution toward the goal of advocacy is to showcase Black excellence in our community and to help connect academia and industry to the talent found in diverse communities. One such activity is the Black in Robotics Reading List, whose objectives are to provide academic role models for aspiring researchers and to normalize Black scholarship (6).

Our pillar of accountability is to design pathways for all roboticists, including allies, to participate in the solution. Just as being Black does not exempt those who identify as Black from being discriminated against based on their skin color, not identifying as Black should not exclude one from involvement in dismantling issues around robotics and race. For accountability, BiR seeks to function as a conduit to engage communities, to identify best practices, and to hold all of us accountable for making the robots that we design and deploy usable for all groups and communities.

We hope that the BiR organization inspires individuals to increase diversity in their own spaces. We believe that this diversity is crucial for answering the next big questions in robotics as we integrate robots more fully into our daily lives. Therefore, our mission is a call to action for the entire robotics community to increase diversity and to build with thoughtfulness for disadvantaged groups.

Complicity through silence is not an option.

M. Wang, W. Deng, J. Hu, X. Tao, Y. Huang, Racial faces in the wild: Reducing racial bias by information maximization adaptation network, in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October to 2 November 2019.
