Inside the MIT camp teaching kids to spot bias in code – Popular Science

Posted: November 27, 2021 at 5:00 am

This story originally appeared in the Youth issue of Popular Science. Current subscribers can access the whole digital edition here, or click here to subscribe.

Li Xin Zhang's summer camp began with sandwiches: not eating them, but designing them. The rising seventh grader listened as teachers asked her and her peers to write instructions for building the ideal peanut butter, jelly, and bread concoction. Heads down, the students each created their own how-to.

When they returned to the Zoom matrix of digital faces and told one another about their constructions, they realized something: Each of them had made a slightly different sandwich, favoring the characteristics they held dear. Not necessarily good, not necessarily bad, but definitely not neutral. Their sandwiches were biased. Because they were biased, and they had built the recipe.

The activity was called Best PB&J Algorithm, and Zhang and more than 30 other Boston-area kids between the ages of 10 and 15 were embarking on a two-week initiation into artificial intelligence: the ability of machines to display smarts typically associated with the human brain. Over the course of 18 lessons, they would focus on the ethics embedded in the algorithms that snake through their lives, influencing their entertainment, their social lives, and, to a large degree, their view of the world. Also, in this case, their sandwiches.

"Everybody's version of best is different," says Daniella DiPaola, a graduate student at the Massachusetts Institute of Technology who helped develop the series of lessons, which is called Everyday AI. "Some can be the most sugary, or they're optimizing for an allergy, or they don't want crust." Zhang put her food in the oven for a warm snack. A parent's code might take cost into account.
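
Put into code, the point becomes concrete. The sketch below is not the camp's lesson material; the sandwiches, criteria, and weights are invented for illustration. The same "find the best sandwich" routine returns different answers depending on whose values are encoded in the scoring function.

```python
# A minimal sketch of the PB&J idea: "best" depends on whose values
# are baked into the score. All sandwiches, criteria, and weights
# here are made up for illustration.

candidates = [
    {"name": "classic",   "sugar": 3, "cost": 1.50, "has_crust": True,  "contains_nuts": True},
    {"name": "extra-jam", "sugar": 5, "cost": 1.75, "has_crust": True,  "contains_nuts": True},
    {"name": "nut-free",  "sugar": 2, "cost": 2.25, "has_crust": False, "contains_nuts": False},
]

def score(sandwich, prefers_sweet=False, budget_conscious=False,
          nut_allergy=False, no_crust=False):
    """Rate a sandwich according to one person's values."""
    if nut_allergy and sandwich["contains_nuts"]:
        return float("-inf")            # hard constraint: unsafe is never "best"
    s = 0.0
    if prefers_sweet:
        s += sandwich["sugar"]          # sweetness counts in this person's favor
    if budget_conscious:
        s -= sandwich["cost"]           # a parent's code might penalize price
    if no_crust and sandwich["has_crust"]:
        s -= 2                          # crust is a deal-breaker for some campers
    return s

# The same algorithm, different values, different "best" sandwich.
kid    = max(candidates, key=lambda c: score(c, prefers_sweet=True))
parent = max(candidates, key=lambda c: score(c, budget_conscious=True))
print(kid["name"], parent["name"])      # likely "extra-jam" and "classic"
```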

A pricey PB&J is low on the world's list of concerns. But given a familiar, nutrient-rich example, the campers could squint at bias and discern how it might creep into other algorithms. Take, for example, facial recognition software, which Boston banned in 2020: This code, which the city's police department potentially could have deployed, matches anyone caught on camera to databases of known faces. But such software in general is notoriously inaccurate at identifying people of color and performs worse on women's faces than on men's, both of which lead to false matches. A 2019 study by the National Institute of Standards and Technology used 189 algorithms from 99 developers to analyze images of 8.49 million people worldwide. The report found that false positives were uniformly more common for women and up to 100 times more likely among West and East African and East Asian people than among Eastern Europeans, who had the lowest rate. Looking at a domestic database of mug shots, the rate was highest for American Indians and elevated for Black and Asian populations.

The kids' algorithms showed how preference creeps in, even in benign ways. "Our values are embedded in our peanut butter and jelly sandwiches," DiPaola says.

The camp doesn't aim to depress students with the realization that AI isn't all-knowing and neutral. Instead, it gives them the tools to understand, and perhaps change, the technology's influence, as the AI creators, consumers, voters, and regulators of the future.

To accomplish that, instructors based their lessons on an initiative called DAILy (Developing AI Literacy), shaped over the past few years by MIT educators, grad students, and researchers, including DiPaola. It introduces middle schoolers to the technical, creative, and ethical implications of AI, taking them from building PB&Js to totally redesigning YouTube's recommendation algorithm. For the project, MIT partnered with an organization called STEAM Ahead, a nonprofit whose mission is to create educational opportunities for Boston-area kids from groups traditionally underrepresented in scientific, technical, and artistic fields. They did a trial run in 2020, then repeated the curriculum in 2021 for Everyday AI, expanding the camp to include middle-school teachers. The goal is for educators across the country to be able to easily download the course and implement it.

DAILy is designed to enable average people to be better informed about AI. "I knew that AI was pretty helpful for humans, and it might be a huge part of our life," Zhang says, reflecting on what she'd learned. When she started, she says, "I just knew a little bit, not a lot." Coding was totally new to her.

DAILy's creators and instructors are at the forefront of a movement to bake ethics into the development process, as opposed to its being an afterthought once the code is complete. The program isn't unique, though others like it are hardly widespread. Grassroots efforts range from a middle-school ethics offering in Indiana called AI Goes Rural to the website Explore AI Ethics, started for teachers by a Minnesota programmer. The National Science Foundation (NSF) recently funded a high-school program called TechHive AI that covers cybersecurity and AI ethics.

[Related: An AI finished Beethoven's last symphony. Is it any good?]

"Historically, ethics hasn't been incorporated into technical AI education. It's something that has been lacking," says Fred Martin, professor and associate dean for teaching, learning and undergraduate studies at the University of Massachusetts Lowell. In 2018, Martin co-founded the AI4K12 initiative, which produced guidelines for teaching AI in K-12 schools. "We conceived of what we call five big ideas of AI, and the fifth is all about ethics." He's since seen AI ethics education expand and reach younger students, as evidenced by AI4K12's growing database of resources.

The directory links to MIT offerings, including DAILy. "Ethics is front and center in their work," Martin says. "It's important that kids begin learning about it early so they can be informed citizens."

At the Everyday AI workshop, the hope is that students will feel empowered. "You do have agency," says Wesley Davis, an instructor at the 2020 pilot camp. "You have the agency to understand. You have the agency to explore that curiosity, down to creating a better system, creating a better world."

"That's a little flowery-philosophical," he laughs. But that peculiar mix of idealism and cynicism is the specialty of teenagers. And so when asked if she thought she could, someday, make AI better than today's, Zhang gave a resounding "Maybe."

DAILy began as a way to right a wrong. Blakeley Payne (née Hoffman), a computer science major at the University of South Carolina, was hanging out in 2015 with her best friend, who had just applied for a job at Twitter. The rejection came back in a blink. How could the company possibly have decided so quickly that she wasn't a good fit? They posited that perhaps an algorithm had made the decision based on specific keywords. Mad, Payne began reading up on research about bias in, and the resulting inequities caused by, AI.

Since Payne's experience, AI partiality in hiring has become a famously huge problem. Amazon, for instance, made headlines in 2018 when Reuters reported that the company's recruitment engine discriminated against women, knocking out résumés with that keyword (as in "women's chess club captain") and penalizing applicants for having gone to women's colleges. Turns out developers had trained their algorithm using résumés submitted to the company over a 10-year period, according to Reuters, most of which had come from men. A 2021 paper in the International Journal of Selection and Assessment found that people largely rate a human's hiring judgment as more fair than an algorithm's, though they often perceive automation to be more consistent.

At first, the whole situation soured Payne on her field. Ultimately, though, she decided to try to improve the situation. When she graduated in 2017, she enrolled at MIT as a graduate student to focus on AI ethics and the demographic where education could make the most difference: middle-school students. Kids this age are often labeled AI natives. They've never not known the tech, are old enough to consider its complications, and will grow up to make the next versions.

Over the next couple of years, Payne developed one of the first AI ethics curricula for middle graders, and her master's thesis helped inform another set of interactive lessons, called How to Train Your Robot. When she graduated in 2020 and went on to do research for the University of Colorado, Boulder, MIT scholars like DiPaola continued and expanded her efforts.

[Related: Do we trust robots enough to put them in charge?]

Payne's projects helped lay the groundwork for the larger-scale DAILy program, funded by the NSF in March 2020. DAILy is a collaboration among the MIT Scheller Teacher Education Program (STEP), Boston College, and the Personal Robots Group at the MIT Media Lab, an interdisciplinary center where DiPaola works. A second NSF grant, in March 2021, funds a training program to help teachers use DAILy in their classrooms. By forging partnerships with districts in Florida, Illinois, New Mexico, and Virginia and with youth-education nonprofits like STEAM Ahead, the MIT educators are able to see how their ivory-tower lessons play out. "The proving ground for any curriculum is in the real classroom and in summer camps," says DiPaola.

When those kids, and many adults even, think of AI, one thing usually comes to mind: robots. "Robots from the future, killer robots that will take over the world, superintelligence," says DiPaola. "It was a big shock to them that AI is actually in the technologies they use every single day."

Teachers have often told the STEP Lab's Irene Lee, who oversees the grants, that they didn't realize AI was being deployed. They thought it was an abstraction in labs. "Deployed?!" Lee says to them. "You're immersed in it!"

It's in smart speakers. It recommends a Netflix film to chill to. It suggests new shoes. It helps give the yea or nay on bank loans. Companies weed out job applicants with it; schools use it to grade papers. Perhaps most importantly to the summer-camp students, it powers apps like TikTok and whatever meme-bending video the platform surfaces.

They know that when they're looking at cat-mischief TikToks, they'll get recommendations for similar ones, and that their infinite scroll of videos is different from their friends'. But they don't usually realize that those results are AI's doing. "I didn't know all these facts," says Zhang.

Soham Patil, one of her camp-mates, agrees. A rising eighth grader, he'd been studying how AI works and writing software recreationally for a few months before the program. "I kind of knew how to code, but I didn't really know the practical uses of AI," Patil says. "I knew how to use it but not what it's for."

Patil, Zhang, and their peers' next activity involved a different food group: noodles. They saw on their screens a member of a strange royal family, a cat wearing a tiara, with hearts for eyes.

"There is a land of pasta known for most excellent cuisine with a queen who wants to classify all the dry pasta in her land and store them in bins," reads the lesson. "YOU, as a subject in PastaLand, are tasked with building a classification system that can be used to describe and classify the pasta so the pasta can easily be found when the queen wants a certain dish."

Ethics of monarchy aside, the students' goal was to develop an identification system called a decision tree, which arrives at a classification by using a series of questions to sort objects based on their characteristics, first into two groups, then each of those into two more groups, then each of those into two more, until there is only one kind of object left in each group. For pasta, the STEP Lab's Lee explains, "The first question could be, Is it long? Is it curly? Does it have ridges? Is it a tube?" Zhang's team started with "Is it round?" "Is it long?" and "Is it short?"
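
For readers who want to see the shape of such a tree in code, here is a minimal sketch. It is not the camp's material, and the pasta names and question order are illustrative assumptions; the point is only that each yes/no question splits the remaining pasta into smaller groups until one kind is left.

```python
# A toy, hand-built decision tree in the spirit of the PastaLand
# exercise. The questions and pasta labels are invented examples.

def classify_pasta(is_long, is_tube, is_curly):
    """Walk a tiny decision tree: each question halves the possibilities."""
    if is_long:
        if is_tube:
            return "bucatini"
        return "fusilli lunghi" if is_curly else "spaghetti"
    else:
        if is_tube:
            return "penne"
        return "rotini" if is_curly else "farfalle"

# Someone else's tree might ask "Can it hold a lot of sauce?" first
# and sort the same box of pasta into entirely different groups.
print(classify_pasta(is_long=True,  is_tube=False, is_curly=False))  # spaghetti
print(classify_pasta(is_long=False, is_tube=True,  is_curly=False))  # penne
```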

As before, though, when the kids reassembled, they realized their questions were all different: Some might ask whether a piece of pasta can hold a lot of sauce or only a little. Another might separate types based on whether they're meant to be stuffed or not. Patil noticed that some kids would try to separate the unclassified pasta into two roughly equal groups at every juncture.

"Could someone who is blind follow their key?" the teachers asked. What about the subjectivity in simply determining what "long" is? Even pasta was influenced by culture, experience, and ability. The students then extended this realization, that it's easy to bake in bias, exclude people, or misread your opinions as objective, to higher-stakes situations. Predictive policing is an example. The technology uses past crime data to forecast which areas are high risk or who is purportedly most likely to offend. But any AI that uses legacy data to predict the future is liable to reinforce past prejudices. A 2019 New York University Law Review paper looked at case studies in Illinois, Arizona, and Louisiana and noted that a failure to reform such systems risks creating lasting consequences that will permeate throughout the criminal justice system and society more widely.

[Related: How Google's newest tool could change how you search online]

The students could see, again, how AI-based choices affect outputs. They can know, "If I design it this way, these people will be impacted positively, these people will be impacted negatively," says DiPaola. They can ask themselves, "How do I make sure the most vulnerable people are not harmed?"

AI developers find themselves grappling with these questions more frequently, in part because their work now touches so many aspects of people's lives. The biases in their code are largely society's own. Take recommendation algorithms like YouTube's, which former Google developer Guillaume Chaslot asserts drive viewers toward more sensationalistic, more divisive, often misinformational videos in order to keep more people watching longer and attract advertising. Such a choice arguably favors profits over impartiality.
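
A toy example, not YouTube's actual system, shows how much hangs on that choice of objective. The videos, the predicted watch times, the "misleading" flag, and the penalty size below are all invented; what matters is that changing the scoring function changes what gets recommended.

```python
# Sketch of how a recommender's objective shapes its output.
# Everything here is made up for illustration.

videos = [
    {"title": "calm explainer",      "predicted_minutes": 4.0, "flagged_misleading": False},
    {"title": "outrage compilation", "predicted_minutes": 9.5, "flagged_misleading": True},
    {"title": "cat does a flip",     "predicted_minutes": 6.0, "flagged_misleading": False},
]

def engagement_score(v):
    # Objective 1: keep people watching as long as possible.
    return v["predicted_minutes"]

def tempered_score(v, penalty=5.0):
    # Objective 2: same signal, minus a penalty for content a reviewer
    # flagged as misleading. The penalty's size is itself a value judgment.
    return v["predicted_minutes"] - (penalty if v["flagged_misleading"] else 0.0)

print(max(videos, key=engagement_score)["title"])  # "outrage compilation"
print(max(videos, key=tempered_score)["title"])    # "cat does a flip"
```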

By teaching kids early what ethical AI looks like, how unfairness gets in there, and how to work around it, educators hope to enable them to recognize that unfairness when it occurs and devise strategies to correct the problem. "Ethics has been taught either as a completely separate course or in the last two or three lessons of a semester course," says DiPaola. That, she says, conveys an implicit lesson: "Ethics doesn't need to be thought of at the same time as you're actually building something, or ethics is kind of an afterthought."

Better integration of ethics is important to Denise Dreher, a database programmer who recently retired from the IT department of St. Paul, Minnesota's Macalester College. As a personal project, she has been cataloging curricula like DAILy and making the K-12 lessons available on her website, Explore AI Ethics, for teachers to use in the classroom. She believes that AI education should look more like engineering instruction. "There's a long and very good tradition of safety and ethics for engineer training," she says, because it's a profession, one with a codified career path. You can't just go build a bridge, or get through bridge-building school without having to work through the implications of your bridge.

"AI?" she continues. "Any 10-year-old in your basement can do it."

As camp progressed, the ethical questions grew bigger, as did the technology that students dealt with. One day, Mark Zuckerberg, CEO of Facebook (a social network largely populated by olds), appeared on their screens. "I wish I could keep telling you that our mission in life is connecting people, but it isn't," Zuckerberg said. "We just want to predict your future behaviors. The more you express yourself, the more we own you."

That would be an unusually candid speech. And, actually, the whole thing looked a little off. Zuckerberg's eyelids were a little blurrier than the rest of him. And he stared at the camera without blinking for longer than a normal person would. These, instructors pointed out, are tells.

He didn't look like a normal person because he wasn't a normal person. He wasn't even a real person. He was a deepfaked video morph giving a deeply faked speech. A deepfake is footage or an image produced by an AI after it parses lots of footage or photos of someone. In this case, the software learned how Zuckerberg looks and sounds saying different words in different situations. With that material, it assembled a Zuck that doesn't exist, saying something he never said. "It's kind of hard to think how AI could create a video," says Patil.

Zhang, whose preferred social medium is YouTube, watches a lot of videos and already assumed that not all of them are real, but didn't have any tools to parse truth from fiction till this course.

The campers had all likely encountered AI-based fakery before. An app called Reface, for example, lets them switch visages with another person, a popular TikTok hobby. FaceTune conforms selfies to conventional European standards of beauty, bleaching teeth, slimming noses, pouting up lips. But they can't always tell when someone else has been tuned. They may just think that so-and-so just had a good complexion day.

In fake visual media, the real and synthetic, the human and the AI, have two faces that look nearly identical. When the kids fully grasp that, "It's a moment where shit gets real, so to speak," says Gabi Souza, who worked at the camp both summers. They know that you can't trust everything you see, "and that's important to know, especially in our world of so much falsehood so widely propagated. They at least know to question what's presented."

Not all lessons went over so well. "There are a couple of activities that even in person would be scratching at the top level of comprehension," says instructor Davis. Patil, for instance, had a hard time understanding the details of neural networks, software inspired by the brain's interconnected neurons. The goal of the code is to recognize patterns in a dataset and use those patterns to make predictions. In astronomy, for instance, such programs can learn to predict what type of galaxy is shining in a telescope picture. At camp, the kids acted like the nodes of a neural network to predict the caption for a photo of a squirrel water-skiing in a pool. It worked kind of like a game of telephone: Teachers showed the picture to several students, who wrote down keywords describing it, and then each passed a single word on to students who hadn't seen the image. Those kids each picked two words to pass to a final camper, who chose four words for the caption. For the "nodes," understanding their role in that network, and transposing that onto software, was hard.
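
The core idea fits in a few lines. This is a minimal sketch, assuming nothing about the camp's own materials: a single artificial "node," the smallest possible network, repeatedly nudges its connection strengths until its guesses match a simple pattern, here the logical OR of two inputs. Real networks stack many such nodes in layers, much like the rows of campers passing keywords along.

```python
# One node "learning" a pattern: start with random connection strengths,
# then adjust them a little at a time until the outputs match the targets.
# Learning rate and iteration count are arbitrary illustration choices.

import numpy as np

rng = np.random.default_rng(0)
inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 1], dtype=float)   # the OR of the two inputs

weights = rng.normal(size=2)                    # connection strengths
bias = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))             # squashes any number into 0..1

for _ in range(5000):
    predictions = sigmoid(inputs @ weights + bias)   # the node's current guesses
    error = predictions - targets                    # how far off each guess is
    weights -= 0.1 * (inputs.T @ error)              # nudge connections toward better answers
    bias    -= 0.1 * error.sum()

print(np.round(sigmoid(inputs @ weights + bias), 2))  # approaches [0, 1, 1, 1]
```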

But even with the activities that didn't melt youthful brains, how well a lesson went depended on "how many students had breakfast this morning, is it Monday or is it Thursday afternoon," says Davis. It wasn't all canoes and archery, like traditional camps. "It's a lot of work," says Zhang.

Making AI education accessible, and diversely implemented, is more complicated than teaching it in person to private-school kids who get MacBook Pros. While the collaboration partners had always planned to make the curriculum virtual to make it more accessible, the pandemic sped up that timeline and highlighted where they needed to improve, like by making sure that the activities would work across different platforms and devices.

[Related: The Pentagon's plan to make AI trustworthy]

Then there are complications with the Media Lab's involvement. The organization came under fire in 2019 for taking money and ostensible cultural cachet from convicted sex offender Jeffrey Epstein, which led to the departure of the lab's director. Writer Evgeny Morozov, who researches the social and political implications of technology, pointed out in the Guardian that the third culture promoted by organizations like the lab, where scientists and technologists represent society's foremost deep thinkers, is a perfect shield for pursuing entrepreneurial activities under the banner of intellectualism. Perhaps you could apply that criticism to Personal Robots director Cynthia Breazeal, whose company garnered around $70 million in funding between 2014 and 2016 for a social robot named Jibo that would help usher in a new era of human-machine interaction. The story had an unhappy ending: delayed shipments, dissatisfied customers, layoffs, a sell-off of intellectual property, and no real revolution.

But those too are perhaps good lessons for students to learn while they're young. Flashy, fancy things can disappoint in myriad ways, and even places that teach ethics early can nevertheless have lapses of their own. And maybe that shouldn't be so surprising: After all, the problems with AI are just human problems, de-personified.

The seamy undersilicon of AI (its discrimination, its invasiveness, its deception) didn't, though, discourage campers from wanting to join the field, as both Zhang and Patil are considering.

And now they know that, more likely than not, no matter what job they apply for, an algorithm will help determine if they're worthy of it. An algorithm that, someday, they might help rewrite.

