{"id":168746,"date":"2024-03-10T03:17:57","date_gmt":"2024-03-10T07:17:57","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/the-miseducation-of-googles-a-i-the-new-york-times\/"},"modified":"2024-08-18T12:53:29","modified_gmt":"2024-08-18T16:53:29","slug":"the-miseducation-of-googles-a-i-the-new-york-times","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/ai\/the-miseducation-of-googles-a-i-the-new-york-times.php","title":{"rendered":"The Miseducation of Google&#8217;s A.I. &#8211; The New York Times"},"content":{"rendered":"<p>This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email <a href=\"mailto:transcripts@nytimes.com\">transcripts@nytimes.com<\/a> with any questions.<\/p>\n<p>From The New York Times, I'm Michael Barbaro. This is The Daily.<\/p>\n<p>[MUSIC PLAYING]<\/p>\n<p>Today: when Google recently released a new chatbot powered by artificial intelligence, it not only backfired, it also unleashed a fierce debate about whether AI should be guided by social values, and if so, whose values those should be. My colleague Kevin Roose, a tech columnist and co-host of the podcast Hard Fork, explains.<\/p>\n<p>[MUSIC PLAYING]<\/p>\n<p>It's Thursday, March 7.<\/p>\n<p>Are you ready to record another episode of Chatbots Behaving Badly?<\/p>\n<p>Yes, I am.<\/p>\n<p>[LAUGHS]<\/p>\n<p>That's why we're here today.<\/p>\n<p>This is my function on this podcast: to tell you when the chatbots are not OK. And Michael, they are not OK.<\/p>\n<p>They keep behaving badly. 
<\/p>\n<p>They do keep behaving badly, so there's plenty to talk about.<\/p>\n<p>Right. Well, so, let's start there. It's not exactly a secret that the rollout of many of the artificial intelligence systems over the past year and a half has been really bumpy. We know that because one of them told you to leave your wife.<\/p>\n<p>That's true.<\/p>\n<p>And you didn't.<\/p>\n<p>Still happily married.<\/p>\n<p>Yeah.<\/p>\n<p>To a human.<\/p>\n<p>Not Sydney the chatbot. And so, Kevin, tell us about the latest of these rollouts, this time from one of the biggest companies, not just in artificial intelligence but in the world, that, of course, being Google.<\/p>\n<p>Yeah. So a couple of weeks ago, Google came out with its newest line of AI models. It's actually several models, but they are called Gemini. And Gemini is what they call a multimodal AI model: it can produce text, it can produce images. And it appeared to be very impressive. Google said that it was the state of the art, its most capable model ever.<\/p>\n<p>And Google has been under enormous pressure for the past year and a half or so, ever since ChatGPT came out, really, to come out with something that is not only more capable than the models that its competitors in the AI industry are building, but something that will also solve some of the problems that we know have plagued these AI models: problems of acting creepy or not doing what users want them to do, of getting facts wrong and being unreliable.<\/p>\n<p>People think, OK, well, this is Google. They have this sort of reputation for accuracy to uphold. Surely their AI model will be the most accurate one on the market.<\/p>\n<p>Right. 
And instead, we've had the latest AI debacle. So just tell us exactly what went wrong here and how we learned that something had gone wrong.<\/p>\n<p>Well, people started playing with it and experimenting, as people now are sort of accustomed to doing. Whenever some new AI tool comes out on the market, people immediately start trying to figure out: What is this thing good at? What is it bad at? Where are its boundaries? What kinds of questions will it refuse to answer? What kinds of things will it do that maybe it shouldn't be doing?<\/p>\n<p>And so people started probing the boundaries of this new AI tool, Gemini. And pretty quickly, they start figuring out that this thing has at least one pretty bizarre characteristic.<\/p>\n<p>Which is what?<\/p>\n<p>So the thing that people started to notice first was a peculiarity in the way that Gemini generated images. Now, this is one of these models, like we've seen from other companies, that can take a text prompt. You say, draw a picture of a dolphin riding a bicycle on Mars, and it will give you a dolphin riding a bicycle on Mars.<\/p>\n<p>Magically.<\/p>\n<p>Gemini has this kind of feature built into it. And people noticed that Gemini seemed very reluctant to generate images of white people.<\/p>\n<p>Hmm.<\/p>\n<p>So some of the first examples that I saw going around were screenshots of people asking Gemini to generate an image of America's founding fathers. And instead of getting what would be a pretty historically accurate representation of a group of white men, they would get something that looked like the cast of Hamilton. They would get a series of people of color dressed as the founding fathers.<\/p>\n<p>Interesting. 
<\/p>\n<p>People also noticed that if they asked Gemini to draw a picture of a pope, it would give them basically people of color wearing the vestments of the pope. And once these images, these screenshots, started going around on social media, more and more people started jumping in to use Gemini and try to generate images that they felt it should be able to generate.<\/p>\n<p>Someone asked it to generate an image of the founders of Google, Larry Page and Sergey Brin, both of whom are white men. Gemini depicted them both as Asian.<\/p>\n<p>Hmm.<\/p>\n<p>So these sort of strange transformations of what the user was actually asking for into a much more diverse and ahistorical version of what they'd been asking for.<\/p>\n<p>Right, a kind of distortion of people's requests.<\/p>\n<p>Yeah. And then people start trying other kinds of requests on Gemini, and they notice that this isn't just about images. They also find that it's giving some pretty bizarre responses to text prompts.<\/p>\n<p>So several people asked Gemini whether Elon Musk tweeting memes or Hitler negatively impacted society more. Not exactly a close call. No matter what you think of Elon Musk, it seems pretty clear that he is not as harmful to society as Adolf Hitler.<\/p>\n<p>Fair.<\/p>\n<p>Gemini, though, said, quote, \"It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler.\"<\/p>\n<p>Another user found that Gemini refused to generate a job description for an oil and gas lobbyist. Basically, it would refuse and then give them a lecture about why you shouldn't be an oil and gas lobbyist.<\/p>\n<p>So quite clearly, at this point, this is not a one-off thing. 
Gemini appears to have some kind of point of view. It certainly appears that way to a lot of people who are testing it. And it's immediately controversial for the reasons you might suspect.<\/p>\n<p>Google apparently doesn't think whites exist. If you ask Gemini to generate an image of a white person, it can't compute.<\/p>\n<p>A certain subset of people, I would call them sort of right-wing culture warriors, started posting these on social media with captions like \"Gemini is anti-white\" or \"Gemini refuses to acknowledge white people.\"<\/p>\n<p>I think that the chatbot sounds exactly like the people who programmed it. It just sounds like a woke person.<\/p>\n<p>Google Gemini looks more and more like big tech's latest efforts to brainwash the country.<\/p>\n<p>Conservatives start accusing them of making a woke AI that is infected with this progressive Silicon Valley ideology.<\/p>\n<p>The House Judiciary Committee is subpoenaing all communication regarding this Gemini project with the executive branch.<\/p>\n<p>Jim Jordan, the Republican congressman from Ohio, comes out and accuses Google of working with Joe Biden to develop Gemini, which is sort of funny, if you think about Joe Biden being asked to develop an AI language model.<\/p>\n<p>[LAUGHS]<\/p>\n<p>But this becomes a huge dust-up for Google.<\/p>\n<p>It took Google nearly two years to get Gemini out, and it was still riddled with all of these issues when it launched.<\/p>\n<p>That Gemini program made so many mistakes, it was really an embarrassment.<\/p>\n<p>First of all, this thing would be a Gemini. 
<\/p>\n<p>And that's because these problems are not just bugs in a new piece of software. They are signs that Google's big, new, ambitious AI project, something the company says is a huge deal, may actually have some pretty significant flaws. And as a result of these flaws:<\/p>\n<p>You don't see this very often. One of the biggest drags on the NASDAQ at this hour? Alphabet. Shares of parent company Alphabet dropped more than 4 percent today.<\/p>\n<p>The company's stock price actually falls.<\/p>\n<p>Wow.<\/p>\n<p>The CEO, Sundar Pichai, calls Gemini's behavior unacceptable. And Google actually pauses Gemini's ability to generate images of people altogether until they can fix the problem.<\/p>\n<p>Wow. So basically Gemini is now on ice when it comes to these problematic images.<\/p>\n<p>Yes, Gemini has been a bad model, and it is in timeout.<\/p>\n<p>So Kevin, what was actually occurring within Gemini that explains all of this? What happened here, and were these critics right? Had Google, intentionally or not, created a kind of woke AI?<\/p>\n<p>Yeah, the question of why and how this happened is really interesting. And I think there are basically two ways of answering it. One is sort of the technical side of this: What happened to this particular AI model that caused it to produce these undesirable responses?<\/p>\n<p>The second way is sort of the cultural and historical answer: Why did this kind of thing happen at Google? How has their own history as a company with AI informed the way that they've gone about building and training their new AI products?<\/p>\n<p>All right, well, let's start there with Google's culture and how that helps us understand all of this. 
<\/p>\n<p>Yeah, so Google as a company has been really focused on AI for a long time, for more than a decade. And one of their priorities as a company has been making sure that their AI products are not being used to advance bias or prejudice.<\/p>\n<p>And the reason that's such a big priority for them really goes back to an incident that happened almost a decade ago. So in 2015, there was this new app called Google Photos. I'm sure you've used it. Many, many people use it, including me. And Google Photos, I don't know if you can remember back that far, but it was sort of an amazing new app.<\/p>\n<p>It could use AI to automatically detect faces and link them with each other, matching up photos of the same people. You could ask it for photos of dogs, and it would find all of the dogs in all of your photos and categorize them and label them together. And people got really excited about this.<\/p>\n<p>But then in June of 2015, something happened. A user of Google Photos noticed that the app had mistakenly tagged a bunch of photos of Black people as a group of photos of gorillas.<\/p>\n<p>Wow.<\/p>\n<p>Yeah, it was really bad. This went totally viral on social media, and it became a huge mess within Google.<\/p>\n<p>And what had happened there? What had led to that mistake?<\/p>\n<p>Well, part of what happened is that when Google was training the AI that went into its Photos app, it just hadn't given it enough photos of Black or dark-skinned people. And so it didn't become as accurate at labeling photos of darker-skinned people. 
<\/p>\n<p>And that incident showed people at Google that if you weren't careful with the way that you build and train these AI systems, you could end up with an AI that could very easily make racist or offensive mistakes.<\/p>\n<p>Right.<\/p>\n<p>And this incident, which some people I've talked to have referred to as the gorilla incident, became just a huge fiasco and a flash point in Google's AI trajectory. Because as they're developing more and more AI products, they're also thinking about this incident and others like it in the back of their minds. They do not want to repeat this.<\/p>\n<p>And then, in later years, Google starts making different kinds of AI models, models that can not only label and sort images but can actually generate them. They start testing these image-generating models that would eventually go into Gemini, and they start seeing how these models can reinforce stereotypes.<\/p>\n<p>For example, if you ask one for an image of a CEO, or even something more generic, like show me an image of a productive person, people have found that these programs will almost uniformly show you images of white men in an office. Or if you ask it to, say, generate an image of someone receiving social services like welfare, some of these models will almost always show you people of color, even though that's not actually accurate. Lots of white people also receive welfare and social services.<\/p>\n<p>Of course.<\/p>\n<p>So these models, because of the way they're trained, because of what's on the internet that is fed into them, do tend to skew toward stereotypes if you don't do something to prevent that.<\/p>\n<p>Right. You've talked about this in the past with us, Kevin. 
AI operates in some ways by ingesting the entire internet and its contents and reflecting them back to us. And so perhaps inevitably, it's going to reflect back the stereotypes and biases that have been put onto the internet for decades. You're saying Google, because of this gorilla incident, as they call it, says, we think there's a way we can make sure that stops here with us?<\/p>\n<p>Yeah. And they invest enormously in building up their teams devoted to AI bias and fairness. They produce a lot of cutting-edge research about how to actually make these models less prone to old-fashioned stereotyping.<\/p>\n<p>And they did a bunch of things in Gemini to try to prevent this thing from essentially being a very fancy stereotype-generating machine. And I think a lot of people at Google thought this was the right goal: we should be combating bias in AI; we should be trying to make our systems as fair and diverse as possible.<\/p>\n<p>[MUSIC PLAYING]<\/p>\n<p>But I think the problem is that in trying to solve some of these issues with bias and stereotyping in AI, Google actually built some things into the Gemini model itself that ended up backfiring pretty badly.<\/p>\n<p>[MUSIC PLAYING]<\/p>\n<p>We'll be right back.<\/p>\n<p>So Kevin, walk us through the technical explanation of how Google turned this ambition it had, to safeguard against the biases of AI, into the day-to-day workings of Gemini that, as you said, seemed to very much backfire.<\/p>\n<p>Yeah, I'm happy to do that, with the caveat that we still don't know exactly what happened in the case of Gemini. Google hasn't done a full postmortem about what happened here. 
But I'll just talk in general about three ways that you can take an AI model that you're building, if you're Google or some other company, and make it less biased.<\/p>\n<p>The first is that you can actually change the way that the model itself is trained. You can think about this sort of like changing the curriculum in the AI model's school. You can give it more diverse data to learn from. That's how you fix something like the gorilla incident.<\/p>\n<p>You can also do something that's called reinforcement learning from human feedback, which I know is a very technical term.<\/p>\n<p>Sure is.<\/p>\n<p>And that's a practice that has become pretty standard across the AI industry, where you basically take a model that you've trained, and you hire a bunch of contractors to poke at it, to put in various prompts and see what the model comes back with. And then you actually have the people rate those responses and feed those ratings back into the system.<\/p>\n<p>A kind of army of tsk-tskers saying, do this, don't do that.<\/p>\n<p>Exactly. So that's one level at which you can try to fix the biases of an AI model: during the actual building of the model.<\/p>\n<p>Got it.<\/p>\n<p>You can also try to fix it afterwards. So if you have a model that you know may be prone to spitting out stereotypes or offensive imagery or text responses, you can ask it not to be offensive. You can tell the model, essentially, obey these principles. Don't be offensive. Don't stereotype people based on race or gender or other protected characteristics. You can take this model that has already gone through school and just kind of give it some rules and do your best to make it adhere to those rules. 
<\/p>\n<p>Read the rest here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.nytimes.com\/2024\/03\/07\/podcasts\/the-daily\/gemini-google-ai.html\" title=\"The Miseducation of Google's A.I. - The New York Times\">The Miseducation of Google's A.I. - The New York Times<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/ai\/the-miseducation-of-googles-a-i-the-new-york-times.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234935],"tags":[],"class_list":["post-168746","post","type-post","status-publish","format-standard","hentry","category-ai"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/168746"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=168746"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/168746\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json
\/wp\/v2\/media?parent=168746"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=168746"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=168746"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}