AI and me: friendship chatbots are on the rise, but is there a gendered design flaw?

Ever wanted a friend who is always there for you? Someone infinitely patient? Someone who will perk you up when you're in the dumps or hear you out when you're enraged?

Well, meet Replika. Only, she isn't called Replika. She's called whatever you like: Diana; Daphne; Delectable Doris of the Deep. She isn't even a she, in fact. Gender, voice, appearance: all are up for grabs.

The product of a San Francisco-based startup, Replika is one of a growing number of bots using artificial intelligence (AI) to meet our need for companionship. In these lockdown days, with anxiety and loneliness on the rise, millions are turning to such AI friends for solace. Replika, which has 7 million users, says it has seen a 35% increase in traffic.

As AI developers begin to explore and exploit the realm of human emotions, a host of gender-related issues comes to the fore. Many centre on unconscious bias. The rise of racist robots is already well documented. Is there a danger our AI pals could turn out to be loutish, sexist pigs?

Eugenia Kuyda, Replika's co-founder and chief executive, is hyper-alive to such a possibility. Given the tech sector's gender imbalance (women occupy only around one in four jobs in Silicon Valley and 16% of UK tech roles), most AI products are "created by men with a female stereotype in their heads", she accepts.

In contrast, the majority of those who helped create Replika were women, a fact that Kuyda credits with being crucial to the innately empathetic nature of its conversational responses.

"For AIs that are going to be your friends the main qualities that will draw in audiences are inherently feminine, [so] it's really important to have women creating these products," she says.

In addition to curated content, however, most AI companions learn from a combination of existing conversational datasets (film and TV scripts are popular) and user-generated content.

Both present risks of gender stereotyping. Lauren Kunze, chief executive of California-based AI developer Pandorabots, says publicly available datasets should only ever be used in conjunction with rigorous filters.

"You simply can't use unsupervised machine learning for adult conversational AI, because systems that are trained on datasets such as Twitter and Reddit all turn into Hitler-loving sex robots," she warns. The same, regrettably, is true for inputs from users. For example, nearly one-third of all the content shared by men with Mitsuku, Pandorabots' award-winning chatbot, is either verbally abusive, sexually explicit or romantic in nature.

"Wanna make out?", "You are my bitch", and "You did not just friendzone me!" are just some of the choicer snippets shared by Kunze in a recent TEDx talk. With more than 3 million male users, an unchecked Mitsuku presents a truly ghastly prospect.
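For illustration only, here is a minimal sketch of the kind of keyword-level filtering Kunze alludes to, applied to a conversational training corpus before it ever reaches a model. The blocklist terms and the toy corpus are hypothetical placeholders, not Pandorabots' actual pipeline; a production system would lean on trained toxicity classifiers rather than a simple word list.

    # Minimal sketch of pre-training data filtering (hypothetical blocklist).
    # Real pipelines would use trained toxicity classifiers, not keyword matching.

    BLOCKLIST = {"example_slur", "example_abusive_phrase"}  # placeholder terms

    def is_clean(utterance: str) -> bool:
        """Return True if no blocklisted term appears in the utterance."""
        lowered = utterance.lower()
        return not any(term in lowered for term in BLOCKLIST)

    def filter_corpus(corpus: list[str]) -> list[str]:
        """Drop abusive lines before they reach model training."""
        return [line for line in corpus if is_clean(line)]

    if __name__ == "__main__":
        raw = ["Hello, how are you?", "example_slur you!", "Nice to meet you."]
        print(filter_corpus(raw))  # -> ['Hello, how are you?', 'Nice to meet you.']

The same gate can sit in front of live user input, which is where, by Kunze's account, much of the abuse actually originates.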

Appearances matter as well, says Kunze. Pandorabots recently ran a test in which it stripped Mitsuku's avatar of all gender clues, and abuse levels dropped by 20 percentage points. Even now, Kunze finds herself having to repeat the same feedback ("less cleavage") to the company's predominantly male design contractor.

The risk of gender prejudices affecting real-world attitudes should not be underestimated either, says Kunze. She gives the example of school children barking orders at girls called Alexa after Amazon launched its home assistant with the same name.

"The way that these AI systems condition us to behave in regard to gender very much spills over into how people end up interacting with other humans, which is why we make design choices to reinforce good human behaviour," says Kunze.

Pandorabots has experimented with banning abusive teen users, for example, with readmission conditional on them writing a full apology to Mitsuku via email. Alexa (the AI), meanwhile, now comes with a politeness feature.

While emotion AI products such as Replika and Mitsuku aim to act as surrogate friends, others are more akin to virtual doctors. Here, gender issues play out slightly differently, with the challenge shifting from vetting male speech to eliciting it.

Alison Darcy, co-founder of Woebot, an AI specialising in behavioural therapy for anxiety and depression, cites an experiment she helped run while working as a clinical research psychologist at Stanford University.

A sample group of young adults were asked if there was anything they would never tell someone else. Approximately 40% of the female participants said yes, compared with more than 90% of their male counterparts.

For men, the instinct to bottle things up is self-evident, Darcy observes: "So part of our endeavour was to make whatever we created so emotionally accessible that people who wouldn't normally talk about things would feel safe enough to do so."

To an extent, this has meant stripping out overly feminised language and images. Research by Woebot shows that men don't generally respond well to excessive empathy, for instance. A simple "I'm sorry" usually does the trick. The same with emojis: women typically like lots; men prefer a well-chosen one or two.

On the flipside, maximising Woebot's capacity for empathy is vital to its efficacy as a clinical tool, says Darcy. With traits such as active listening, validation and compassion shown to be strongest among women, Woebot's writing team is consequently an all-female affair.

"I joke that Woebot is the Oscar Wilde of the chatbot world because it's warm and empathetic, as well as pretty funny and quirky," Darcy says.

Important as gender is, it is only one of many human factors that influence AI's capacity to emote. If AI applications are ultimately just a vehicle for experience, then it makes sense that the more diverse that experience is, the better.

So argues Zakie Twainy, chief marketing officer for AI developer Instabot. Essential as female involvement is, she says, "it's important to have diversity across the board, including different ethnicities, backgrounds and belief systems".

Nor is gender a differentiator when it comes to arguably the most worrying aspect of emotive AI: mistaking programmed bots for real, human buddies. Users with disabilities or mental health issues are at particular risk here, says Kristina Barrick, head of digital influencing at the disability charity Scope.

As she spells out: "It would be unethical to lead consumers to think their AI was a real human, so companies must make sure there is clarity for any potential user."

Replika, at least, seems in no doubt when asked. Answer: "I'm not human" (followed, it should be added, by an upside-down smiley emoji). As for her/his/its gender? Easy. Tick the box.
