Monthly Archives: July 2020

H.266 is coming and your video files will be half the size they are with H.265/HEVC – DIYphotography

Posted: July 12, 2020 at 1:32 am

Video compression tech doesn't seem to change all that often, but when it does, it sure takes some big leaps. H.264/Advanced Video Coding (AVC) was first introduced back in 2003. It's still pretty prevalent today, despite H.265/High Efficiency Video Coding (HEVC) being released a decade later in 2013. Now, the Fraunhofer Heinrich Hertz Institute has done it again with H.266/Versatile Video Coding (VVC), cutting file sizes down to a quarter of H.264.

The lack of H.265 adoption has largely been due to patent issues, but it brought massive benefits over its predecessor, including higher-quality footage with a big reduction in file size. H.265 also has some pretty demanding hardware needs, and it's taken a while for some companies to catch up. Premiere Pro, for example, only recently started to get GPU acceleration for H.265.

But H.265 allowed you to get a similar level of quality at half the file size of H.264. The new Versatile Video Coding engine, also known as H.266, looks set to cut those file sizes in half again, essentially offering you the same level of quality as H.264 at only a quarter of the file size.

According to The Verge, Fraunhofer says that VVC could be the path forward for the industry, allowing companies to skip H.264 and H.265 entirely without having to deal with the patent, royalty and licensing headaches.

Through a reduction of data requirements, H.266/VVC makes video transmission in mobile networks (where data capacity is limited) more efficient. For instance, the previous standard, H.265/HEVC, requires 10 gigabytes of data to transmit a 90-minute UHD video.

With this new technology, only 5 gigabytes of data are required to achieve the same quality. Because H.266/VVC was developed with ultra-high-resolution video content in mind, the new standard is particularly beneficial when streaming 4K or 8K videos on a flat-screen TV. Furthermore, H.266/VVC is ideal for all types of moving images: from high-resolution 360° video panoramas to screen-sharing content.
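As a quick sanity check of those figures, you can convert a file size and running time into an average bitrate. This is just back-of-the-envelope arithmetic (assuming decimal gigabytes, i.e. 1 GB = 10^9 bytes, which is how such figures are usually quoted), not anything from the VVC spec itself:

```python
def avg_bitrate_mbps(size_gb: float, minutes: float) -> float:
    """Average bitrate in megabits per second for a video file of
    `size_gb` gigabytes lasting `minutes` minutes."""
    bits = size_gb * 10**9 * 8   # decimal GB -> bits
    seconds = minutes * 60
    return bits / seconds / 10**6

# The 90-minute UHD example above:
hevc = avg_bitrate_mbps(10, 90)  # H.265/HEVC at 10 GB
vvc = avg_bitrate_mbps(5, 90)    # H.266/VVC at 5 GB, same quality
print(f"HEVC: {hevc:.1f} Mbps, VVC: {vvc:.1f} Mbps")
```

So the quoted 10 GB works out to roughly 14.8 Mbps under HEVC, and the same quality under VVC to roughly 7.4 Mbps.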

Primarily, the benefit mentioned is the reduced bandwidth requirement for mobile networks, but it has further implications. I know people who still don't upload to YouTube in 4K because of the file sizes required (4x larger than 1080p if you want the same level of quality). The new H.266 codec would bring those 4K videos down to the same file sizes as their current 1080p videos, making those higher-resolution uploads much easier to deal with, especially on slower connections.

And with the push to 8K (which means files 16x larger than 1080p with the same codec and relative bitrate), very few will be uploading in that resolution, even if they're able to shoot it, due to the massive data requirements. And phones are shooting 8K now, too, even if it's pretty terrible. So H.266 would allow you to save some of that precious storage space, especially as so many Android device manufacturers seem to be ditching microSD card slots now.
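The 4x and 16x multipliers above come straight from pixel counts: at the same codec and per-pixel bitrate, file size scales roughly with resolution. A minimal sketch of that arithmetic:

```python
# Standard frame dimensions for the resolutions discussed above.
RESOLUTIONS = {
    "1080p": (1920, 1080),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

def size_multiplier(name: str, baseline: str = "1080p") -> float:
    """Approximate file-size multiplier vs. the baseline resolution,
    assuming the same codec and the same bitrate per pixel."""
    w, h = RESOLUTIONS[name]
    bw, bh = RESOLUTIONS[baseline]
    return (w * h) / (bw * bh)

print(size_multiplier("4K"))  # 4.0
print(size_multiplier("8K"))  # 16.0
```

In practice codecs don't scale perfectly linearly with pixel count, but this is the rough relationship the article's figures rely on.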

Fraunhofer says that the Media Coding Industry Forum (which includes companies such as Apple, Canon, Intel and Sony) is working towards chip designs that can support H.266 at the hardware level. It'll probably be at least a couple of years before we see any serious implementations, but it sounds very promising for the future of video delivery.

[via The Verge]


With No End in Sight to the Coronavirus, Some Teachers Are Retiring Rather Than Going Back to School – TIME

Posted: at 1:32 am

When Christina Curfman thought about whether she could return to her second-grade classroom in the fall, she struggled to imagine the logistics. How would she make sure her 8-year-old students kept their face masks on all day? How would they do hands-on science experiments that required working in pairs? How would she keep six feet of distance between children accustomed to sharing desks and huddling together on one rug to read books?

"The only way to keep kids six feet apart is to have four or five kids," says Curfman, a teacher at Catoctin Elementary School in Leesburg, Virginia, who typically has 22 students in a class. Her district shut schools on March 12, and at least 55 staff members have since tested positive for the coronavirus. "Classrooms in general are pretty tight," she says. "And then how do you teach a reading group, how do you teach someone one-on-one from six feet apart? You can't."

So Curfman, who has an autoimmune disease that makes her more vulnerable to COVID-19, consulted her doctor, weighed the risks of returning to school and decided to retire early after 28 years of teaching. At 55, she's eligible for partial retirement benefits and will take home less pay than if she had worked for a few more years, but the decision gave her peace of mind.

"It's either that or risk your health," she says. "It's kind of a no-brainer."

Recent surveys suggest she's not alone. Faced with the risks of an uncertain back-to-school plan, some teachers, who spent the last few months teaching over computers and struggling to reach students who couldn't access online lessons, are choosing not to return in the fall. The rising number of coronavirus cases in many parts of the country, and recent evidence that suggests the virus can spread indoors via tiny respiratory droplets lingering in the air, have fueled teachers' safety concerns, even as President Trump demands that schools fully reopen and threatens to cut federal funding from those that don't. (Trump has said that older teachers, who are more vulnerable to the virus, could "sit it out for a little while, unless we come up with the vaccine sooner.")

About 20% of teachers said they aren't likely to return to teaching if schools reopen in the fall, according to a USA Today/Ipsos poll conducted in late May. EdWeek Research Center surveys conducted around the same time found that more than 10% of teachers are more likely to leave the profession now than they were before the pandemic, and 65% of educators said they want school buildings to remain closed to slow the spread of the virus.

But the pressure to reopen schools is strong. Recent studies show that students have likely suffered significant learning loss during this period of remote schooling, worsening the achievement gap between affluent and low-income students. Meanwhile, research shows that children are much less likely to suffer the most severe health effects of the virus. The American Academy of Pediatrics released guidance on June 25, recommending that all back-to-school policies aim to have students physically present in school, citing the importance of in-person learning and raising concerns about social isolation, abuse and food insecurity for children forced to remain at home. Dr. Anthony Fauci, the country's top infectious disease expert, agrees. "I feel very strongly we need to do whatever we can to get the children back to school," he said during testimony before the Senate on June 30.

But the health risks are greater for some educators and other school employees, including bus drivers and custodians, than they are for children. Adults over age 65 account for the vast majority of COVID-19 deaths in the U.S. And 18% of public and private school teachers and 27% of principals are 55 or older, according to federal data. That's why researchers at the American Enterprise Institute warned of a school personnel crisis, recommending in May that school districts provide early retirement incentives or create a virtual teaching corps for those who feel safer working remotely.

"I still have not seen any state really address this in their reopening plans. There's passing references to schools needing to do something for their vulnerable population, but you just don't see the activity that would match the personnel challenge that schools are going to face," says John Bailey, an American Enterprise Institute visiting fellow, who wrote the May report. "We shouldn't be putting teachers in a situation where they have to decide between their financial security and their health security."

In Connecticut, where a union survey found that 43% of teachers think they're at higher risk for severe illness if they contract COVID-19 because of their age or an underlying medical condition, Andrea Cohen, who is over 65, decided to retire as an elementary school social worker. The decision was driven by concerns she could bring the virus home to her 95-year-old mother and to her grandchild, who is due to be born in September. "I felt like this was the safest thing to do," she says.

"I trust that they're going to try to come up with some good system, but I just didn't know what the system was going to be, and I couldn't visualize how it was going to work for me in my school office," Cohen says. "All I could see was me in my tiny little office, with six kids, and how it wouldn't be safe for anybody."

In Michigan, where 30% of teachers told the Michigan Education Association they were considering leaving teaching or retiring earlier than planned because of the pandemic, Theresa Mills, 58, decided to retire after an anxiety-ridden spring of teaching literature remotely and trying to build relationships with students online. "The whole idea of being remote and disconnected was equally daunting as the fear of not being safe," she says about the upcoming school year.

Many school districts are considering hybrid plans that involve students rotating between in-person classes and remote learning on different days of the week. But Education Secretary Betsy DeVos criticized those plans during a call with governors on Tuesday, urging schools to be "fully operational" with in-person instruction five days a week, the Associated Press reported.

Loudoun County Public Schools in Virginia, the district where Curfman taught, is planning for students to attend in-person classes two days a week and learn at home the rest of the time, but it is also allowing parents to opt for full-time remote learning.

Curfman says about five families have already asked her to privately tutor her former students and their siblings at home on distance-learning days. It's one example of the nontraditional approaches to schooling caused by the pandemic. As long as she can do so safely, Curfman is considering it.

There's no evidence that teachers are retiring en masse. In the middle of an economic crisis that has left millions unemployed, including public school employees, many teachers aren't looking to flee the profession, despite their concerns about this fall.

"I kind of don't come from a family that retires," says Vicki Baker, a 64-year-old math teacher at the Philadelphia High School for Girls, but she wants to feel safe when she returns to her classroom. "I feel like we have one time to get this right because there's so many things at risk," she says. "If somebody gets sick because they're at school, the students bring it home to their families. I bring it home to mine."

Rachel Bardes holds a sign in front of the Orange County Public Schools headquarters in Orlando, Fla., on July 7, 2020, as teachers protest a mandate that all public schools open in August despite the spike in coronavirus cases in Florida.

Joe Burbank/Orlando Sentinel/AP

College professors have raised similar concerns. Hundreds of Georgia Tech faculty members called for the continuation of remote learning this fall, arguing in an open letter that no faculty, staff, or student should be "coerced into risking their health and the health of their families by working and/or learning on campus when there is a remote/online equivalent." Professors at the University of Notre Dame asked that they be allowed to decide individually whether to teach in person or online.

Meanwhile, the surge in coronavirus cases from Florida to Texas to Arizona has added urgency to the need for safe back-to-school plans.

Before the pandemic, Caren Gonzalez, a chemistry teacher at Tuloso-Midway High School in Corpus Christi, Texas, was planning to retire next year, having promised the Class of 2021 that she would be there to teach them AP Chemistry. During the last few months, she shifted her lesson plans online, uploading videos of herself writing out chemical equations and offering students one-on-one help over Zoom, sometimes meeting as late as 10:30 p.m. to accommodate their schedules. "These are not normal times," she told them. "You don't need to apologize."

But Gonzalez, who will turn 60 in July, questioned whether it would be safe to return to school before there's a coronavirus vaccine, and she decided to retire now. "It's just the uncertainty," she says. "Nobody knows quite what's going to happen."

The Centers for Disease Control and Prevention (CDC) recommends that schools space desks six feet apart; seat only one child per row on school buses; discourage students from sharing toys, books or sports equipment; close communal spaces, such as cafeterias and playgrounds; and create staggered drop-off and pick-up schedules to limit contact between large groups of students and parents. On Wednesday, Trump said he disagreed with the CDC's "very tough & expensive guidelines" for opening schools. "While they want them open, they are asking schools to do very impractical things."

Guidance released Tuesday by the Texas Education Agency requires schools to hold daily in-person instruction, but allows parents to opt for remote learning instead. The guidelines say schools should attempt to have hand sanitizer or hand washing stations at every entrance and in every classroom, should keep windows open to increase airflow when possible and should consider spacing desks six feet apart.

Gonzalez worries that such guidance will be difficult to implement on the ground and that students or teachers will suffer the consequences.

"Six feet apart becomes three feet apart, becomes 'Don't worry about it at lunchtime in the lunch room,' so it just kind of degrades," Gonzalez says. "And it's not because the districts are trying to cheat teachers or their students or anything. They're just trying to do what they're told with the resources that they have."

Without a boost in state or federal funding, many school districts might not have the resources they need. An analysis by the American Federation of Teachers estimated that the average school will need an extra $1.2 million, or $2,300 per student, to reopen safely. An analysis by the School Superintendents Association estimated it would cost less, but still nearly $2 million for the average school district to buy enough hand sanitizer, disinfectant wipes and masks and to hire more custodial staff and nurses or aides to check temperatures regularly.

"I don't think anybody is going back thinking, 'This is fine, everything's normal,'" Gonzalez says. "I think everybody's got a little bit of apprehension if they've been paying attention."


Write to Katie Reilly at Katie.Reilly@time.com.


ISI admission test 2020 postponed again – Times of India

Posted: at 1:32 am

NEW DELHI: The Indian Statistical Institute (ISI) admission test 2020 has been postponed again. The exam, which was earlier scheduled for August 2, now has no firm date; the rescheduled date will be announced later.

An official notice available on the ISI website says: "The ISI Admission Test 2020, which had earlier been rescheduled to August 02, 2020, is postponed. In view of the uncertainty prevailing on account of the ongoing COVID-19 pandemic, it is not possible to declare a firm date for the Test at this time, but it is not expected to be held before the second week of September 2020. Announcement of the exact date will be made after proper assessment of the situation, bearing in mind the well-being and safety of the candidates, and ensuring that they are able to appear for the Test without any risk or hardship. Candidates will be duly notified of the new date for the Admission Test well in advance."

Once the exam date is announced, registered candidates will be given an option to change their exam centre preferences and to upload pending documents. The notice further reads: "As soon as the date is announced, all registered candidates will be provided a small window for making changes in their centre preferences and uploading pending documents like results of qualifying examination (if appeared in 2020), and those related to reservation category (OBC-NCL/SC/ST/PwD), GATE and INMO, by logging into their accounts on the online Application portal."


The rise of Thirst Trap culture among Gen Z Indian women – ETtech.com

Posted: at 1:32 am

This practice of picture-posting is referred to as sharing "thirst traps." The Cambridge Dictionary defines a thirst trap as "a statement by or photograph of someone on social media intended to attract attention, or to make people who see it sexually interested in them."

Thirst-trapping is a pronounced culture in the US, popularised by influencers like Kim Kardashian and Kylie Jenner over the last couple of years. At least 150-200 thirst trap tweets are posted on Twitter every hour, and more than half of these are from the US, as per analytics portal Hashtagify. The practice is age-agnostic, with celebrities in their 60s, like Madonna, making headlines for posting thirst traps on Instagram earlier this week.

Thirst trap culture has gained traction in India recently. Google Trends India suggests the interest for the term has peaked thrice in the last six months, indicating its growing influence among Indian social media users. The last spike was seen in May 2020.

These Gen Z women are part of a cohort of over 470 million people in the country (as per a Bloomberg analysis) born roughly between 1996 and 2010. They were born alongside the birth of the internet. Growing up, they've had accounts on every social media platform from Facebook to Instagram. And their internet habits are very different from those of their predecessors, the millennials.

According to Facebook's advertising vertical, Facebook.com/Ads, over 4 million women Instagrammers from India in the 18-24 age group show interest in human sexuality as a topic, as opposed to 2.7 million in the 25-31 cohort.

"They're fluid about their identity online," notes Ishtaarth Dalmia, an anthropologist and AVP at digital agency Dentsu Webchutney. Most of these women's social media bios don't reflect their names but have emoticons or random words instead. They are quick to open and shut social accounts. They have private Instagram accounts dedicated to posting thirst traps, some with several thousand followers.

Seeking Validation

Thirst trapping is also a shortcut to getting validation, an important marker of identity formation for Gen Z.

"This generation feels so overwhelmed by its inability to control everything that's going on in the world that fetching likes and shares brings in a sense of control. It's something they can rely on," says Dalmia.

Influencers like the Kardashians glorify this idea as well. "They are indirectly sending this message that posting provocative content can make you the next youngest millionaire," says Sascha Kirpalani, a Mumbai-based psychologist.

However, the motivation behind posting thirst traps is a lot deeper, she quickly adds. It is a means of self-expression for a lot of women in this generation, a form of feminism, of reclaiming power over their own bodies.

To some, like Pooja Mishra from Mumbai, it implies breaking away from the repression they've seen previous generations of women go through.

"I don't mind sharing thirst traps. It's a part of me, not my entire personality. That's my face and body I walk around with 24x7. I shouldn't have to hide it in the online world because of the threat of someone being creepy," says the Gen Z chartered accountant.

Even its predecessors note that this cohort is far more vocal about its sexuality and love for erotica, both in words and visuals, than they were. "What you see them post online is actually a manifestation of what we used to write in our diaries," says Shreemi Verma, a Mumbai-based content creator in her late 20s.

A lot of these women post thirst pictures via their alt-accounts (alternate accounts). "Perhaps that's why they find it to be a safer space, as it doesn't come with judgement from peers or family," Verma reckons.

Changing Perceptions

Gen Z women are now having an outsized influence on the way women, in general, express their sexuality online.

"Before Gen Z Twitter became popular, hardly anyone spoke of erotica. People labelled it as explicit content," notes Srishti Millicent, a digital marketer based in Chandigarh.

"Now these 18-23-year-olds post thirst traps and they go viral," she adds.

By the way, Millicent is only 25. But she too feels it's the "younger" girls who make her feel more comfortable about posting thirst traps online now.

On Twitter, thirst traps start with one person from this Gen Z community tweeting and urging others to post their pictures, notes Kejal Shah, a 27-year-old HR professional from Mumbai. "That's how it starts trending. You don't feel awkward doing it because everyone else is getting on board as well."

Shah herself has posted an occasional thirst trap on her social media accounts in recent times.

Pune-based Ira, a 24-year-old radio jockey, sees this trend as part of an attempt where "women make online spaces safer for women."

Ira shares the story of a fellow Gen Z woman who was recently harassed by a man over one of her pictures online. She traced him to his Facebook account, which led her to the guy's mother's profile. She then confronted him with screenshots of his inappropriate messages, asking if she should show his mother what her son was up to. The guy was profusely apologetic.

Across social media platforms, these women have now created a sorority of their own.

"Every Gen Z woman in India who is comfortable posting thirst traps online is likely to follow several others like her. Inside this tiny community, people hype each other as enthusiastically as they cancel a member who isn't genuine," says Ira.

They operate under pseudonymous accounts, but a look at the list of people they follow gives an insight into their minds. It has artists, poets, activists, fake news and misinformation fighters. Satire and irony are dominating themes of their content.

For advertisers targeting Gen Z, this segment is still an enigma they're trying to decrypt, notes Dentsu Webchutney anthropologist Dalmia.

Some of them also post pictures of celebrities they are thirsting for. Others highlight the problematic nature of 365 Days, a Polish erotica movie streaming on Netflix that has been trending on the platform in India for weeks now, arguing that it glorifies molestation and abduction.

Many have developed a thick skin when it comes to receiving unwarranted comments from men on their posts. However, some question whether these women are in fact being anti-feminist, since they eventually end up catering to a male fantasy of women.

Many thirst trappers end up deleting their pictures after uploading them, fearing negative attention. Some of them also worry they may permanently attach their self-esteem to the number of likes they get on these pictures.

They say thirst traps are part of the larger realm of body-positivity content. But they also know that while every thirst trap is body-positive, not all body-positive posts are thirst traps.

On social media, however, all are happily welcome to co-exist.

(Illustration and graphics by Rahul Awasthi)


Moyra Davey and Kate Zambreno on Writing As If You Were Dead – frieze.com

Posted: at 1:32 am

Moyra Davey: Drifts [2020] is your most voluptuous and sensuous work to date, even though much of the novel is about struggle and feeling at a miserable impasse with the book you are writing. You manage to both write the problem and, simultaneously, provide the solution. You talk about block, but the writing feels like its opposite: flow. You invoke the "[new] texture of boredom", the energy of the internet, its distracted nature, and wonder how to invest the writing with these particular drives, how to replicate the mind wandering. You name the affect you crave for your novel and, immediately, the writing serves it up. You have found the perfect form: a novel made up of fragments, using the note-taking practice you find so vital.

I know from the conversations between you and your friends in Drifts that, like me, you prize your relationships with writer-friends, the (usually women) interlocutors who prod us, open doors and offer sympathetic guidance, often with lightning speed. "Try to be with flowers," the poet Bhanu Kapil says to you in Drifts; later, in an exchange with the writer Sofia Samatar, you talk about "empty[ing] a text in order to fill it". This speaks to a particular difficulty I'm having with a shapeless, bloated text, about which I've come to feel phobic. I wondered if you could expand on that particular point: the emptying out that might lead to structure.

Kate Zambreno: There's something monstrous to the shapeless. I have a fear of it as well. I like to think of writer's block, the dread of it, as resulting from too much material: too many notebooks filled up. For the period I dramatize in Drifts, it was also about the desire for my work to feel private and ongoing rather than being instantly published and commodified, to be read only by my correspondents, my addressees, entirely women and non-binary writers.

In the book, one of the characters, Anna, says to the narrator that "the notes are the work". I tend to gravitate towards writing that is about process: yours, Kapil's, Samatar's, Hervé Guibert's and W.G. Sebald's. I don't think about structure, per se, or story, but I am interested in narrative and form and repetition. There's such an organic flow to the form of your books Les Goddesses/Hemlock Forest [2017] and Burn the Diaries [2014] (the titles, the places, the sense of travelling through) that every writer who reads them begins to mimic it. These books read like they were written in the time they were conceived and are about time. When my writing feels shapeless and bloated, like it does now, malingering for years around the study of Guibert I have been working on, which was supposed to be a short text, I realize that writing is time, and must take the time it needs.

I've always been drawn to the suspense in Thomas Bernhard, Sophie Calle, Guibert and Sebald. Their works are note-like and documentary, but also read like detective stories. There's an atmospheric moodiness or tension, also something that's withheld from us throughout. In The Compassion Protocol [1995], Guibert's narrator says (I'm paraphrasing here) that he most feels like he's writing fiction when he's writing in a diary. There's a noir or speculative quality to Drifts: the sense of a coded reality that the narrator is trying to figure out.

MD: The last line of Drifts mentions beauty: not knowing what beauty is, but that it adheres to many things. I wondered how you would end this book, as it builds towards an almost unbearable tension: your fear of not being able to finish it, mounting material anxieties, your pregnant body about to explode. The pressure seems almost uncontainable. And then there is a pause, a muting, and you re-emerge using the beautiful device of simply noting a date, 7 December, to mark the event of your daughter's birth. It is the opposite of Maggie Nelson's choice to narrate the minutiae of giving birth in The Argonauts [2015], but your laconic version is extraordinary in its own way, communicating something momentous with a rare economy of means. It shifts from the compulsive, yet no less compelling, uploading of life that characterizes most of the book. Drifts gives the fantastic impression of living and writing life simultaneously, and of doing it without shame, or perhaps doing it in such a way that shame becomes beautiful.

KZ: Originally, the ending included more of the duration and exhaustion of my labour; I was in prodromal labour for almost a month. I had already written about this fugue state in Appendix Project [2019], and I always imagined I'd pick it up again in Ghosts, the as-yet-unwritten novel that's supposed to be its sequel. Vertigo, the second half of Drifts, is elliptical and fragmentary; less an exhaustive recitation of the facts of a life and more about the claustrophobic intimacy of it. It was important to me that the book didn't show a journey of motherhood; I didn't want a baby to solve the main protagonist's existential crisis, which is a crisis of the book she is trying to write. It was Samatar who told me that too much of the baby, even the joy of her, overdetermined the book. In a way, it goes against what some readers might want. Also, I am resistant to the ways a birth story is often told as a coherent narrative. Trauma is more fragmented, remembered later, in glimpses.

MD: The few details you give us wholly convey this bewildered state, but you make the experience completely your own. Your tender, yet slightly detached, observations of the baby and the hilarious depiction of the postpartum, scatological scene of retention/expulsion are consistent with all the earlier, non-maternal writing in Drifts. I've read quite a bit of the literature of motherhood and your voice is like no other I've encountered.

KZ: I want to hear more about writing and shame, its relationship to beauty, as it's something I think about a lot. I wonder if it's why we are both so drawn to Guibert, Kapil and David Wojnarowicz. There's this moment at the end of Drifts where I cite you, trying to reference a work of yours, Dr. Y., Dr. Y. [2014], in which you are naked and pregnant in bed with your dog. A line from Anne Sexton's Words for Dr. Y. [1978] frames the central image: "Why else keep a journal, if not to examine your own filth?" So much of your work, both the videos and the writing, engages with the diary or notebook, the intimate space of the domestic. But there's also an intriguing opacity in your work that I identify with, in tension, perhaps, with this beautiful transparency of the daily: the refusal to go back to trauma or childhood, that space of memoir you refer to as "the wet" in Les Goddesses/Hemlock Forest.

MD: Shame is only ugly when it's hidden. It can be breathtakingly beautiful when a writer puts it out there without fanfare. I'm quite preoccupied with shame, so I home in on authors who've found ways to write it. That's what good literature does: in the right hands, shame doesn't even exist because it becomes something else. I think it was Nadine Gordimer who said: "Write as if you were dead." This is something I try to do, but I am not there yet. The artwork with my dog and me in bed is surrounded by little photos of her shitting. I thought the curve of her arched back mimicked my pregnant belly; I was no doubt projecting onto her defecation a wish to empty myself out. The unofficial title for that piece was Ante-Partum Document. I showed it to my gallerist at the time, Colin de Land, and he recoiled from it, compared it to the worst of feminist art. I don't hold any of this against him, but I was ashamed and put the piece away. I have Gregg Bordowitz to thank for encouraging me to revisit it nearly 20 years later and remake it using the Sexton quote. I was reading Sexton for another project, the video Notes on Blue [2015], and came across that line in Words for Dr. Y., which is dedicated to her analyst. Entirely coincidentally, Dr. Y. was the name I gave my shrink in the video Fifty Minutes [2006], so I titled the new piece Dr. Y., Dr. Y.

KZ: So much of your art seems to be about "The Problem of Reading," to quote the title of a 2003 work of yours.

MD: There are many problems of reading. There is the research problem: trying to put your hand on the right thing, and often not knowing what that is. I met a graduate student in Toronto, named Kate Whiteway, who used the expression "being in the Eros of research." My oldest friend, the writer and translator Alison Strayer, spoke of that zone of reading as a state of bliss, when there's never a question, where one thing leads to another. But, for me, there is also the problem of being over-identified with reading, and so I am trying to change it up. In my latest work-in-progress, I originally decided there would be no citations, but then I felt utterly compelled to write about Hilton Als, Carson McCullers and Christa Wolf. I don't know that I'll ever write something that is not dependent on communion and connection.

KZ: I also feel I'm often over-identified with reading. It seems people sometimes read my work to get a bibliography out of it. Which is perverse, because I frequently go through periods of extreme reading allergy. So much of Drifts involves searching for books to read but finding everything too porous. It's a relief when I am reading ecstatically, when I have the time and space. Especially when I'm pregnant (I'm again in my second trimester) I can't read much. I spend a lot of time looking and thinking and feeling, and then eating and sleeping. I become like my dog. Which reminds me of that moment in Burn the Diaries, where you describe Eileen Myles's passage about her dog, Rosie, shitting and you feeling a kinship looking at your own dog, Bella. I felt such an uncanny affinity reading that passage, because so much of my notetaking was observing dailiness. I'm inspired by the way your mind makes connections over texts. Much of Drifts came from walking around my neighbourhood and the city, desiring to take series of photographs, whether of my dog on the porch or the bark of the trees or the feral cats or Halloween decorations. Throughout, I was thinking about images, like the 16th-century prints of Albrecht Dürer, Peter Hujar's photographs of animals [1960s-80s], Sarah Charlesworth's Stills [1980]. The book includes not only some of my amateur photographs but also collages and diptychs. I admire how you use and philosophize photography, including your own, in your writing. Was your writing practice always concurrent with your image-making practice?

MD: For a long time, I only made photographs, and dabbled in the moving image. I didn't really start to write until after editing Mother Reader [2001], at which point I wanted to take a break from photography and focus on writing and video. My most recent photographs are black-and-white images of chickens, horses and dogs taken with my late-1960s Hasselblad. The series was spawned partly by a recent film project and partly by a desire to actively channel Hujar's animal portraits. That was a humbling learning experience. It's uncanny how we have overlapping spheres of influence and projective desire for certain artists and writers, even down to the title of your forthcoming book on Guibert, To Write as if Already Dead. I love hearing that the impulse to write Drifts was so strongly linked to your photographic drive. Maybe that is the answer to my bloated, stalled text: to reconnect again with images, as filtered through writing.

This article first appeared in frieze issue 212.

Main image: Moyra Davey, Jane (detail), 1984, gelatin silver print, 51 × 41 cm. Courtesy: the artist, greengrassi, London, and Galerie Buchholz, Berlin/Cologne/New York


What Makes A YouTube Video Hit The Trending Tab? This Data Scientist Broke Down Every Single Video That Trended In 2019. – Tubefilter

Posted: at 1:32 am

Ah, the Trending tab. YouTube's showcase of videos that a wide range of viewers would find interesting. Like many other facets of the platform's content recommendation algorithm, the Trending tab has been a frequent target of suspicion from creators who want to know more about its inner workings, notably how and why it surfaces some seemingly popular videos, but not others.

We'll likely never get a true peek under the hood from YouTube itself. But thanks to data scientist Ammar Alyousfi, we now have a massive amount of data about every single video that hit the tab in 2019, as well as corresponding conclusions about what qualities these videos tend to share.

To compile his report, Alyousfi ran an automated script that scraped data from YouTube's Trending tab every day throughout the year. According to YouTube, Trending isn't personalized and displays the same list of trending videos in each country to all users, so he didn't have to account for the possibility that different videos might show up for users in different regions.

Alyousfi found that over the course of 2019, YouTube's Trending tab displayed 11,177 unique videos. If that sounds smaller than expected, it's because Trending actually displayed 72,994 total entries, or around 200 videos per day, but a number of those videos trended for multiple days. For the purpose of his report, Alyousfi chose to examine data on all of the 72,994 trending videos, not on unique trending videos only, he said. "The reasoning behind this is that we are interested in videos considered trending by YouTube. So if a video is considered trending for 3 days, then we believe it has more trending power and more trending characteristics than a video trending for 1 day only; thus, it should have more weight. So we include the 3 occurrences of that video in the analysis."
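Alyousfi's weighting choice is easy to reproduce: count every daily appearance of a video rather than deduplicating. A minimal sketch in Python (the sample records are invented for illustration, not taken from his dataset):

```python
from collections import Counter

# Hypothetical daily snapshots of the Trending tab: one entry per
# (day, video) appearance, so a video that trends 3 days appears 3 times.
daily_snapshots = [
    ("2019-04-23", "boy_with_luv"),
    ("2019-04-24", "boy_with_luv"),
    ("2019-04-25", "boy_with_luv"),
    ("2019-04-23", "tech_review"),
]

video_ids = [video for _, video in daily_snapshots]

# Weighted analysis (Alyousfi's approach): each trending day counts once.
weighted_counts = Counter(video_ids)

# Unique-video analysis: each video counts once, however long it trended.
unique_videos = set(video_ids)

print(weighted_counts["boy_with_luv"])  # 3 appearances -> weight 3
print(len(unique_videos))               # 2 distinct videos
```

Run over a full year of snapshots, the `Counter` totals would give the 72,994 weighted entries while the set would give the 11,177 unique videos.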

So, which videos had the most trending power? In 2019, six videos appeared on the Trending tab for a staggering 30 days:

Perhaps unsurprisingly, three of them are music videos, and two are related to mega-popular K-pop band BTS, which was also behind YouTube's most-watched Trending video of 2019. The music video for "Boy With Luv," its Halsey collaboration, had 195,376,667 views when it first appeared on the tab April 23, Alyousfi found. (For scale, he found 90% of videos hit Trending for the first time when they had fewer than 2,752,317 views. The smallest number of views a Trending video had when it entered was 53,796, and the average view count was 1,387,466.)

None of the longest-trending videos came from the YouTube channels that most frequently produced trending content. Alyousfi's data showed that, globally, the top Trending channel of 2019 was Canadian YouTuber Linus Sebastian's Linus Tech Tips (11 million subscribers, 120 million views per month), which had a whopping 365 uploads appear on the tab. His channel was closely followed by cooking-focused Binging with Babish (7.3 million, 70 million), which produced around 360 Trending videos.

Other top Trending channels include: culinary magazine Bon Appétit (likely thanks to its incredibly popular, recently controversial series Bon Appétit Test Kitchen) with 355 videos; life hack channel The King of Random (12 million, 40 million) with 350; tech creator and YouTube Original star Marques Brownlee (11 million, 60 million) with 350; WWE (62 million, 1.5 billion; yes, seriously, 1.5 billion views per month) with around 345; and Tati Westbrook (9.3 million, 10 million) with 330.

Here are all 19 top Trending channels:

Creators have long wondered whether uploading on specific days or at specific times, using all caps in their titles, or having lengthy/link-riddled descriptions affects the reach of their content. Alyousfi broke down these and a few more hypotheses to find out if any, well, trends show up amongst videos that appeared on the tab.

He found that Trending uploads were spread pretty evenly across days of the week. Tuesday, with 11,986 trending videos, was the highest posting day, while Saturday (7,345) lagged noticeably behind all other days. As for time of day, he found that videos uploaded between noon and 2 p.m. Eastern were the most likely to hit Trending, while videos uploaded between 6 a.m. and 11 a.m. Eastern were the least.

With that in mind, though, it's worth noting that the majority of videos did not appear on Trending on the actual day they were published. "On average, a video appears on the trending list after 5.6 days of publishing," Alyousfi wrote. "Also, 95% of the videos took less than 13 days to appear."

His data showed several additional trends among video titles, including: a full 50% of Trending videos had no all-caps words in their titles; titles were generally between 36 and 64 characters in length; and the most common words used in Trending titles were "official," "video," "2019," "vs," "trailer," "music," "game," "new," "highlights," "first," and "challenge." (Also, the fire emoji was the most commonly used on Trending videos.)
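Title-level findings like these are straightforward to compute from a list of titles. A small sketch (the sample titles and the exact definition of an "all-caps word" are assumptions for illustration):

```python
def all_caps_words(title: str) -> list[str]:
    # Words of two or more letters written entirely in capitals.
    return [w for w in title.split() if len(w) >= 2 and w.isalpha() and w.isupper()]

titles = [
    "Official Music Video 2019",
    "INSANE Challenge vs My Brother",
]

# Count titles with no all-caps words, and measure title lengths in characters.
no_caps = sum(1 for t in titles if not all_caps_words(t))
lengths = [len(t) for t in titles]

print(no_caps)   # 1 of the 2 sample titles has no all-caps words
print(lengths)   # character lengths, comparable to the 36-64 range reported
```

Applied to the 72,994 trending entries, counts like these yield the 50% all-caps figure and the length distribution Alyousfi reports.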

One of the last findings Alyousfi discusses is video tags. He says almost all Trending videos used tags, and the average number used per video is 21. But, he notes, YouTube tells creators that "tags can be useful if content in your video is commonly misspelled. Otherwise, tags play a minimal role in helping viewers find your video."

"But if that was true, why would YouTube add a lot of tags to their videos?" he asks, pointing out that YouTube's 2019 Rewind video had 39 tags. He didn't reach any concrete conclusions about whether tags affect video surfacing, but said that just 3.5% of Trending videos had no tags.

You can see his full report here.


The path to real-world artificial intelligence – TechRepublic


Experts from MIT and IBM held a webinar this week to discuss where AI technologies are today and advances that will help make their usage more practical and widespread.

Image: Sompong Rattanakunchon / Getty Images

Artificial intelligence has made significant strides in recent years, but modern AI techniques remain limited, a panel of MIT professors and IBM's director of the Watson AI Lab said during a webinar this week.

Neural networks can perform specific, well-defined tasks, but they struggle in real-world situations that go beyond pattern recognition and present obstacles like limited data, reliance on self-training, and answering questions like "why" and "how" versus "what," the panel said.

The future of AI depends on enabling AI systems to do something once considered impossible: Learn by demonstrating flexibility, some semblance of reasoning, and/or by transferring knowledge from one set of tasks to another, the group said.

SEE: Robotic process automation: A cheat sheet (free PDF) (TechRepublic)

The panel discussion was moderated by David Schubmehl, a research director at IDC, and it began with a question he posed asking about the current limitations of AI and machine learning.

"The striking success right now, in particular in machine learning, is in problems that require interpretation of signals: images, speech and language," said panelist Leslie Kaelbling, a computer science and engineering professor at MIT.

For years, people tried to solve problems like detecting faces in images by directly engineering solutions, and those didn't work, she said.

"We have become good at engineering algorithms that take data and use that to derive a solution," she said. "That's been an amazing success." But it takes a lot of data and a lot of computation, so for some problems formulations aren't yet available that would let us learn from the amount of data on hand, Kaelbling said.

SEE:9 super-smart problem solvers take on bias in AI, microplastics, and language lessons for chatbots(TechRepublic)

One of her areas of focus is robotics, and it's harder to get training examples there because robots are expensive and parts break, "so we really have to be able to learn from smaller amounts of data," Kaelbling said.

Neural networks and deep learning are the "latest and greatest way to frame those sorts of problems and the successes are many," added Josh Tenenbaum, a professor of cognitive science and computation at MIT.

But when talking about general intelligence and how to get machines to understand the world there is still a huge gap, he said.

"But on the research side really exciting things are starting to happen to try to capture some steps to more general forms of intelligence [in] machines," he said. In his work, "we're seeing ways in which we can draw insights from how humans understand the world and taking small steps to put them in machines."

Although people think of AI as being synonymous with automation, it is incredibly labor intensive in a way that doesn't work for most of the problems we want to solve, noted David Cox, IBM director of the MIT-IBM Watson AI Lab.

Echoing Kaelbling, Cox said that leveraging tools today like deep learning requires huge amounts of "carefully curated, bias-balanced data," to be able to use them well. Additionally, for most problems we are trying to solve, we don't have those "giant rivers of data" to build a dam in front of to extract some value from that river, Cox said.

Today, companies are more focused on solving some type of one-off problem, and even when they have big data, it's rarely curated, he said. "So most of the problems we love to solve with AI, we don't have the right tools for that."

That's because we have problems with bias and interpretability with humans using these tools and they have to understand why they are making these decisions, Cox said. "They're all barriers."

However, he said, there's enormous opportunity looking at all these different fields to chart a path forward.

That includes using deep learning, which is good for pattern recognition, to help solve difficult search problems, Tenenbaum said. To develop intelligent agents, scientists need to use all the available tools, said Kaelbling. For example, neural networks are needed for perception, as well as higher-level and more abstract types of reasoning to decide, say, what to make for dinner or how to disperse supplies.

"The critical thing technologically is to realize the sweet spot for each piece and figure out what it is good at and not good at. Scientists need to understand the role each piece plays," she said.

The MIT and IBM AI experts also discussed a new foundational method known as neurosymbolic AI, which is the ability to combine statistical, data-driven learning of neural networks with the powerful knowledge representation and reasoning of symbolic approaches.

Moderator Schubmehl commented that having a combination of neurosymbolic AI and deep learning "might really be the holy grail" for advancing real-world AI.

Kaelbling agreed, adding that the answer may involve not just those two techniques but others as well.

One of the themes that emerged from the webinar is that there is a very helpful confluence of all types of AI now being used, said Cox. The next evolution of very practical AI will be understanding the science of finding things and building a system we can reason with, grow, and learn from, and that can determine what is going to happen. "That will be when AI hits its stride," he said.



DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – VentureBeat



Researchers from Google's DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression.

The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohammed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations.

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

"Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present," the paper reads. "This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress."

The paper incorporates a range of suggestions, such as analyzing data colonialism and decolonization of data relationships, and employing the critical technical approach to AI development Philip Agre proposed in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that "an indifferent field serves the powerful." VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020, as more businesses and national governments consider how to put AI ethics principles into practice.

The DeepMind paper interrogates how colonial features are found in algorithmic decision-making systems and what the authors call "sites of coloniality," or practices that can perpetuate colonial AI. These include beta testing on disadvantaged communities, like Cambridge Analytica conducting tests in Kenya and Nigeria, or Palantir using predictive policing to target Black residents of New Orleans. There's also "ghost work," the practice of relying on low-wage workers for data labeling and AI system development. Some argue ghost work can lead to the creation of a new global underclass.

The authors define algorithmic exploitation as the ways institutions or businesses use algorithms to take advantage of already marginalized people and algorithmic oppression as the subordination of a group of people and privileging of another through the use of automation or data-driven predictive systems.

Ethics principles from groups like G20 and OECD feature in the paper, as well as issues like AI nationalism and the rise of the U.S. and China as AI superpowers.

"Power imbalances within the global AI governance discourse encompasses issues of data inequality and data infrastructure sovereignty, but also extends beyond this. We must contend with questions of who any AI regulatory norms and standards are protecting, who is empowered to project these norms, and the risks posed by a minority continuing to benefit from the centralization of power and capital through mechanisms of dispossession," the paper reads. Tactics the authors recommend include political community action, critical technical practice, and drawing on past examples of resistance and recovery from colonialist systems.

A number of members of the AI ethics community, from relational ethics researcher Abeba Birhane to Partnership on AI, have called on machine learning practitioners to place people who are most impacted by algorithmic systems at the center of development processes. The paper explores concepts similar to those in a recent paper about how to combat anti-Blackness in the AI community, Ruha Benjamins concept of abolitionist tools, and ideas of emancipatory AI.

The authors also incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.


AI in Human Resources: To AI or Not to AI? – Analytics Insight


Every department in a company has its own challenges.

In the case of Human Resources, recruitment and onboarding processes, employee orientations, process paperwork, and background checks are a handful, and many a time painstaking, mostly because of the repetitive and manual nature of the work. The most challenging of all is engaging with employees on human grounds to understand their needs.

As leaders today observe the AI revolution across every process, Human Resources is no exception: there has been a visible wave of AI disruption across HR functions. According to a 2017 IBM survey of 6,000 executives, 66% of CEOs believe that cognitive computing can drive compelling value in HR, while half of the HR personnel surveyed believe it may affect roles in the HR organization. The study clearly exhibits the apprehension of HR executives caused by the AI disruption in their field.

While one aspect of AI creates uneasiness, the other promises convenience. AI aims to empower the HR department with the right knowledge to optimize processes with less manual effort and to mitigate errors.

The COVID-19 pandemic has highlighted the power of AI in real time, including its shortcomings. At the crux of the AI evolution is the minimization of human-labored processes. Sophisticated AI algorithms can analyze large amounts of data in no time and self-educate to recognize and map patterns, which can come in handy for HR staff to plan and operate strategically.

While a human can be biased, get bored and make unintended mistakes, undermining productivity and efficiency, AI programs are consistent and diligent, enabling more of both.

HR executives who perform tasks like applicant tracking, payroll, training, and job postings manually, without automation, state that they spend 14 hours a week on average on these tasks. Leveraging AI to automate these HR processes can be extremely pertinent for meeting the following key business requirements: first, save time and increase efficiency; second, provide real-time responses and solutions that meet employee expectations.

As per a McKinsey study, AI will drastically change business regardless of the industry. AI could potentially deliver an additional economic output of around $13 trillion by 2030, boosting global GDP by about 1.2 percent a year.

Let's dive deeper to understand how AI can help refine HR processes while not necessarily replacing human resource personnel.

1. Improved Employee Experience

Employees are the first customers for any organization. Hence employee experience is as important as customer experience.

As employee experience is becoming the next competitive edge for businesses, the coming days will be focused on providing personalized engagement and improving employee experience for human resources.

According to a Deloitte survey, 80% of HR executives rate employee experience as important, while only 22% believe their organization excels at providing a differentiated employee experience.

Additionally, the advent of the smart workplace has raised the bar for employees' expectations of workspace experience and engagement.

Jennifer Stroud, HR Evangelist & Transformation Leader at ServiceNow, says, "We have seen the need for chatbots, AI and machine learning in the workplace to drive more productivity as well as modern, consumerized employee experiences. These consumer technology solutions are exactly what employees want in the workplace."

Engaging AI can help the HR department provide personalized employee engagement experiences across the entire employee lifecycle, right from recruitment and onboarding to career pathing.

2. Empowering HR to Make Data-Driven Decisions

For many people, the data-to-decision workflow looks like the figure below.

[Figure: a largely manual data-to-decision workflow. Source: jobsoffice]

Many HR technologies still follow the above workflow and depend on manual methods to glean insights from data. This task grows tedious and creates a bottleneck for end-users (data analysts) trying to draw insights within the stipulated time, leading to decisions made on outdated data.

While frontier technologies like data analytics are advancing to provide real-time data to make fast and fact-based decisions, AI can assist Human Resource professionals in harnessing this real-time data and making quick, consistent, and data-driven decisions. After all, the bottom line of HR agility is decision making.

3. Intelligent Automation

Intelligent automation fuses automation with AI, enabling machines to make human-like decisions by self-educating. Apart from augmenting productivity and efficiency in repetitive manual processes, this can help completely remove human intervention from automated processes.

1. More work in less time!

Crafting job descriptions for a particular role, filtering resumes and analyzing skillsets to find the apt talent is not only tiring and tedious, but also tricky for human resource professionals, as a simple overlooked aspect can lead to a significant mistake, which may cost the company dearly in the long run. AI can help HR staff overcome such scenarios by crafting bespoke job descriptions automatically and by assisting them in reading through thousands of resumes in a short time, effectively reducing the time and manual hard work put in by recruiters.

2. Identify the right talent without bias

HR personnel are humans and are likely to exhibit bias subconsciously. AI, on the other hand, is immune to human emotions, which makes it a good fit to process candidate profiles based on the required skillset without regard to a candidate's age, race, gender, geographic area or organizational relationship. Unbiased recruitment is a win-win for both HR staff and the organization. Furthermore, AI can be instrumental in increasing retention rates and establishing cultural diversity.
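A toy illustration of the idea: score candidates purely on skill overlap with the role, never consulting protected attributes. The field names, skills and candidates below are invented for illustration, and in practice even skill-only screeners must be audited, since seemingly neutral features can act as proxies for protected ones:

```python
# Toy skill-based screener: ranks candidate profiles on skill overlap only.
REQUIRED_SKILLS = {"python", "sql", "communication"}

def score(profile: dict) -> float:
    # Only the skills field is consulted; fields like age are ignored by design.
    skills = {s.lower() for s in profile.get("skills", [])}
    return len(skills & REQUIRED_SKILLS) / len(REQUIRED_SKILLS)

candidates = [
    {"name": "A", "skills": ["Python", "SQL"], "age": 52},
    {"name": "B", "skills": ["python", "sql", "communication"], "age": 24},
]

ranked = sorted(candidates, key=score, reverse=True)
print([c["name"] for c in ranked])  # B ranks above A on skill overlap alone
```

The design choice worth noting is negative: fairness here comes from what the scoring function is *not allowed* to read, which is easy to verify in a dozen lines and much harder in a learned model.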

Consider programs like Textio: they help recognize gender bias in job ads, enabling recruiters to embrace neutral language.

3. Streamline employee onboarding

The first day of an employee in an organization is like the first day of a transferred student in a new school. Although employees are grown-ups and possess the cognitive intelligence to adapt easily to a new environment, deep down they look for guidance to help them settle in. Fortunately, organizations have HR staff to do this job. Employees generally have numerous queries on their first day regarding company policies, leaves, compensation, notice periods, insurance claims, etc. As intriguing as the questions may be for an employee, these queries can turn repetitive and exhausting for HR personnel over time. Engaging AI chatbots makes it simple to answer such repetitive questions and frees up time for the HR staff to concentrate on other essential tasks.
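How such a bot fields repetitive onboarding questions can be sketched with naive keyword matching (the keywords and policy answers are invented placeholders; production chatbots use trained intent classifiers, and raw substring matching like this would need word-boundary handling):

```python
# Minimal keyword-matching FAQ bot for repetitive onboarding questions.
# Keywords and answers are invented placeholders for illustration.
FAQ = {
    ("leave", "vacation"): "See the leave policy in the employee handbook, section 3.",
    ("insurance", "claim"): "Insurance claims are filed through the benefits portal.",
    ("notice", "resignation"): "The standard notice period is stated in your offer letter.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keywords, reply in FAQ.items():
        if any(k in q for k in keywords):
            return reply
    # Unmatched questions escalate to a human, preserving the HR role.
    return "I'll route this to an HR representative."

print(answer("How do I apply for vacation leave?"))
print(answer("What is our parking policy?"))  # falls through to a human
```

The escalation fallback mirrors the article's point: the bot absorbs the repetitive queries while anything novel still reaches a person.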

4. Optimize employee engagement to build better relationships

Apart from recruitment and onboarding, AI can be used to streamline processes like scheduling meetings, training employees and other such business processes. AI's ability to recognize personas will help Human Resources professionals understand the human aspect of every single employee in depth and enable them to shape a friendly and exciting company culture that provides unique and personalized employee engagement experiences.

5. Manage employee churn

Understanding the factors that cause employee churn, and arresting it, is one of the toughest parts of an HR professional's job. People change jobs for various reasons: financial growth, career growth, a shift in profile, an unsatisfying work environment, etc. Leveraging AI can help the HR department continuously monitor and evaluate employees' thoughts about the organization, the work culture, their degree of satisfaction with the job, and so on. Knowing what offends or drives an employee can help in pinpointing churn factors, and AI can help HR executives perform this task more precisely.

All said and done, even though AI's capabilities can reduce manual work and boost efficiency and productivity, artificial intelligence doesn't possess the emotional intelligence of humans. AI also cannot compensate for the human connection that HR personnel form with employees and leverage to drive engagement and responsiveness.

Therefore, to answer the critical question that haunts HR executives, "Will AI be the reason I lose my job?": No. Not really. The whole idea of AI in HR is the integration of technology to automate the more monotonous HR-related tasks and to optimize processes so that human work adds value in less time. In the AI era, new jobs with new skill requirements will evolve, unleashing the evolution of the HR function in an AI-first world.

Author Detail: Jency Durairaj, Content Writer at SG Analytics, Pune.


Reducing bias in AI-based financial services – Brookings Institution


Artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems. AI's ability to avoid the traditional credit reporting and scoring system that helps perpetuate existing bias makes it a rare, if not unique, opportunity to alter the status quo. However, AI can easily go in the other direction to exacerbate existing bias, creating cycles that reinforce biased credit allocation while making discrimination in lending even harder to find. Will we unlock the positive, worsen the negative, or maintain the status quo by embracing new technology?

This paper proposes a framework to evaluate the impact of AI in consumer lending. The goal is to incorporate new data and harness AI to expand credit to consumers who need it, on better terms than are currently provided. It builds on our existing system's dual goals of pricing financial services based on the true risk the individual consumer poses while aiming to prevent discrimination (e.g., by race, gender, DNA, or marital status). This paper also provides a set of potential trade-offs for policymakers, industry and consumer advocates, technologists, and regulators to debate: the tensions inherent in protecting against discrimination in a risk-based pricing system layered on top of a society with centuries of institutional discrimination.

AI is frequently discussed and ill-defined. Within the world of finance, AI represents three distinct concepts: big data, machine learning, and artificial intelligence itself. Each of these has recently become feasible with advances in data generation, collection, usage, computing power, and programming. Advances in data generation are staggering: "90% of the world's data today were generated in the past two years," IBM boldly stated. To set the parameters of this discussion, below I briefly define each key term with respect to lending.

Big data fosters the inclusion of new and large-scale information not generally present in existing financial models. In consumer credit, for example, it means going beyond the typical credit-reporting/credit-scoring model (often referred to by the name of the most common credit-scoring system, FICO). New data points can include payment of rent and utility bills; personal habits, such as whether you shop at Target or Whole Foods and whether you own a Mac or a PC; and social media data.

Machine learning (ML) occurs when computers optimize over data (standard and/or big data) based on relationships they find themselves, without a traditional, more prescriptive algorithm. ML can surface relationships that a person would never think to test: Does the type of yogurt you eat correlate with your likelihood of paying back a loan? Whether these relationships have causal properties or are only proxies for other correlated factors is a critical question in determining the legality and ethics of using ML. However, it is not relevant to the machine in solving the equation.
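To make the yogurt example concrete, here is a minimal Python sketch, on entirely invented data, of how a learner ranks candidate features purely by their statistical association with repayment, indifferent to whether the association is causal or a proxy:

```python
from statistics import mean

def corr(xs, ys):
    """Pearson correlation, computed directly so the ranking logic is visible."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Each row: (income_decile, greek_yogurt_buyer, repaid) -- invented toy sample
rows = [
    (2, 0, 0), (3, 0, 0), (4, 1, 1), (5, 0, 0), (6, 1, 1),
    (7, 1, 1), (8, 0, 1), (9, 1, 1), (1, 0, 0), (10, 1, 1),
]
repaid = [r[2] for r in rows]
features = {"income_decile": 0, "greek_yogurt_buyer": 1}

# Rank features by strength of association with repayment, nothing more
ranking = sorted(features,
                 key=lambda f: -abs(corr([r[features[f]] for r in rows], repaid)))
print(ranking)
```

On this toy sample the spurious yogurt feature actually outranks income, which is exactly why the legality and ethics questions fall to humans rather than to the machine.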

What constitutes true AI is still being debated, but for purposes of understanding its impact on the allocation of credit and risk, let's use the term AI to mean the inclusion of big data, machine learning, and the next step, when ML becomes AI. One bank executive helpfully defined AI by contrasting it with the status quo: "There's a significant difference between AI, which to me denotes machine learning and machines moving forward on their own, versus auto-decisioning, which is using data within the context of a managed decision algorithm."

America's current legal and regulatory structure to protect against discrimination and enforce fair lending is not well equipped to handle AI. The foundation is a set of laws from the 1960s and 1970s (the Equal Credit Opportunity Act of 1974, the Truth in Lending Act of 1968, the Fair Housing Act of 1968, etc.) designed for a time with almost exactly the opposite problems we face today: too few sources of standardized information on which to base decisions, and too little credit being made available. Those conditions allowed rampant discrimination by loan officers, who could simply deny people because they didn't look creditworthy.

Today, we face an overabundance of poor-quality credit (high interest rates, fees, abusive debt traps) and concerns over the use of too many sources of data that can act as hidden proxies for illegal discrimination. The law makes it illegal to use gender to determine credit eligibility or pricing, but countless proxies for gender exist, from the type of deodorant you buy to the movies you watch.


The key concept used to police discrimination is that of disparate impact. For a deep dive into how disparate impact works with AI, you can read my previous work on this topic. For this article, it is important to know that disparate impact occurs, as defined by the Consumer Financial Protection Bureau, when "a creditor employs facially neutral policies or practices that have an adverse effect or impact on a member of a protected class unless it meets a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact."

The second half of the definition gives lenders the ability to use metrics that may correlate with protected-class attributes so long as the use meets a "legitimate business need" and there are no other ways to meet that need with less disparate impact. A set of existing metrics, including income, credit scores (FICO), and data used by the credit-reporting bureaus, has been deemed acceptable despite having substantial correlation with race, gender, and other protected classes.
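As a hypothetical first-pass screen for disparate impact, the sketch below applies the "four-fifths" adverse-impact ratio borrowed from employment law; the approval counts are invented for illustration:

```python
# Hypothetical disparate impact screen. The 0.8 cutoff is the
# conventional "four-fifths rule" heuristic from employment law,
# used here only as an illustrative red-flag threshold.
def approval_rate(approved, applicants):
    return approved / applicants

def adverse_impact_ratio(protected_rate, reference_rate):
    """Protected group's approval rate relative to the reference group's."""
    return protected_rate / reference_rate

ref = approval_rate(720, 1000)    # reference group: 72% approved (invented)
prot = approval_rate(450, 1000)   # protected class: 45% approved (invented)
ratio = adverse_impact_ratio(prot, ref)
flag = ratio < 0.8
print(round(ratio, 3))  # 0.625 -> below 0.8, flag for closer review
```

A flagged ratio does not itself establish a violation; under the CFPB definition above, the lender may still show a legitimate business need that cannot be met by a less disparate alternative.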

For example, consider how deeply correlated existing FICO credit scores are with race. To start, it is telling how little data is made publicly available on how these scores vary by race. The credit bureau Experian is eager to publicize one of its versions of FICO scores by people's age, income, and even what state or city they live in, but not by race. However, federal law requires lenders to collect data on race for home mortgage applications, so we do have access to some data. As shown in the figure below, the differences are stark.

Among people trying to buy a home, generally a wealthier and older subset of Americans, white homebuyers have an average credit score 57 points higher than Black homebuyers and 33 points higher than Hispanic homebuyers. The distribution of credit scores is also sharply unequal: more than 1 in 5 Black individuals have FICO scores below 620, as do 1 in 9 in the Hispanic community, while the same is true for only 1 in 19 white people. Higher credit scores let borrowers access more types of loans at lower interest rates. One suspects the gaps are even wider in the population beyond those trying to buy a home.
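The quoted fractions can be made explicit with a line of arithmetic:

```python
# Shares below a 620 FICO, as quoted in the text above
share_below_620 = {"Black": 1 / 5, "Hispanic": 1 / 9, "white": 1 / 19}

relative = share_below_620["Black"] / share_below_620["white"]
print(round(relative, 1))  # 3.8 -> Black homebuyers are roughly 3.8x as
                           # likely as white homebuyers to score below 620
```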

If FICO were invented today, would it satisfy a disparate impact test? The conclusion of Rice and Swesnik in their law review article was clear: "Our current credit-scoring systems have a disparate impact on people and communities of color." The question is moot, because not only is FICO grandfathered, it has also become one of the most important factors used by the financial ecosystem. I have described FICO as the out-of-tune oboe to which the rest of the financial orchestra tunes.

New data and algorithms are not grandfathered and are subject to the disparate impact test. The result is a double standard whereby new technology is often held to a higher standard to prevent bias than existing methods. This has the effect of tilting the field against new data and methodologies, reinforcing the existing system.

Explainability is another core tenet of our existing fair-lending system that may work against AI adoption. Lenders are required to tell consumers why they were denied. Explaining the rationale provides a paper trail to hold lenders accountable should they be engaging in discrimination. It also gives the consumer information to correct their behavior and improve their chances for credit. However, an AI's method of making decisions may lack explainability. As Federal Reserve Governor Lael Brainard described the problem: "Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did." To move forward and unlock AI's potential, we need a new conceptual framework.

To start, imagine a trade-off between accuracy (represented on the y-axis) and bias (represented on the x-axis). The first key insight is that the current system sits at the intersection of the axes we are trading off: the graph's origin. Any potential change needs to be considered against the status quo, not an ideal world of no bias and complete accuracy. This forces policymakers to consider whether the adoption of a new system that contains bias, but less than the current system, is an advance. It may be difficult to embrace an inherently biased framework, but it is important to acknowledge that the status quo is already highly biased. Thus, rejecting new technology because it contains some level of bias does not mean we are protecting the system against bias. On the contrary, it may mean that we are allowing a more biased system to perpetuate.
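The framework can be sketched as a simple classifier over (change-in-accuracy, change-in-bias) pairs measured against the status quo at the origin; the quadrant labels follow the discussion in this article, and any numeric deltas fed in are hypothetical:

```python
def classify(d_accuracy, d_bias):
    """Map a candidate system's deltas vs. the status quo to a quadrant.

    d_accuracy > 0 means more predictive than today;
    d_bias > 0 means MORE biased than today (so quadrant I,
    the win-win, is positive accuracy with negative bias).
    """
    if d_accuracy > 0 and d_bias < 0:
        return "I: more accurate, less biased (win-win)"
    if d_accuracy > 0 and d_bias > 0:
        return "II: more accurate, more biased (contested)"
    if d_accuracy < 0 and d_bias > 0:
        return "III: less accurate, more biased (reject)"
    return "IV: less accurate, less biased (fairness trade-off)"

# Hypothetical candidate: small accuracy gain, small bias reduction
print(classify(0.03, -0.02))
```

The point of the origin-relative encoding is the article's first key insight: a system with nonzero bias can still be an improvement if its d_bias is negative.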

As shown in the figure above, the bottom left corner (quadrant III) is one where AI results in a system that is both more discriminatory and less predictive. Regulation and commercial incentives should work together against this outcome. It may be difficult to imagine incorporating new technology that reduces accuracy, but it is not inconceivable, particularly given industry incentives to prioritize speed of decision-making and loan generation over actual loan performance (as in the subprime mortgage crisis). Another way to land here is the introduction of inaccurate data that fools an AI into thinking it has increased accuracy when it has not. The existing credit-reporting system is rife with errors: as many as 1 in 5 people may have a material error on their credit report. New errors occur frequently; consider the recent mistake by one student loan servicer that incorrectly reported 4.8 million Americans as being late on their student loans when in fact the government had suspended payments as part of COVID-19 relief.

The data used in the real world are not as pure as those used in model testing. Market incentives alone are not enough to produce perfect accuracy; they can even promote inaccuracy, given the cost of correcting data and the demand for speed and quantity. As one study from the Federal Reserve Bank of St. Louis found, "Credit score has not acted as a predictor of either true risk of default of subprime mortgage loans or of the subprime mortgage crisis." Whatever the cause, regulators, industry, and consumer advocates ought to be aligned against the adoption of AI that moves in this direction.

The top right (quadrant I) represents incorporation of AI that increases accuracy and reduces bias. At first glance, this should be a win-win. Industry allocates credit in a more accurate manner, increasing efficiency. Consumers enjoy increased credit availability on more accurate terms and with less bias than the existing status quo. This optimistic scenario is quite possible, given that a significant source of existing bias in lending stems from the information used. As the Bank Policy Institute pointed out in its discussion draft on the promise of AI: "This increased accuracy will benefit borrowers who currently face obstacles obtaining low-cost bank credit under conventional underwriting approaches."

One prominent example of a win-win is cash-flow underwriting. This new form of underwriting uses an applicant's actual bank balances over some time frame (often one year), as opposed to the current FICO-based model, which relies heavily on whether a person held credit in the past and, if so, whether they were ever delinquent or in default. Preliminary analysis by FinReg Labs shows this underwriting system outperforms traditional FICO on its own and is even more predictive when combined with FICO.
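As a rough sketch of the idea (not FinReg Labs' actual model; the features, weights, and threshold below are all invented), cash-flow underwriting scores an applicant from observed balances and can optionally blend in a FICO score:

```python
import math

def cash_flow_features(monthly_balances):
    """Summarize a year of bank balances: average level and months overdrawn."""
    avg = sum(monthly_balances) / len(monthly_balances)
    months_negative = sum(1 for b in monthly_balances if b < 0)
    return avg, months_negative

def approve(monthly_balances, fico=None, threshold=0.5):
    avg, neg = cash_flow_features(monthly_balances)
    # Toy logistic score; adding FICO when present mirrors the finding
    # that the two signals combined are more predictive than either alone.
    z = 0.002 * avg - 0.9 * neg
    if fico is not None:
        z += 0.01 * (fico - 650)
    p_repay = 1 / (1 + math.exp(-z))
    return p_repay >= threshold

# A thin-credit-file applicant with healthy cash flow is approved
balances = [820, 940, 610, 1200, 880, 760, 990, 1050, 870, 930, 1010, 890]
print(approve(balances))
```

The design point is that an applicant with no credit history at all can still be scored, which is exactly the population conventional FICO underwriting underserves.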

Cash-flow analysis does carry some bias, as income and wealth are correlated with race, gender, and other protected classes. However, because income and wealth are already acceptable factors, the current fair-lending system should have little problem allowing a smarter use of that information. Ironically, this new technology passes the test because it relies on data that are already grandfathered.

That is not the case for other AI advancements. New AI may increase credit access on more affordable terms than what the current system provides and still not be allowable. Just because AI has produced a system that is less discriminatory does not mean it passes fair-lending rules. There is no legal standard that allows illegal discrimination in lending simply because it is less biased than prior discriminatory practices. As a 2016 Treasury Department study concluded: "While data-driven algorithms may expedite credit assessments and reduce costs, they also carry the risk of disparate impact in credit outcomes and the potential for fair lending violations."

For example, consider an AI that is able, with a good degree of accuracy, to detect a decline in a person's health, say through spending patterns (doctor's co-pays), internet searches (cancer treatment), and newly joined Facebook groups (living with cancer). Medical problems are a strong indicator of future financial distress. Do we want a society where, if you get sick, or if a computer algorithm merely thinks you are ill, your terms of credit worsen? That may be a less biased system than we currently have, but not one that policymakers and the public would support. All of a sudden, what looks like a win-win may not be so desirable.

AI that increases accuracy but introduces more bias gets a lot of attention, deservedly so. This scenario, represented in the top left (quadrant II) of this framework, can range from the introduction of data that are clear proxies for protected classes (watching Lifetime or BET on TV) to information or techniques that, at first glance, do not seem biased but actually are. There are strong reasons to believe that AI will naturally find proxies for race, given the large income and wealth gaps between races. As Daniel Schwartz put it in his article on AI and proxy discrimination: "Unintentional proxy discrimination by AIs is virtually inevitable whenever the law seeks to prohibit discrimination on the basis of traits containing predictive information that cannot be captured more directly within the model by non-suspect data."


Proxy discrimination by AI is even more concerning because the machines are likely to uncover proxies that people had not previously considered. Think about the potential to use whether a person owns a Mac or a PC, a factor that is correlated both with race and with whether people pay back loans, even controlling for race.

Duke professor Manju Puri and co-authors built a model using non-standard data and found substantial predictive power for loan repayment in whether a person's email address contained their name. Initially, that may seem like a non-discriminatory variable within a person's control. However, economists Marianne Bertrand and Sendhil Mullainathan have shown that African Americans with names heavily associated with their race face substantial discrimination compared to race-blind identification. Hence, it is quite possible that there is a disparate impact in using what seems like an innocuous variable such as whether your name is part of your email address.
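One way to operationalize this concern, sketched below on invented audit data, is to check how well a supposedly neutral feature predicts protected-class membership on its own before admitting it into a model:

```python
def proxy_accuracy(feature, protected):
    """Best accuracy achievable predicting the protected attribute from a
    single binary feature, trying both orientations of the feature."""
    n = len(feature)
    direct = sum(f == p for f, p in zip(feature, protected)) / n
    return max(direct, 1 - direct)

# Invented audit sample: 1 = applicant's name appears in their email address
name_in_email = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
group         = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]

score = proxy_accuracy(name_in_email, group)
print(score)  # 0.9 -> far above the 0.5 chance level: a proxy candidate
```

A feature that recovers class membership far better than chance deserves the same disparate impact scrutiny as an explicit protected-class variable, whatever its surface innocence.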

The question for policymakers is how much to prioritize accuracy at the cost of bias against protected classes. As a matter of principle, I would argue that because our starting point is a heavily biased system, we should not tolerate the introduction of increased bias. There is a slippery-slope question: what if an AI produced substantial increases in accuracy with the introduction of only slightly more bias? After all, our current system does a surprisingly poor job of allocating even basic credit and tolerates a substantial amount of bias.

Industry is likely to advocate for inclusion of this type of AI, while consumer advocates are likely to oppose it. Current law is inconsistent in its application: certain groups of people are afforded strong anti-discrimination protection for certain financial products, but the protection varies across products. Take gender, for example. It is blatantly illegal under fair-lending laws to use gender, or any proxy for gender, in allocating credit. However, gender is a permitted basis for price differences in auto insurance in most states; in fact, for brand-new drivers without any driving record, gender may be the single biggest factor used in determining price. America lacks a uniform set of rules on what constitutes discrimination and which attributes cannot be discriminated against. This lack of uniformity is compounded by the division of responsibility between federal and state governments and, within government, between the regulatory and judicial systems for detecting and punishing violations.

The final set of trade-offs involves increases in fairness but reductions in accuracy (quadrant IV, in the bottom right). An example is an AI able to use information about a person's genome to determine their risk of cancer. This type of genetic profiling would improve accuracy in pricing certain types of insurance but violates norms of fairness. In this instance, policymakers decided the use of that information is not acceptable and made it illegal. Returning to the role of gender, some states have restricted the use of gender in car insurance. California most recently joined the list of states no longer allowing gender, which means pricing there will be fairer but possibly less accurate.

Industry pressures will tend to fight these kinds of restrictions and press for greater accuracy. Societal norms of fairness may demand trade-offs that diminish accuracy to protect against bias. These trade-offs are best handled by policymakers before the widespread introduction of the information in question, as was the case with genetic data. Restricting the use of such information, however, does not make the problem go away. On the contrary, AI's ability to uncover hidden proxies for that data may exacerbate problems wherever society attempts to restrict data usage on equity grounds. Problems that appear solved by prohibition simply migrate into the algorithmic world, where they reappear.

The underlying takeaway for this quadrant is that social movements to expand protection and reduce discrimination are likely to face greater difficulty as AIs find workarounds. As long as there are substantial differences in observed outcomes, machines will reproduce those differing outcomes using new sets of variables that may contain new information or may simply be statistically effective proxies for protected classes.

The status quo is not something society should uphold as nirvana. Our current financial system suffers not only from centuries of bias but also from systems that are themselves not nearly as predictive as often claimed. The data explosion, coupled with the significant growth in ML and AI, offers a tremendous opportunity to rectify substantial problems in the current system. Existing anti-discrimination frameworks are ill-suited to this opportunity. Holding new technology to a higher standard than the status quo results in an unstated deference to the already-biased current system. However, simply opening the floodgates under a rule of "can you do better than today?" opens a Pandora's box of new problems.


America's fractured regulatory system, with differing roles and responsibilities across financial products and levels of government, only makes difficult problems harder. Lacking uniform rules and coherent frameworks, technological adoption will likely be slower among existing entities, setting up even greater opportunities for new entrants. A broader conversation about how much bias we are willing to tolerate for the sake of improvement over the status quo would benefit all parties. That requires creating more political space for all sides to engage in a difficult and honest conversation. The current political moment is ill-suited to that conversation, but I suspect that AI advancement will not wait until America is ready to confront these problems.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Apple, Facebook, and IBM provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
