Brooke Wagen: Making Space for Connection in Practice | Dell Medical School

But what often goes overlooked is the true toll physicians experience during training. While navigating major life changes like moving across the country and establishing community within a new home and work environment, residents and fellow physicians are learning to maneuver health systems and also often bear the weight of their patients' worst days.

Brooke Wagen, a hospice and palliative medicine fellow, describes this transition as a "loss of self," an issue she aims to bring attention to and alleviate through her Distinction in Care Transformation project, the Lazarus Phenomenon, which provides her peer residents and fellows space to gather and share their stories.

As a member of Dell Med's inaugural class of students, I had a goal to provide excellent medical care to Austin's oldest and most medically underserved residents. Prior to medical school, I was a mom, neighbor, church member, Meals on Wheels driver and volunteer medical interpreter. Connections to my neighborhood and community were, and are, vital.

The journey through medical school, residency and fellowship is long and tumultuous. I am grateful to have stayed in my home and hometown for all my training, but I have seen firsthand the need for more rootedness and connection for trainees due to multiple moves between schools and cities.

The Lazarus Phenomenon is a storytelling evening specifically for residents and fellow physicians. Trainees often lack space and time to process, connect and reflect on what they've seen and felt. The goal is to allow them to share parts of their life and experiences in medical training with their peers across all departments.

I hope we can build connection and community through our shared vulnerability. Storytelling can be intimidating, perhaps, but my hope is that imperfection and authenticity lead. As people tell stories or give storytellers a place to be heard, they build connections one evening at a time.

I was talking with my neighbor across the street many years ago, and he told me his hearing kept getting worse, but his doctors weren't doing anything about it. After some consideration, we decided I would go with him to his next appointment as an advocate and interpreter. I discovered that he never had the opportunity to voice his fears or needs at a typical visit due to a lack of a patient-doctor relationship and language barriers. He told the doctor that his hearing had been getting worse, but nothing was getting done about it. We learned that his audiology referral just to get his hearing tested had been sitting on an empty desk, and there was no longer an audiologist for his clinic.

Getting health care often requires a champion, which is why, as I take care of people now, I think of my neighbor. I try to see and attend to each person's concerns with the same level of personal attention he needed before hearing aids and a cochlear implant.

"Be kind, for everyone you meet is fighting a hard battle."

Ian Maclaren

This quote has resonated with me since I first heard it. The idea that everyone we meet, in life, on the street, at work, in the hospital, is fighting their own battle is central to my ethic in medicine. My faith compels me to be still, to listen and to see the intrinsic value in each human soul I meet. My vocation calls me to sit with them in their suffering and ask what I can do to serve or help, or at least to be present, as each is within my power.

I want my co-workers to see me this way, as a friend and colleague who noticed that they too were in the midst of a battle, and was kind.


Florida is trying to catch up to the retention rate of medical residents in other states – Florida Phoenix

The Florida Senate wants to spend $70 million to increase the number of physicians training to practice medicine in the state, which lags behind other states in retaining its trainees.

Senators in the Health Policy Committee on Tuesday approved legislation, SPB 7016, part of the Live Healthy initiative seeking to bolster the healthcare workforce in Florida.

"This is my 22nd session in the Florida Legislature, and I have never seen a bill that has the dramatic changes and advancements and the ability to really incentivize people to come to Florida," Republican Sen. Gayle Harrell said during the committee meeting on Tuesday.

Among the provisions of the bill is the $70 million recurring allocation for the Slots for Doctors Program.

Those funds would be divided into $100,000 per slot to create 700 additional slots for medical school graduates to complete their training in Florida. Medical school graduates must go through a residency program to become fully licensed physicians. This year, Florida provided funding for 6,176 residents, according to a report from the Florida Agency for Health Care Administration.

The multimillion-dollar investment comes with some requirements. Institutions sponsoring residencies must provide annual reports detailing how they spend state funds and information about whether the positions are filled or unfilled. At the end of a physician's residency, the institutions would have to ask them to fill out an exit survey developed by AHCA.

The answers to the survey also have to be turned in to AHCA annually. Institutions have to submit those reports to maintain eligibility for the funds, according to the bill.

The bill also establishes a Graduate Medical Education Committee composed of medical school deans and representatives of medical boards and associations appointed by the governor, the secretary of Health Care Administration, the state surgeon general, the senate president and the house speaker. The goal of the GME committee is to produce an annual report about the status of the resident workforce.

The nonprofit Community Health of South Florida, Inc. has trained 42 residents over nine years, said Peter Wood, CHI's vice president for planning and government affairs. A little more than half remained in Florida after their residency, and CHI hired six of them.

"We're hopeful that the state may be able to provide this additional funding that would enable us to increase the number of residency slots that we're able to manage," he said in a phone interview with Florida Phoenix.

He continued: "The total [number of residents CHI has] is 35; we can increase that. So, in essence, increasing the number of primary care providers in the pipeline that have been trained to provide health care to underserved communities and be prepared to provide that quality of care to communities in rural areas here in Florida."

CHI funds its residencies primarily through federal dollars from the Health Resources and Services Administration.

Aside from the 700 residency slots, Live Healthy has $40 million set aside for nonprofits like CHI. The bill would establish the Training, Education, and Clinicals in Health (TEACH) Funding Program to fund residencies and clinical rotations at nonprofit health centers in medically underserved areas. Facilities that qualify can apply for reimbursement for the administrative costs and loss of revenue associated with training residents and people studying medicine, dentistry, nursing and behavioral health.

The program would reimburse up to $75,000 per fiscal year for facilities training students and $100,000 for facilities training medical school graduates. AHCA will have greater oversight of this program to evaluate its effectiveness, according to the bill.

TEACH gives preference to students and graduates of Florida schools and people whose permanent residence is in the state.

"We want Florida physicians going into residencies to stay in Florida," Republican Sen. Colleen Burton said. She is the chair of the Health Policy Committee.

People who graduate from medical schools in Florida and also complete their residencies in the state are more likely to stay. The Office of Program Policy Analysis and Government Accountability recommended strengthening the pipeline of physicians who receive graduate medical education in the state and stay to practice medicine by prioritizing Florida medical school graduates.

Between 2008 and 2015, Florida retained 75 percent of its residents who also graduated from medical school in the state, according to an OPPAGA report. However, over the past decade, 35 percent of physicians left Florida after their residency ended, according to the Association of American Medical Colleges.

Across the country, California has kept 76 percent of its physicians after completing training, making it the state with the highest retention rate. But Florida isn't falling too far behind. In fact, its rate is lower only than those of California, Alaska, Idaho and Texas, according to AAMC.

The scope of power the Legislature would have over this is limited as the National Resident Matching Program places medical school graduates into residency programs based on their preferences and the types of candidates specific programs are looking for.


Penn’s medical school dean is being eyed as interim president of the university – The Philadelphia Inquirer

Update: J. Larry Jameson has been named Penn's interim president. Here's what to know about him.

J. Larry Jameson, who has served as executive vice president of the University of Pennsylvania's health system and its medical school dean for more than 12 years, is emerging as a leading candidate to take the role of interim president of the university, according to multiple sources.

Jameson, 69, was referred to as an "ideal candidate" during a board of trustees meeting last week addressing Liz Magill's resignation, a source told The Inquirer. But at that time, the source said, no one had reached out to him yet.

It's not clear where the board is in the process to appoint an interim president.

"The University's Board of Trustees is actively working to appoint an interim president, and that process is well underway, with a formal announcement anticipated in the coming days," the university said in a statement.

Well over half of Penn employees are connected to the health system and medical school, collectively an $11.1 billion enterprise, making it the largest single unit within the university. The Perelman School of Medicine has more than 2,600 full-time faculty members and more than 3,700 students, trainees, residents, and fellows, according to Penn's website.


It's unclear how long Jameson, who earned a base salary of nearly $4.5 million in 2021 and additional compensation of $1.1 million, according to the most recent tax filing available, would serve, or whether or when the university would launch a search for a new permanent president.

Other than to announce that Scott L. Bok's replacement as board chair would be Julie Platt, the board has been mum since Magill's departure Saturday night in the wake of congressional committee testimony on antisemitism that drew intense scrutiny.


Jameson will be taking over the presidency at perhaps the most tumultuous time for leadership in the university's history. Magill's less-than-18-month tenure was the shortest of any Penn president and followed presidents who served for 10 years and 18 years, respectively.

He is already involved in fundraising and managing faculty and has developed a reputation as a strong and thoughtful leader, said several sources close to the administration.

Jameson, a molecular endocrinologist and native of Georgia, came to Penn from Northwestern University, where he had most recently served as dean of the medical school and vice president of medical affairs. He received his medical degree from the University of North Carolina in 1981 and worked at Harvard Medical School earlier in his career.

Jameson is the only dean to sit in on what's known as the discussion group at Penn, which is basically the president's cabinet.

Earlier this year, Penn joined Harvard, Stanford, and Columbia in dropping out of U.S. News & World Report's annual medical school rankings. In a memo to faculty, staff, and students, Jameson said the rankings relied too heavily on test scores and grades while undervaluing qualities that Penn seeks.

"We strive to identify and attract students with a wide array of characteristics that predict promise," Jameson wrote. "The careers of transformative physicians, scientists, and leaders reveal the importance of other personal qualities, including creativity, passion, resilience, and empathy."

Under Jameson's leadership in 2022, Penn's medical school formed a partnership with historically Black colleges and universities to attract more students from racial groups underrepresented in medicine.

He oversaw the medical school through the COVID-19 pandemic. After the success of the COVID vaccines made using mRNA technology pioneered at Penn, the medical school in late 2021 opened the Penn Institute for RNA Innovation, to expand the use of the genetic molecules for treating other conditions.

Penns health system is directly owned by the university, with clinical, educational, and research operations integrated under a management entity called Penn Medicine.

As executive vice president for the health system and dean of the Perelman School of Medicine, Jameson is in charge of the medical school's education and research components, but there is some distance between Jameson and the management of the health system and its six acute-care hospitals.

The University of Pennsylvania Health System has its own CEO, Kevin Mahoney, who reports to Jameson, along with the health system's chief financial officer, Keith Kasper. The combination of Penn's medical school and the health system, marketed as Penn Medicine, had $11.1 billion in revenue in the year ended June 30, 2022, mostly from the health system. That amounted to more than three-quarters of the university's total revenue of $14.4 billion that year.

On Thursday, two days after Magill's testimony, in which she said it was a "context-dependent decision" when asked if calls for genocide of Jewish people would violate the school's code of conduct, Jameson and Mahoney put out a statement on genocide.

"Calls for genocide, echoing horrors of the past, violate our behavioral standards and remind us that we must forcefully condemn, prevent, and respond to hate in all forms," said the Penn Med letter, according to the Daily Pennsylvanian, the student newspaper.

Staff writers Tom Avril and Harold Brubaker contributed to this article.


Tulane University launches new nursing program | Tulane University News – Tulane University

Tulane University is launching a new nursing program through a collaboration between the Tulane School of Medicine (SoM) and the Tulane School of Professional Advancement (SoPA). The program, which will offer a Bachelor of Science in nursing, will help increase the state's healthcare workforce at a time when the need for such healthcare heroes is critical.

"Tulane was founded as a medical school devoted to eradicating yellow fever. We are still laser-focused on treatment and cures for the epidemics of our times, while ensuring we have the workforce available to provide comprehensive and accessible healthcare in New Orleans and throughout Louisiana," Tulane President Michael A. Fitts said. "This program is part of Tulane's commitment to improve our city and region's health and economy by reimagining downtown New Orleans as a national hub for medical education and as a center of bioscience research and innovation."

In addition to caring for patients, registered nurses such as those who earn a Bachelor of Science in nursing degree also play key roles in health promotion, disease prevention, research, health policy planning, patient education, administration, leadership within healthcare facilities and more.

"Our program will prepare students to provide competent, team-based patient care and instill a deep commitment to community and community health outcomes,"

Fitts noted that nurses are more crucial than ever given the transformation American healthcare has undergone in recent years, with patients more frequently receiving treatment across networks and systems and often through a distributed model that shifts care from a clinic or hospital setting to the patient's home.

Tulane has been at the forefront of groundbreaking medical research and care for generations and recently partnered with the locally based health system LCMC Health to enhance the regions role as a destination for medical care, innovation and training. Through this partnership, Tulane Medical Center, Lakeview Hospital and Lakeside Hospital joined LCMC Health.

Tulane's nursing program will combine a rigorous curriculum with hands-on learning in state-of-the-art skills and simulation labs where students will practice technical skills and test their critical thinking and clinical judgment with real-time feedback. Students will complete hundreds of hours of education in clinical settings at Tulane's local hospital and healthcare partners.

"This nursing program is an exciting addition to our academic offerings and reflects our deep commitment that each of our programs be of the highest quality," said Robin Forman, Senior Vice President for Academic Affairs and Provost at Tulane. "This program offers a uniquely Tulane experience that provides exceptional preparation for the professional pursuits of our graduates."

Set to begin in fall 2024, the program will initially be housed in the Tulane University School of Medicine's Murphy Building, 131 S. Robertson St., before moving to its permanent location at the renovated Tulane Medical Center building on Tulane Avenue. The program plans to enroll more than 200 students each year.

The timing for such a program couldn't be better, according to Dr. Lee Hamm, Tulane's Senior Vice President and Dean of the School of Medicine. Tulane's SoM has a proven track record of producing top physicians and other healthcare workers and ranks among the best nationwide.

"Louisiana and the nation are in dire need of nurses, especially with the toll the pandemic took on the profession," Hamm said. "The field of academic medicine is powered by the collaborative efforts of physicians, scientists and nurses. Implementing a Bachelor of Science in nursing program that culminates in a high-quality Tulane degree is an important milestone as we partner with LCMC Health to educate our workforce and care for the Gulf South community."

Suri Duitch, Dean of the Tulane University School of Professional Advancement, believes the program will be popular with students currently enrolled in SoPA as well as new applicants.

"The Bachelor of Science in nursing program is ideal for students who want to find their calling in a field they love while contributing to something bigger than themselves," Duitch said. "The hands-on nature of this program carries instant appeal for the students we attract and for those who will be drawn to this exciting and innovative program."

Brenda Douglas, a nationally recognized, board-certified registered nurse and nurse educator, will lead the program as Dean of Nursing. Douglas is developing an innovative, hybrid curriculum that will enable nursing students to learn alongside medical students, provide flexibility with asynchronous online learning, support the development of technical and clinical competence in a state-of-the-art skills laboratory and simulation center, and deliver care to patients in clinical and other settings.

"Our program will prepare students to provide competent, team-based patient care and instill a deep commitment to community and community health outcomes that will be fostered by experiences in caring for our neighbors as part of their studies," Douglas said.

Douglas said students will get extensive academic support as they progress through the program, prepare to complete the Next Generation National Council Licensure Examination (NCLEX) and transition to practice.

The U.S. Bureau of Labor Statistics projects that the market demand for nurses will grow 9% through 2030, with approximately 194,500 openings each year, on average, over the next decade. In Louisiana, the state Board of Regents estimates there will be a shortfall of approximately 6,000 registered nurses, 40% of the current workforce, by 2030.

The Louisiana Board of Regents has created an ambitious master plan to double the number of nurses in the state during the next seven years. To help meet this goal, Tulanes program will put more nurses on the front lines of care in as little as 16 months.

For more information on registering for the nursing program, visit nursing.tulane.edu. There will be three program start dates per year, in the spring, summer and fall. Prospective students must have at least 60 transferable credits to enter the program.


Is it noble or selfish to never practice medicine after getting a medical degree? – Kevin MD

A Harvard medical school student realized in his third year that he had lost his desire to become a doctor. Nevertheless, the student decided to complete his fourth year and obtain his MD degree. The student is now planning for a career in pharma, or even comedy. Some individuals who read his online essay found the student's decision-making comical in itself. Overall, their comments were evenly divided about the student's virtues and next moves.

Before exploring readers' reactions, we ought to know something about this student's reasons for opting out of the medical profession. The student wrote: "Reflecting on the elements that brought me down, I felt sadness for my patients' health, particularly when it seemed their condition could not be cured or treated effectively; disappointment over the influence of insurance coverage in determining which treatments patients received; frustration at the amount of documentation, which seemed to take precedence over time spent with patients; and discouragement over the overall environment, where it seemed hospital personnel did not feel valued or happy to be there."

Let's not dwell on the merits of the student's reasons but dive right into readers' reactions, whether they shamed or commended him on his decision. I divided the comments into "selfish" and "noble." Here is a sample:

Selfish

Noble

The comments do not provide a consensus on whether it is selfish or noble to never practice medicine after medical school. One commentator, not the only one, was able to see the argument philosophically from both sides, saying, "Let's not shame people into staying where they deeply do not wish to be or condemn them based on good faith decisions made when they didn't fully understand what they were getting into."

I think this reader made many good points, so I decided to quote him entirely: "The practical realities of clinical practice as a physician must be experienced to fully appreciate [them]. Pursuing and, if ultimately admitted, getting through medical school is something of a leap of faith for many. Sometimes it turns out to be a bad fit, a realization that may dawn after committing to a lot of debt. Of course, it rankles some, given that accepting admission indirectly crowds someone else out (of this scarce resource) and doesn't provide the expected societal return on investment of a practicing clinician. On the other hand, do any of us want a physician who chronically doesn't want to be in that role? He may yet apply his education and degree profitably outside of clinical practice."

Many years ago, I conducted a small study showing that over 90 percent of students who matriculated at two U.S. medical schools (Temple University and the University of Pennsylvania) graduated in four years. This percentage is broadly in line with the Association of American Medical Colleges, which found that four-year graduation rates ranged from 81.7 percent to 84.1 percent. Still, after six years, the average graduation rate was 96.0 percent.

However, a survey released by the health science and journal publisher Elsevier in October 2023 found that a quarter of medical students in the U.S. were considering quitting their studies due to the pressures facing today's clinicians, issues not unlike those cited by the Harvard grad.

Reasons for dropping out of medical school can be diverse, but common ones include academic struggles, financial pressures, personal health or family issues, lack of interest, or, in this instance, a desire to pursue a different career path. It is important to note that the majority of dropout causes are non-academic.

After leaving medical school, former students may pursue a range of alternate career paths. Some may choose to continue their education in a related field, such as public health, biomedical sciences, or health care administration. Others may decide to enter the workforce directly, taking jobs in health care, education, or research. Some will pursue careers distant from medicine or unrelated to it.

Perhaps this student will follow in the footsteps of the Monty Python actor Graham Chapman (1941 to 1989), who turned down a career as a doctor to be a writer and comedian. I wish this student well, and I do not begrudge him for (almost) forcing me to go to Mexico for medical school. Well, I never would have had a shot at Harvard anyway.

Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia, PA. He is the author of Every Story Counts: Exploring Contemporary Practice Through Narrative Medicine.


Howard University College of Medicine Announces $12 Million Gift from MacKenzie Scott – The Dig

WASHINGTON - The Howard University College of Medicine has received a $12 million donation from author and philanthropist MacKenzie Scott, part of the $2.2 billion in grants Scott has given this year to 360 organizations nationwide. The unrestricted nature of the gift, which is unusual for donors, allows Howard and the College of Medicine to determine how to make the greatest impact with these new resources.

The College of Medicine will apply the donation towards the establishment of a new innovations center in collaboration with the College of Engineering and Architecture, says College of Medicine Dean Andrea A. Hayes Dixon, M.D. The center will provide opportunities for medical and engineering students to learn about medical technology and subsequently create new devices with the potential of improving patient care.

"The center will allow Howard University students, through the support of MacKenzie Scott, to be leaders in medical technology innovation," says Hayes Dixon. "We intend to capitalize on the diverse knowledge base of our students, knowledge that could change how medicine is practiced throughout the world."

Scott, a Princeton University graduate and former student of Howard alumna Toni Morrison, gifted Howard University $40 million in 2020, the largest donation from a single donor in school history.

"My hope is that this gift will further solidify our College of Medicine as a world-class institution that attracts and retains future leaders in the field of medicine," said President Ben Vinson III, Ph.D. "We are extremely grateful to Ms. Scott for her amazing generosity and know that this gift will only strengthen us, and ultimately, the future of healthcare as our students learn to provide care that improves outcomes for all patients."

###

About Howard University

Founded in 1867, Howard University is a private, research university that is comprised of 14 schools and colleges. Students pursue more than 140 programs of study leading to undergraduate, graduate and professional degrees. The University operates with a commitment to Excellence in Truth and Service and has produced two Schwarzman Scholars, four Marshall Scholars, four Rhodes Scholars, 12 Truman Scholars, 25 Pickering Fellows and more than 165 Fulbright recipients. Howard also produces more on-campus African American Ph.D. recipients than any other university in the United States. For more information on Howard University, visit www.howard.edu.

Media Contact: Sholnn Freeman; sholnn.freeman@howard.edu


Smoking causes brain shrinkage – Washington University School of Medicine in St. Louis


Findings help explain how smoking is linked to Alzheimer's, dementia

Smoking shrinks the brain and effectively causes premature brain aging, according to a study by researchers at Washington University School of Medicine in St. Louis. Quitting smoking prevents further loss of brain tissue but doesn't restore the brain to its original size.

Smoking shrinks the brain, according to a study by researchers at Washington University School of Medicine in St. Louis. The good news is that quitting smoking prevents further loss of brain tissue, but still, stopping smoking doesn't restore the brain to its original size. Since people's brains naturally lose volume with age, smoking effectively causes the brain to age prematurely, the researchers said.

The findings, published in Biological Psychiatry: Global Open Science, help explain why smokers are at high risk of age-related cognitive decline and Alzheimers disease.

"Up until recently, scientists have overlooked the effects of smoking on the brain, in part because we were focused on all the terrible effects of smoking on the lungs and the heart," said senior author Laura J. Bierut, MD, the Alumni Endowed Professor of Psychiatry. "But as we've started looking at the brain more closely, it's become apparent that smoking is also really bad for your brain."

Scientists have long known that smoking and smaller brain volume are linked, but they've never been sure which is the instigator. And there is a third factor to consider: genetics. Both brain size and smoking behavior are heritable. About half of a person's risk of smoking can be attributed to his or her genes.

To disentangle the relationship between genes, brains and behavior, Bierut and first author Yoonhoo Chang, a graduate student, analyzed data drawn from the UK Biobank, a publicly available biomedical database that contains genetic, health and behavioral information on half a million people, mostly of European descent. A subset of over 40,000 UK Biobank participants underwent brain imaging, which can be used to determine brain volume. In total, the team analyzed de-identified data on brain volume, smoking history and genetic risk for smoking for 32,094 people.

Each pair of factors proved to be linked: history of smoking and brain volume; genetic risk for smoking and history of smoking; and genetic risk for smoking and brain volume. Further, the association between smoking and brain volume depended on dose: The more packs a person smoked per day, the smaller his or her brain volume.

When all three factors were considered together, the association between genetic risk for smoking and brain volume disappeared, while the link between each of those and smoking behaviors remained. Using a statistical approach known as mediation analysis, the researchers determined the sequence of events: genetic predisposition leads to smoking, which leads to decreased brain volume.
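The mediation logic described above can be illustrated with a small, hypothetical simulation (this is not the study's actual data or code, and the effect sizes are invented for illustration): if genetic risk affects brain volume only through smoking, then adjusting for smoking should make the gene-brain association essentially disappear, which is the pattern the researchers reported.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical data following the causal chain in the article:
# genetic risk (G) -> smoking (S) -> brain volume (B), no direct G -> B path.
G = rng.normal(size=n)              # polygenic risk score for smoking
S = 0.5 * G + rng.normal(size=n)    # smoking behavior, partly driven by G
B = -0.4 * S + rng.normal(size=n)   # brain volume, reduced only via smoking

def ols_coef(y, regressors):
    """Least-squares coefficients for y ~ regressors (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

total_effect = ols_coef(B, [G])[0]       # B ~ G alone: clearly nonzero
direct_effect, _ = ols_coef(B, [G, S])   # B ~ G + S: G's effect shrinks to ~0

print(f"effect of G on B, ignoring smoking:    {total_effect:+.3f}")
print(f"effect of G on B, adjusted for smoking: {direct_effect:+.3f}")
```

With smoking held fixed, the genetic coefficient collapses toward zero, consistent with smoking fully mediating the gene-brain association in this toy setup.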

"It sounds bad, and it is bad," Bierut said. "A reduction in brain volume is consistent with increased aging. This is important as our population gets older, because aging and smoking are both risk factors for dementia."

And unfortunately, the shrinkage seems to be irreversible. By analyzing data on people who had quit smoking years before, the researchers found that their brains remained permanently smaller than those of people who had never smoked.

"You can't undo the damage that has already been done, but you can avoid causing further damage," Chang said. "Smoking is a modifiable risk factor. There's one thing you can change to stop aging your brain and putting yourself at increased risk of dementia, and that's to quit smoking."

Chang Y, Thornton V, Chaloemtoem A, Anokhin AP, Bijsterbosch J, Bogdan R, Hancock DB, Johnson EO, Bierut LJ. Investigating the relationship between smoking behavior and global brain volume. Biological Psychiatry: Global Open Science. Dec. 11, 2023. DOI: 10.1016/j.bpsgos.2023.09.006

This work was supported by the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health (NIH), grant numbers U10AA008401, R01AA027049 and R56AG058726; and the National Institute on Drug Abuse of the NIH, grant numbers K12DA041449 and R01DA044014. This content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Alcohol Abuse and Alcoholism, the National Institute on Drug Abuse or the NIH.

About Washington University School of Medicine

WashU Medicine is a global leader in academic medicine, including biomedical research, patient care and educational programs with 2,800 faculty. Its National Institutes of Health (NIH) research funding portfolio is the third largest among U.S. medical schools, has grown 52% in the last six years, and, together with institutional investment, WashU Medicine commits well over $1 billion annually to basic and clinical research innovation and training. Its faculty practice is consistently within the top five in the country, with more than 1,800 faculty physicians practicing at 65 locations who also serve as the medical staffs of Barnes-Jewish and St. Louis Children's hospitals of BJC HealthCare. WashU Medicine has a storied history in MD/PhD training, recently dedicated $100 million to scholarships and curriculum renewal for its medical students, and is home to top-notch training programs in every medical subspecialty as well as physical therapy, occupational therapy, and audiology and communications sciences.


LMU's Barbee receives minority scholarship – Claiborne Progress

Published 2:53 pm Friday, December 15, 2023

NEWS RELEASE

Recognizing the achievements and passion of six inspiring student doctors, the American Association of Colleges of Osteopathic Medicine announced the recipients of the 2023 Sherry R. Arnstein Underrepresented Minority Scholarship this week. Lincoln Memorial University-DeBusk College of Osteopathic Medicine third-year medical student Cheyennae Barbee was among the honorees.

The scholarship was endowed by the Arnstein family to honor former AACOM Executive Director Sherry R. Arnstein's legacy and to help current and new osteopathic students from racial and ethnic minority backgrounds fund their education. In addition to Barbee, other recipients included Carley Andrew of Sam Houston State University College of Osteopathic Medicine, Jordan Howard of Philadelphia College of Osteopathic Medicine South Georgia, Adrian Mercado of Burrell College of Osteopathic Medicine, Alejandro Serru-Rivera of Idaho College of Osteopathic Medicine and Neriah Sosa of the University of the Incarnate Word School of Osteopathic Medicine.

"We are very proud that these students represent the next generation of osteopathic physicians and will be the future doctors advancing our nation's health care system," said AACOM President and CEO Robert A. Cain, DO. "Our country is facing a crisis and is in desperate need of highly trained and skilled physicians, particularly doctors of color and those dedicated to practicing in underserved and rural areas. These student doctors are committed to osteopathic principles, serving all communities, particularly those most in need. There is no better way to honor the legacy of Sherry Arnstein."

Barbee, a member of the LMU-DCOM Class of 2025, is currently in clinical rotations. She was born in rural Arkansas and lived there with her grandparents on and off. She has lived in 13 states, including 16 years in Culpeper, Virginia, another rural town. Barbee earned a Bachelor of Science in biology from Virginia Commonwealth University with minors in chemistry and Spanish. She went on to earn a Master of Science in biomedical sciences and research from Kansas City University in 2021.

Inspired by examples of family members working in health care, Barbee decided to pursue medicine. "I'm naturally curious and love learning, so becoming a doctor satisfies my lifelong thirst for knowledge. I also want to inspire other Black people to take the road less traveled instead of the well-beaten path," she said.

When it came time to choose a medical school, Barbee only considered osteopathic schools because she liked the non-competitive atmosphere she experienced while attending Kansas City University for her graduate studies. She was drawn to LMU-DCOM in Harrogate due to its rural focus and location. "I grew up in rural Virginia near the mountains and understand how dire the need for quality and accessible health care is for this specific population," Barbee said. "LMU-DCOM's mission really resonated with me."

Recognizing the need not only for diverse physicians but also for scientists and researchers representing underrepresented populations, Barbee has pursued opportunities to conduct research throughout her academic career. Last summer, she participated in Duke University's Office of Physician-Scientist Development Preparing Research scholars in bioMEdical sciences program, where she worked on a research project entitled "ITP antibodies mediate complement activation and platelet desialylation." Barbee worked under the direction of Duke University Medical Center's Dr. Gowthami Arepally, a hematologist and physician-scientist who has become a mentor for Barbee.

"Dr. Arepally has inspired me, and I aim to become a physician-scientist with a focus in classical hematology," Barbee said. "I plan to work in academic medicine and spend 40% of my time in clinic (hopefully rural), 40% in the lab and 20% teaching."

Barbee will return to Duke next summer to continue research under Arepally.

The Sherry R. Arnstein Underrepresented Minority Student Scholarship was established in honor of Arnstein's lifelong dedication to public service, social equity and justice. After the initial endowment, AACOM continued funding the program, which has grown steadily since its inaugural grants were awarded. Since 2012, AACOM has awarded more than $250,000 to 66 recipients.

"I would like to thank AACOM for awarding me the 2023 Arnstein Scholarship. This will help alleviate a lot of financial stress going into my last year of medical school," Barbee said. "I also want to dedicate this award to poor Black kids with dreams bigger than the town they live in. You can, you must, and you will succeed because our community needs us!"


Unveiling the Battles Within: Insights from Harvard Medical School Study – Medriva

Unveiling the Battles Within

The human body is an intricate network of cells, each playing a vital role in our overall health. However, these cells are also the battleground for a silent war that rages on every day: the battle between host and pathogen. A recent study by researchers at Harvard Medical School provides a detailed look into this molecular warfare, specifically in the context of the Herpes Simplex Virus (HSV) infection.

The research unveils the precise strategies utilized by both the host and the pathogen as they vie for cellular dominance. The findings offer significant insights into the mechanisms at play in preventing outbreaks of symptoms. Moreover, they could potentially lead to the development of treatments for HSV and other herpesviruses and nuclear DNA viruses.

A key player in the host's defense strategy is a group of signaling proteins known as interferons. These proteins are essentially the body's alarm system, alerting other protective molecules and blocking the virus from establishing an infection. Interferons, therefore, play a pivotal role in countering viral invasions, particularly within the cell nucleus.

The study also identified a host protein called IFI16, which is summoned by the interferon to help block the virus from reproducing. IFI16 employs several strategies to fend off the virus, one of which involves building and maintaining a protective shell of molecules around the viral DNA genome. In doing so, it prevents the activation of the viral DNA, thus inhibiting its reproduction.

Another defense mechanism employed by IFI16 is neutralizing the virus-produced molecules VP16 and ICP0. The research shows that interferon signals are crucial in recruiting higher levels of IFI16, tipping the balance in favor of the immune system in this ongoing arms race.

The insights gained from this study have far-reaching implications, potentially paving the way for targeted treatments for HSV and other DNA viruses. This includes well-known troublemakers like the Epstein-Barr virus, which causes mononucleosis, as well as human papillomavirus, hepatitis B, and smallpox.

Understanding how the immune system fights to keep viruses at bay is crucial in our pursuit of developing effective treatments. The battle between the host immune system and herpes simplex virus at the cellular level has long intrigued scientists. With this recent research from Harvard Medical School, we are beginning to unravel the mysteries of this molecular warfare, bringing us one step closer to winning the battle against HSV and other similar viruses.


Chinese scientists develop powerful hydrogen therapy that could reverse ageing – South China Morning Post

Using nanotechnology, the team has developed a scaffold implant that delivers hydrogen 40,000 times more efficiently than other methods such as drinking hydrogen-rich water or inhaling hydrogen gas.

According to the paper, the implant can deliver a slow and sustained release of hydrogen for up to a week, compared to the 30-minute limit on hydrogen-rich water. The study found the prolonged treatment helped repair bone defects in older mice.


Hydrogen acts as an anti-inflammatory agent with the ability to scavenge the toxic radicals associated with ageing.

It has been found to have a universal anti-senescence impact on various cells and tissues, meaning that it helps them continue to replicate and grow.

Corresponding author He Qianjun, from Shanghai Jiao Tong University, told the South China Morning Post the scaffold was developed to repair bone defects in the elderly, but could one day be used for other age-related conditions and diseases, including Alzheimer's.

"We developed [the method] mainly based on our discovery of the broad-spectrum anti-ageing properties of hydrogen," He said in an email to the Post.


The scaffold had "a significant effect in inducing bone growth" compared to a blank scaffold that does not produce hydrogen, He said.

Senescence, the gradual deterioration of bodily function as we age, is one of the major causes of age-related conditions and diseases. In recent years, links have also been identified between cellular senescence and these conditions.

At a cellular level, senescence can be responsible for DNA damage and the loss of cell cycle functions like division and growth.

Cells can also secrete materials that cause inflammation, creating a senescence micro-environment that causes tissues and bones to decline in their ability to self-repair, according to the paper.

This persistent inflammation and loss of regenerative ability is a main obstacle to effective tissue repair for elderly people, the researchers said.

Existing anti-senescence treatments are unable to universally regulate the entire micro-environment, the paper said.

But the researchers found that hydrogen is able to alter the senescence micro-environment from pro-inflammation to anti-inflammation, supporting bone defect regeneration.

According to He, the hydrogen is able to remodel the senescence micro-environment during the early stage of inflammation and have a continuing effect on later bone repair.


The implantable scaffold is biosafe, using products like metasilicate and calcium ions as well as hydrogen gas, he said.

The researchers created the scaffold with calcium disilicide nanoparticles, hydrolysed to store hydrogen, sprayed onto porous bioactive glass and wrapped in a biodegradable polymer to stop it degrading and releasing the hydrogen too quickly.

The device was tested on 24-month-old mice (roughly equivalent to 70-year-old humans) that had femoral bone defects.

The scaffold was able to release hydrogen for seven to nine days, a duration not reported for any other method, the paper said.


The team's next challenge is to create a scaffold with an even longer period of release, a development that could be even better for repair, according to He.

While more research is needed, further development of high-performance materials to deliver hydrogen is crucial.

"We believe that continuous hydrogen supply will be a universal anti-ageing technology that can treat various ageing-related diseases, including preventing and treating diseases like Alzheimer's," He said.


AI, poverty, hunters and hellos among our top global topics of the year : Goats and Soda – NPR

Images from some of our most popular global stories of 2023 (left to right): A woman from Brazil's Awa people holds her bow and arrow after a hunt; an artificial intelligence program made this fake photo to fulfill a request for "doctors help children in Africa" (AI added the giraffe); researchers are learning that a stranger's hello can do more than just brighten your day. Scott Wallace/Getty Images; Midjourney Bot Version 5.1, annotation by NPR; David Rowland/AP


We did a lot of coverage of viruses this year (see this post) but other stories went viral as well.

The posts with the most pageviews tackled a diverse array of topics. New research upends hunter/gatherer gender stereotypes. Preliminary results from a study in Kenya on how to help people who are poor show the power of handing over cash, with a lump sum seeming more effective than a monthly payout. And psychologists are finding that when a stranger gives a greeting, it's not just an empty gesture.

Here are our most popular stories (not about viruses) from 2023.

It's one of the biggest experiments in fighting global poverty. Now the results are in

The study focuses on a universal basic income and spans 12 years and thousands of people in Kenya. How did the money change lives? What's better: monthly payouts or a lump sum? Published December 7, 2023.

Men are hunters, women are gatherers. That was the assumption. A new study upends it

For decades, scientists have believed that early humans had a division of labor: Men generally did the hunting and women did the gathering. And this view hasn't been limited to academics. Now a new study suggests the vision of early men as the exclusive hunters is simply wrong and that evidence that early women were also hunting has been there all along. July 1, 2023.

It's one of the world's toughest anti-smoking laws. The Māori see a major flaw

New Zealand has declared war on tobacco with a remarkable new law. The Indigenous Māori population, with the country's highest smoking rate, has a lot to gain. But they have a bone of contention. October 1, 2023. (Editor's note: New Zealand's new conservative government has vowed to repeal the anti-smoking law; we covered that development as well.)

Why a stranger's hello can do more than just brighten your day

Just saying "hello" to a passerby can be a boon for both of you. As researchers explore the impact of interactions with strangers and casual acquaintances, they're shedding light on how seemingly fleeting conversations affect your happiness and well-being. August 23, 2023.

AI was asked to create images of Black African docs treating white kids. How'd it go?

Researchers were curious if artificial intelligence could fulfill the order. Or would built-in biases short-circuit the request? Let's see what an image generator came up with. October 6, 2023.

MacKenzie Scott is shaking up philanthropy's traditions. Is that a good thing?

On December 14, 2022, billionaire philanthropist and novelist MacKenzie Scott announced that her donations since 2019 have totaled more than $14 billion and helped fund around 1,600 nonprofits. But as much as the scale, it is the style of giving that is causing a stir; it's targeted at a wide spectrum of causes, without a formal application process and, it appears, no strings attached. January 10, 2023.

This is not a joke: Chinese people are eating and poking fun at #whitepeoplefood

The playful term is trending on social media: Urban workers are embracing (even while joking about) easy-to-fix, healthy Western-style lunches; think sandwiches, veggies ... a lonely baked potato. July 10, 2023.


Artificial Intelligence in Natural Hazard Modeling: Severe Storms, Hurricanes, Floods, and Wildfires – Government Accountability Office

What GAO Found

GAO found that machine learning, a type of artificial intelligence (AI) that uses algorithms to identify patterns in information, is being applied to forecasting models for natural hazards such as severe storms, hurricanes, floods, and wildfires, which can lead to natural disasters. A few machine learning models are used operationally, meaning in routine forecasting, such as one that may improve the warning time for severe storms. Some uses of machine learning are considered close to operational, while others require years of development and testing.

GAO identified potential benefits of applying machine learning to this field, including:

Forecasting natural disasters using machine learning

GAO also identified challenges to the use of machine learning. For example:

GAO identified five policy options that could help address these challenges. These options are intended to inform policymakers, including Congress, federal and state agencies, academic and research institutions, and industry of potential policy implementations. The status quo option illustrates a scenario in which government policymakers take no additional actions beyond current ongoing efforts.

Policy Options to Help Address Challenges to the Use of Machine Learning in Natural Hazard Modeling

Government policymakers could expand use of existing observational data and infrastructure to close gaps, expand access to certain data, and (in conjunction with other policymakers) establish guidelines for making data AI-ready.

Government policymakers could update education requirements to include machine learning-related coursework and expand learning and support centers, while academic policymakers could adjust physical science curricula to include more machine learning coursework.

Government policymakers could address pay scale limitations for positions that include machine learning expertise and work with private sector policymakers to expand the use of public-private partnerships (PPP).

Policymakers could establish efforts to better understand and mitigate various forms of bias, support inclusion of diverse stakeholders for machine learning models, and develop guidelines or best practices for reporting methodological choices.

Government policymakers could maintain existing policy efforts and organizational structures, along with existing strategic plans and agency commitments.

Source: GAO. | GAO-24-106213

Natural disasters cause on average hundreds of deaths and billions of dollars in damage in the U.S. each year. Forecasting natural disasters relies on computer modeling and is important for preparedness and response, which can in turn save lives and protect property. AI is a powerful tool that can automate processes, rapidly analyze massive data sets, enable modelers to gain new insights, and boost efficiency.

This report on the use of machine learning in natural hazard modeling discusses (1) the emerging and current use of machine learning for modeling severe storms, hurricanes, floods, and wildfires, and the potential benefits of this use; (2) challenges surrounding the use of machine learning; and (3) policy options to address challenges or enhance benefits of the use of machine learning.

GAO reviewed the use of machine learning to model severe storms, hurricanes, floods, and wildfires across development and operational stages; interviewed a range of stakeholder groups, including government, industry, academia, and professional organizations; convened a meeting of experts in conjunction with the National Academies; and reviewed key reports and scientific literature. GAO is identifying policy options in this report.

For more information, contact Brian Bothwell at (202) 512-6888 or bothwellb@gao.gov.


Artificial Intelligence: Actions Needed to Improve DOD’s Workforce Management – Government Accountability Office

Fast Facts

The Department of Defense has invested billions of dollars to integrate artificial intelligence into its operations. This includes analyzing intelligence, surveillance, and reconnaissance data, and operating deadly autonomous weapon systems.

We found, however, that DOD can't fully identify who is part of its AI workforce or which positions require personnel with AI skills. As a result, DOD can't effectively assess the state of its AI workforce or forecast future AI workforce needs.

We made 3 recommendations, including that DOD establish a timeline for completing the steps needed to define and identify its AI workforce.

The Department of Defense (DOD) typically establishes standard definitions of its workforces to make decisions about which personnel are to be included in that workforce, and identifies its workforces by coding them in its data systems. DOD has taken steps to begin to identify its artificial intelligence (AI) workforce, but has not assigned responsibility and does not have a timeline for completing additional steps to fully define and identify this workforce. DOD developed AI work roles, the specialized sets of tasks and functions requiring specific knowledge, skills, and abilities. DOD also identified some military and civilian occupations, such as computer scientists, that conduct AI work. However, DOD has not assigned responsibility to the organizations necessary to complete the additional steps required to define and identify its AI workforce, such as coding the work roles in various workforce data systems, developing a qualification program, and updating workforce guidance. DOD also does not have a timeline for completing these additional steps. Assigning responsibility and establishing a timeline for completion of the additional steps would enable DOD to more effectively assess the state of its AI workforce and be better prepared to forecast future workforce requirements (see figure).

Questions DOD Cannot Answer Until It Fully Defines and Identifies Its AI Workforce

DOD's plans and strategies address some AI workforce issues, but are not fully consistent with each other. Federal regulation and guidance state that an agency's Human Capital Operating Plan should support the execution of its Strategic Plan. However, DOD's Human Capital Operating Plan does not consistently address the human capital implementation actions for AI workforce issues described in DOD's Strategic Plan. DOD also uses inconsistent terms when addressing AI workforce issues, which could hinder a shared understanding within DOD. The military services are also developing component-level human capital plans that encompass AI and will cascade from the higher-level plans. Updating DOD's Human Capital Operating Plan to be consistent with other strategic documents would better guide DOD components' planning efforts and support actions necessary for achieving the department's strategic goals and objectives related to its AI workforce.

DOD has invested billions of dollars to integrate AI into its warfighting operations. This includes analyzing intelligence, surveillance, and reconnaissance data, and operating lethal autonomous weapon systems. DOD identified cultivating a workforce with AI expertise as a strategic focus area in 2018. However, in 2021 the National Security Commission on Artificial Intelligence concluded that DOD's AI talent deficit is one of the greatest impediments to the U.S. being AI-ready by the Commission's target date of 2025.

House Report 117-118, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2022, includes a provision for GAO to review DOD's AI workforce. This report evaluates the extent to which DOD has (1) defined and identified its AI workforce and (2) established plans and strategies to address AI workforce issues, among other objectives. GAO assessed DOD strategies and plans, reviewed laws and guidance that outline requirements for managing an AI workforce, and interviewed officials.


Why AI struggles to predict the future : Short Wave – NPR

Muharrem Huner/Getty Images


Artificial intelligence is increasingly being used to predict the future. Banks use it to predict whether customers will pay back a loan, hospitals use it to predict which patients are at greatest risk of disease and auto insurance companies use it to determine insurance rates by predicting how likely a customer is to get in an accident.

"Algorithms have been claimed to be these silver bullets, which can solve a lot of societal problems," says Sayash Kapoor, a researcher and PhD candidate at Princeton University's Center for Information Technology Policy. "And so it might not even seem like it's possible that algorithms can go so horribly awry when they're deployed in the real world."

But they do.

Issues like data leakage and sampling bias can cause AI to give faulty predictions, to sometimes disastrous effects.
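Data leakage of this kind is easy to reproduce. In the hypothetical sketch below (random data and an invented two-step pipeline, not any system Kapoor studied), selecting features on the full dataset, test rows included, makes a simple classifier look predictive on pure noise, while the honest pipeline stays near chance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 5000, 10            # samples, noise features, features kept

X = rng.normal(size=(n, p))        # pure noise: there is nothing to learn
y = rng.integers(0, 2, size=n)     # random labels, so honest accuracy ~ 50%

def top_k_features(X, y, k):
    """Indices of the k features most correlated with the labels."""
    Xc, yc = X - X.mean(0), y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(corr)[-k:]

def centroid_accuracy(Xtr, ytr, Xte, yte):
    """Accuracy of a nearest-class-centroid classifier."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return float(np.mean(pred == yte))

train, test = np.arange(n) < n // 2, np.arange(n) >= n // 2

# Leaky pipeline: features are chosen using ALL rows, test set included.
leak = top_k_features(X, y, k)
leaky_acc = centroid_accuracy(X[train][:, leak], y[train], X[test][:, leak], y[test])

# Honest pipeline: features are chosen from the training rows only.
fair = top_k_features(X[train], y[train], k)
fair_acc = centroid_accuracy(X[train][:, fair], y[train], X[test][:, fair], y[test])

print(f"leaky accuracy:  {leaky_acc:.2f}")   # looks predictive, but it is an artifact
print(f"honest accuracy: {fair_acc:.2f}")    # near chance, as it should be
```

The leaky model has quietly peeked at its own test set during feature selection, which is one of the failure modes behind the overstated results Kapoor found in published papers.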

Kapoor points to high stakes examples: One algorithm falsely accused tens of thousands of Dutch parents of fraud; another purportedly predicted which hospital patients were at high risk of sepsis, but was prone to raising false alarms and missing cases.

After digging through tens of thousands of lines of machine learning code in journal articles, he's found examples abound in scientific research as well.

"We've seen this happen across fields in hundreds of papers," he says. "Often, machine learning is enough to publish a paper, but that paper does not often translate to better real world advances in scientific fields."

Kapoor is co-writing a blog and book project called AI Snake Oil.

Want to hear more of the latest research on AI? Email us at shortwave@npr.org and we might answer your question on a future episode!

Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.

This episode was produced by Berly McCoy and edited by Rebecca Ramirez. Brit Hanson checked the facts. Maggie Luthar was the audio engineer.


How the EU AI Act regulates artificial intelligence: What it means for cybersecurity – CSO Online

According to van der Veer, organizations that fall into the categories above need to do a cybersecurity risk assessment. They must then adhere to the standards set by either the AI Act or the Cyber Resilience Act, the latter being more focused on products in general. That either-or situation could backfire. "People will, of course, choose the act with less requirements, and I think that's weird," he says. "I think it's problematic."

When it comes to high-risk systems, the document stresses the need for robust cybersecurity measures. It advocates for the implementation of sophisticated security features to safeguard against potential attacks.

"Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behavior, performance or compromise their security properties by malicious third parties exploiting the system's vulnerabilities," the document reads. "Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g., data poisoning) or trained models (e.g., adversarial attacks), or exploit vulnerabilities in the AI system's digital assets or the underlying ICT infrastructure. In this context, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure."

The AI Act has a few other paragraphs that zoom in on cybersecurity, the most important ones being those included in Article 15. This article states that high-risk AI systems must adhere to the "security by design and by default" principle, and they should perform consistently throughout their lifecycle. The document also adds that "compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application."

The same article talks about the measures that could be taken to protect against attacks. It says that "the technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve, and control for attacks trying to manipulate the training dataset (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws, which could lead to harmful decision-making."
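Of the attack classes the Act names, data poisoning is the simplest to illustrate. The hypothetical sketch below (synthetic data and a deliberately simple nearest-centroid model, invented for illustration) shows how randomly flipping a large share of training labels degrades test accuracy:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical two-class data: the label is fully determined by feature 0.
X = rng.normal(size=(n, 5))
y = (X[:, 0] > 0).astype(int)

Xtr, ytr, Xte, yte = X[: n // 2], y[: n // 2], X[n // 2 :], y[n // 2 :]

def centroid_accuracy(Xtr, ytr, Xte, yte):
    """Accuracy of a nearest-class-centroid classifier."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return float(np.mean(pred == yte))

clean_acc = centroid_accuracy(Xtr, ytr, Xte, yte)

# Data poisoning: the attacker flips 45% of the training labels,
# dragging the class centroids toward each other.
poisoned = ytr.copy()
flip = rng.random(len(poisoned)) < 0.45
poisoned[flip] = 1 - poisoned[flip]
poisoned_acc = centroid_accuracy(Xtr, poisoned, Xte, yte)

print(f"accuracy, clean training data:    {clean_acc:.2f}")
print(f"accuracy, poisoned training data: {poisoned_acc:.2f}")
```

The measures Article 15 calls for (detecting and controlling for manipulated training data) are aimed at exactly this failure mode; real poisoning attacks on production systems are subtler, but the mechanism is the same.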

"What the AI Act is saying is that if you're building a high-risk system of any kind, you need to take into account the cybersecurity implications, some of which might have to be dealt with as part of our AI system design," says Dr. Shrishak. "Others could actually be tackled more from a holistic system point of view."

According to Dr. Shrishak, the AI Act does not create new obligations for organizations that are already taking security seriously and are compliant.

Organizations need to be aware of the risk category they fall into and the tools they use. They must have a thorough knowledge of the applications they work with and the AI tools they develop in-house. "A lot of times, leadership or the legal side of the house doesn't even know what the developers are building," Thacker says. "I think for small and medium enterprises, it's going to be pretty tough."

Thacker advises startups that create products in the high-risk category to recruit experts to manage regulatory compliance as soon as possible. Having the right people on board could prevent situations in which an organization believes regulations apply to it when they don't, or vice versa.

"If a company is new to the AI field and it has no experience with security, it might have the false impression that just checking for things like data poisoning or adversarial examples might satisfy all the security requirements, which is false. That's probably one thing where perhaps somewhere the legal text could have done a bit better," says Dr. Shrishak. "It should have made it more clear that these are just basic requirements and that companies should think about compliance in a much broader way."

The AI Act can be a step in the right direction, but having rules for AI is one thing; properly enforcing them is another. "If a regulator cannot enforce them, then as a company, I don't really need to follow anything - it's just a piece of paper," says Dr. Shrishak.

In the EU, the situation is complex. A research paper published in 2021 by members of the Robotics and AI Law Society suggested that the enforcement mechanisms considered for the AI Act might not be sufficient. "The experience with the GDPR shows that overreliance on enforcement by national authorities leads to very different levels of protection across the EU due to different resources of authorities, but also due to different views as to when and how (often) to take actions," the paper reads.

Thacker also believes that enforcement will probably lag far behind, for multiple reasons. First, there could be miscommunication between different governmental bodies. Second, there might not be enough people who understand both AI and legislation. Despite these challenges, proactive efforts and cross-disciplinary education could bridge these gaps, not just in Europe but in other places that aim to set rules for AI.

Striking a balance between regulating AI and promoting innovation is a delicate task. In the EU, there have been intense conversations on how far to push these rules. French President Emmanuel Macron, for instance, argued that European tech companies might be at a disadvantage in comparison to their competitors in the US or China.

Traditionally, the EU has regulated technology proactively, while the US has encouraged creativity, on the theory that rules can be set a bit later. "I think there are arguments on both sides in terms of what one's right or wrong," says Derek Holt, CEO of Digital.ai. "We need to foster innovation, but to do it in a way that is secure and safe."

In the years ahead, governments will tend to favor one approach or the other, learn from each other, make mistakes, fix them, and correct course. "Not regulating AI is not an option," says Dr. Shrishak. He argues that doing so would harm both citizens and the tech world.

The AI Act, along with initiatives like US President Biden's executive order on artificial intelligence, is igniting a crucial debate for our generation. Regulating AI is not only about shaping a technology; it is about making sure this technology aligns with the values that underpin our society.

Link:

How the EU AI Act regulates artificial intelligence: What it means for cybersecurity - CSO Online

This year in privacy: Wins and losses around the world | Context – Context

What's the context?

New laws around the world boosted privacy protections, but enforcement is key, and concerns around AI's impact are growing

This was something of a watershed year for privacy, with key legislation introduced from California to China, and heated debates around what the rapid advance of generative artificial intelligence means for individual privacy rights.

While world leaders agreed at the inaugural AI Safety Summit in Britain to identify and mitigate risks, including to consumer privacy, data breaches exposing personal data were reported at the UK Electoral Commission, genetics company 23andMe, Indian hospitals and elsewhere.

"2023 was a consistently mixed bag built on incredibly positive foundations: there are oversight bodies and policy-makers doing their jobs to hold bad actors to account at levels we have never seen before," said Gus Hosein, executive director at advocacy group Privacy International.

"Looking forward, governments can either act to create safeguards, or they can see the digital world start burning around them with rampant state-sponsored hacking, unaccountable automated decision making (and) deepening powers for Big Tech," he told Context.

"One huge question is this: where do Large Learning Models get their data from tomorrow? I'm worried it will be about getting it from people in ways beyond our control as consumers and citizens."

These are the year's most consequential privacy milestones, and what they mean for digital rights:

The sweeping Digital Services Act went into effect on Aug. 25, imposing new user-privacy rules on the largest online platforms, including banning or limiting some user-targeting practices and imposing stiff penalties for violations.

The EU's success in implementing this and other tech laws, such as the Digital Markets Act, could influence similar rules elsewhere around the world, much as the General Data Protection Regulation (GDPR) did, tech experts say.

But enforcement is a challenge, with any infringement procedure against a company dependent on external audits that independent auditing organisations must conduct at least once a year. These audits aren't due until August 2024.

The UK parliament in September passed the Online Safety Bill, which aims to make the UK "the safest place" in the world to be online.

But digital rights groups say the bill could undermine the privacy of users everywhere, as it forces companies to build technology that can scan all users for child abuse content - including messages that are end-to-end encrypted.

Moreover, the bill's age-verification system meant to protect kids will "invariably lead to adults losing their rights to private speech, and anonymous speech, which is sometimes necessary," noted the Electronic Frontier Foundation.

India passed a long-delayed data protection law in August, which digital rights experts quickly denounced as damaging privacy and hurting rather than protecting fundamental rights.

The law "grants unchecked powers to the government, including on censorship and surveillance, while jeopardising the rights to information and free speech," noted digital rights group Access Now.

"It's a bad law ... the Data Protection Board lacks independence from the government, which is among the largest data miners (and) people whose privacy has been breached are not entitled to compensation, and are threatened with penalties," said Namrata Maheshwari, Access Now's policy counsel in Asia.

On Oct. 31, China's most popular social media sites - including microblogging platform Weibo, super app WeChat, Chinese TikTok Douyin and search engine Baidu - announced that so-called self-media accounts with more than 500,000 followers will be required to display real-name information.

Self-media includes news and information not necessarily approved by the government, and the new measures will remove the anonymity of thousands of influencers on platforms that are used daily by hundreds of millions of Chinese.

Users have expressed concerns about privacy violations, doxxing and harassment, and greater state surveillance, and several bloggers have quit the platforms. Authorities in Vietnam said they are considering similar rules.

California Governor Gavin Newsom in October signed the Delete Act, which enables Californians to either ask data brokers to delete their personal data, or forbid them from selling or sharing it, with a single request.

"It helps us gain better control over our data and makes it easier to mitigate the risks that the collection and sale of personal information create in our everyday lives," the Electronic Frontier Foundation said.

But a federal judge in September blocked enforcement of the California Age-Appropriate Design Code, which was seen as a major win for privacy protections and children's safety online when it was passed last year.

The Chilean Supreme Court in August issued a ruling ordering Emotiv, a U.S. producer of a commercial brain scanning tool, to erase the data it had collected on a former Chilean senator, Guido Girardi.

The ruling - the first of its kind - puts Latin America at the forefront of a new race to protect the brain from machine mining and exploitation, with countries including Brazil, Mexico and Uruguay considering similar provisions.

"It is a significant victory for privacy advocates and sets a precedent for the protection of neural data around the world through the explicit establishment and protection of neurorights," the NeuroRights Foundation, a U.S.-based advocacy group, said.

(Reporting by Rina Chandran. Editing by Zoe Tabary)

See the original post:

This year in privacy: Wins and losses around the world | Context - Context

Artificial intelligence could ‘revolutionise’ chemistry but researchers warn of hype – Chemistry World

Artificial intelligence can revolutionise science by making it faster, more efficient and more accurate, according to a survey of European Research Council (ERC) grant winners. And while the report looks at the impact of AI on all scientific fields, chemistry in particular can be expected to benefit greatly from the revolution, say researchers. But there are also warnings that AI is being overhyped, and avowals of the importance of human experts in chemical research.

The ERC report summarises how 300 researchers are using AI in their work, and what they see as its potential impacts and risks by 2030. Researchers in the physical sciences report that AI has become essential for data analysis, and for working on advanced simulations. They also note the applications of AI systems to perform calculations, operate instruments and control complex systems.

But they warn AI could spread false or inaccurate information, and that it might have a harmful impact on research integrity if researchers overuse AI tools to write research papers. They also express concerns about AI's lack of transparency and scientific replicability: AI was likened to a "black box" that can generate results without any underlying understanding of them.

Princeton University's Michael Skinnider, who uses machine learning to identify molecules with mass spectrometry, says AI's greatest advances will be in analysing data, rather than in the use of AI tools like large language models as aids for writing and researching. As well as extracting value from large datasets, AI would allow scientists to collect even larger datasets through more complex and ambitious experiments, "with the expectation that we will be able to sift through huge amounts of data to ultimately arrive at new biological insights," he says.

It's a view also held by Tim Albrecht at the University of Birmingham, who adds that the latest AI systems can determine through training what features they should look for in data, as well as simply finding data features that they've been pre-programmed for.

Gonçalo Bernardes of Cambridge University, who has used AI methods to optimise organic reactions, stresses that AI can also usefully analyse small data sets. "I believe its true power comes when dealing with small datasets and being able to inform on specific questions, [such as] what are the best conditions for a given reaction," he says.

And Simon Woodward of the University of Nottingham notes the ability of AI to inspire intuitive guesses. "We have found the latest generations of message-passing neural networks show the highest potential for such approaches in catalysis," he says.

Chemist Keith Butler at University College London specialises in using AI systems to design new materials. He agrees that AI will create major changes in chemical research, but says it can't replace expert humans. "There has been a lot of talk about self-driving autonomous labs lately, but I think that fully closed-loop labs are likely to be limited to specialist processes," he says. "One could argue that scientific research is often advanced by edge-cases, so full automation is hard to imagine."

Butler makes an analogy between AI chemistry and self-driving cars. "While AI has not led to fully autonomous vehicles, if you drive a car produced today compared to a car produced 15 years ago you will see just how much AI can change the way we operate: sat nav, parking guidance, sensors and indicators for all sorts of performance," he says. "I already see significant impact of AI and in particular machine learning in the chemical sciences, but in all cases human experts checking and guiding the process is critical."

Princeton's Skinnider adds that he is less convinced of the potential for AI to replace higher-level thinking, such as AI for scientific discovery or for generating new scientific hypotheses, two hyped aspects of AI touched on in the ERC report. "Isn't there some amount of joy inherent in these processes that motivates people to become scientists in the first place?"

Read this article:

Artificial intelligence could 'revolutionise' chemistry but researchers warn of hype - Chemistry World

Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements – Government Accountability Office

Office of Management and Budget: The Director of OMB should ensure that the agency issues guidance to federal agencies in accordance with federal law, that is, to (a) inform the agencies' policy development related to the acquisition and use of technologies enabled by AI, (b) include identifying responsible AI officials (RAIO), (c) recommend approaches to remove barriers for AI use, (d) identify best practices for addressing discriminatory impact on the basis of any classification protected under federal nondiscrimination laws, and (e) provide a template for agency plans that includes the required contents. (Recommendation 1)
Office of Management and Budget: The Director of OMB should ensure that the agency develops and posts a public roadmap for the agency's policy guidance to better support AI use, and, where appropriate, includes a schedule for engaging with the public and timelines for finalizing relevant policy guidance, consistent with EO 13960. (Recommendation 2)
Office of Science and Technology Policy: The Director of the Office of Science and Technology Policy should communicate a list of federal agencies that are required to implement the Regulation of AI Applications memorandum requirements (M-21-06) to inform agencies of their status as implementing agencies with regulatory authorities over AI. (Recommendation 3)
Office of Personnel Management: The Director of OPM should ensure that the agency (a) establishes or updates and improves an existing occupational series with AI-related positions; (b) establishes an estimated number of AI-related positions, by federal agency; and, based on the estimate, (c) prepares a 2-year and 5-year forecast of the number of federal employees in these positions, in accordance with federal law. (Recommendation 4)
Office of Personnel Management: The Director of OPM should ensure that the agency creates an inventory of federal rotational programs and determines how these programs can be used to expand the number of federal employees with AI expertise, consistent with EO 13960. (Recommendation 5)
Office of Personnel Management: The Director of OPM should ensure that the agency issues a report with recommendations for how the programs in the inventory can be used to expand the number of federal employees with AI expertise and shares it with the interagency coordination bodies identified by the Chief Information Officers Council, consistent with EO 13960. (Recommendation 6)
Office of Personnel Management: The Director of OPM should ensure that the agency develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 7)
Department of Agriculture: The Secretary of Agriculture should ensure that the department (a) reviews the department's authorities related to applications of AI, and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 8)
Department of Agriculture: The Secretary of Agriculture should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 9)
Department of Commerce: The Secretary of Commerce should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 10)
Department of Commerce: The Secretary of Commerce should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 11)
Department of Education: The Secretary of Education should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 12)
Department of Energy: The Secretary of Energy should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 13)
Department of Health and Human Services: The Secretary of Health and Human Services should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 14)
Department of Health and Human Services: The Secretary of Health and Human Services should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 15)
Department of Homeland Security: The Secretary of Homeland Security should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 16)
Department of Homeland Security: The Secretary of Homeland Security should ensure that the department (a) reviews the department's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 17)
Department of Homeland Security: The Secretary of Homeland Security should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 18)
Department of the Interior: The Secretary of the Interior should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 19)
Department of the Interior: The Secretary of the Interior should ensure that the department (a) reviews the agency's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 20)
Department of the Interior: The Secretary of the Interior should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 21)
Department of Labor: The Secretary of Labor should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 22)
Department of State: The Secretary of State should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 23)
Department of Transportation: The Secretary of Transportation should ensure that the department (a) reviews the department's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 24)
Department of Transportation: The Secretary of Transportation should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 25)
Department of the Treasury: The Secretary of the Treasury should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 26)
Department of the Treasury: The Secretary of the Treasury should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 27)
Department of Veterans Affairs: The Secretary of Veterans Affairs should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 28)
Environmental Protection Agency: The Administrator of the Environmental Protection Agency should ensure that the agency fully completes and approves its plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 29)
Environmental Protection Agency: The Administrator of the Environmental Protection Agency should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 30)
General Services Administration: The Administrator of General Services should ensure that the agency develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 31)
General Services Administration: The Administrator of General Services should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 32)
National Aeronautics and Space Administration: The Administrator of the National Aeronautics and Space Administration should ensure that the agency updates and approves the agency's plan to achieve consistency with EO 13960 section 5 for each AI application, to include retiring AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 33)
National Aeronautics and Space Administration: The Administrator of the National Aeronautics and Space Administration should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 34)
U.S. Agency for International Development: The Administrator of the U.S. Agency for International Development should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 35)

Excerpt from:

Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements - Government Accountability Office

El Camino using artificial intelligence audio recorder in classrooms to aid disabled students – El Camino College Union

In an age where technology develops seemingly every day, El Camino College has been utilizing certain artificial intelligence programs to help students with disabilities get their proper education.

Otter.ai is an AI program that audio records conversations in real-time, automatically transcribes audio into a written text and can even help generate short summaries of longer texts.

For 40-year-old business student Clay Grant, the use of this program has improved his academic career at El Camino.

Before his recent enrollment at El Camino, Grant worked as a Deputy Sheriff at the Los Angeles County Sheriff's Department for close to 15 years.

After suffering from a stroke in 2021, Grant had difficulty with reading, spelling and memorization skills. Despite this, he returned to school.

Although Grant has certain limitations in the classroom, he says the transcription program serves as an aid; it is not a necessity for him or his grades.

"I didn't struggle [in class], it was more of an enhancement," Grant said.

Grant liked how easy the program is to navigate, and he said it helped him retain even more information. Because of the benefit of using Otter in class, Grant believes its use should be expanded beyond students with disabilities.

"I would say anybody, even if they don't have a disability, should use [Otter]," Grant said.

The El Camino Special Resource Center has an agreement with Otter.ai that allows the college to give qualifying students licenses for the program, which they can then use in the classroom.

While Otter offers a free version of its services, premium features require a monthly fee. Individual pricing starts at $10 a month.

The Special Resource Center serves around 1,000 students and has 100 licenses available from Otter, though only 60 to 80 are used per semester.

To receive a license for Otter, students must go through a process.

It begins with documenting one's disability and a consultation that determines whether the student qualifies.

If a student qualifies for a license, they then speak with their teachers and come to an agreement about recording in the classroom that works for both of them.

Once this happens, the student signs a contract highlighting what they are and are not allowed to do with the program. Certain stipulations must be followed when using the program in an in-person classroom setting.

They are then taught how to use Otter by Brian Krause, the Special Resource Center assistive computer technology specialist.

"Roles and responsibilities indicate that the student cannot share the recording with others and it's to be used in the context of the classroom and educational use," said Bonnie Mercado, Special Resource Center supervisor.

Although there are licenses available, not all students need one.

"Medical documentation will go ahead and indicate the level of need, so it's not cookie cutter, it's all very individualized," Mercado said.

Although not every license is put to use, there are hopes of increasing usage and possibly the number of licenses as well.

"If we need to buy more, we will as we increase [the number of licenses]," Krause said.

The Special Resource Center has been using Otter for two semesters, and they've received a lot of positive feedback.

"The interface is clean, sleek, and, again, a lot more user-friendly," Mercado said. "Trying to keep the students in mind with regards to easy use."

Krause attended the technology conference for persons with disabilities held each year by California State University Northridge.

"This is where everybody goes with the latest technology, sharing information. So this is where we find out about how other schools are using it and people do presentations," Krause said.

Along with talking to colleagues and other peers throughout the state, the Special Resource Center found Otter to be a better fit for them than their previous program, Sonocent Audio Notetaker.

"As a person with a disability, I enjoyed having the visual representation and stuff that was there," Krause said.

Originally posted here:

El Camino using artificial intelligence audio recorder in classrooms to aid disabled students - El Camino College Union

Pope Francis calls for international treaty on artificial intelligence – National Catholic Reporter

Pope Francis on Dec. 14 called for a binding international treaty to regulate the development and use of artificial intelligence, saying that while new advancements could result in groundbreaking progress, they could also lead to a "technological dictatorship."

"The goal of regulation, naturally, should not only be the prevention of harmful practices but also the encouragement of best practices, by stimulating new and creative approaches and encouraging individual or group initiatives," said Francis.

The pope's request came in his message for the World Day of Peace, which is celebrated by the Catholic Church each year on Jan. 1. Each year the pope sends the document to heads of state and other global leaders along with his New Year's wishes. In addition, the pope typically gives an autographed copy of the document to high-profile Vatican visitors.

"Any number of urgent questions need to be asked. What will be the consequences, in the medium and long term, of these new digital technologies?" Francis asked in his six-page document on artificial intelligence. "And what impact will they have on individual lives and on societies, on international stability and world peace?"

The release of the pope's message comes just days after what was hailed as a landmark agreement within the European Union that provides the first global framework for artificial intelligence regulation.

At the same time, in the United States, a bipartisan group of lawmakers has been formed to consider what artificial intelligence guardrails might be necessary, though there is no clear timeframe for when such legislation may be considered. And in recent months, big tech entrepreneurs in Silicon Valley have been embroiled in a series of controversies over the future of artificial intelligence and what, if any, limits should be imposed on their own industry.

As in Laudate Deum, his October 2023 apostolic exhortation on climate change, Francis uses his World Day of Peace message to issue a clarion call for a greater commitment to multilateral action to better regulate emerging technologies.

"The global scale of artificial intelligence makes it clear that, alongside the responsibility of sovereign states to regulate its use internally, international organizations can play a decisive role in reaching multilateral agreements and coordinating their application and enforcement," he writes.

While the document acknowledges that artificial intelligence could yield tremendous benefits for human development, among them innovations in agriculture, education and social connection, the pope offers a stern warning that it could "pose a risk to our survival and endanger our common home."

At a time when artificial intelligence is being used to execute the ongoing war in Gaza and is widely utilized in other armed conflicts, the pope sounds the alarm that the use of such technology could not only fuel more war and the weapons trade, but make peace even more unattainable.

"The ability to conduct military operations through remote control systems has led to a distancing from the immense tragedy of war and a lessened perception of the devastation caused by those weapon systems and the burden of responsibility for their use," he writes.

"Autonomous weapon systems can never be morally responsible subjects," he continues. "It is imperative to ensure adequate, meaningful and consistent human oversight of weapon systems. Only human beings are truly capable of seeing and judging the ethical impact of their actions, as well as assessing their consequent responsibilities."

Among the other admonitions Francis offers is a warning against overreliance on technology for language processing, surveillance and security. Such products and innovations, he cautions, raise serious questions about privacy, bias, "fake news" and other forms of technological manipulation.

At a Dec. 14 Vatican press conference, Jesuit Cardinal Michael Czerny, a close collaborator of Francis, said that the pope is "no Luddite" and celebrates genuine scientific and technological progress. But he warned that artificial intelligence is a high-stakes gamble and that such digital technologies rely on the individual and social values of their creators.

"We should not liken techno-scientific progress to a 'neutral' tool such as a hammer: whether a hammer contributes to good or evil depends upon the intentions of the user, not of the hammer-maker," said Czerny, who heads the Vatican's Dicastery for Promoting Integral Human Development.

Barbara Caputo, who teaches at the Polytechnic University of Turin and directs the university's Hub on Artificial Intelligence, called for greater technical training on artificial intelligence that is inclusive of men and women from all over the world, rather than select elites.

"Artificial intelligence will be true progress for humanity only if its technical knowledge in-depth will cease to be the domain of the few," she said. "The Holy Father reminds us that the measure of our true humanity is how we treat our most disadvantaged sisters and brothers."

In summary, writes the pope in the new document, "artificial intelligence ought to serve our best human potential and our highest aspirations, not compete with them."

"Technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary aggravate inequalities and conflicts, can never count as true progress," Francis warns.

Read the original: Pope Francis calls for international treaty on artificial intelligence - National Catholic Reporter