Researchers Identify the Target of Immune Attacks on Liver Cells in Metabolic Disorders – Weill Cornell Medicine Newsroom

When fat accumulates in the liver, the immune system may assault the organ. A new study from Weill Cornell Medicine researchers identifies the molecule that trips these defenses, a discovery that helps to explain the dynamics underlying liver damage that can accompany type 2 diabetes and obesity.

In a study published Aug. 19 in Science Immunology, researchers mimicked these human metabolic diseases by genetically altering mice or feeding them a high-fat, high-sugar diet. They then examined changes within the arm of the rodents' immune system that mounts defenses tailored to specific threats. When misdirected back on the body, this immune response, which involves B and T cells, damages the organs and tissues it is meant to protect.

"For the longest time, people have been wondering how T and B cells learn to attack liver cells, which are under increased metabolic stress due to a high-fat, high-sugar diet," said lead investigator Dr. Laura Santambrogio, who is a professor of radiation oncology and of physiology and biophysics, and associate director for precision immunology at the Englander Institute for Precision Medicine at Weill Cornell Medicine. "We have identified one protein, probably the first of many, that is produced by stressed liver cells and then recognized by both B and T cells as a target."

Back row, from left to right: Madhur Shetty; Marcus DaSilva Goncalves; Laura Santambrogio; Lorenzo Galluzzi; Aitziber Buqué. Front row, from left to right: Jaspreet Osan; Shakti Ramsamooj; Cristina Clement; Takahiro Yamazaki

"The activation of the immune system further aggravates the damage already occurring within this organ in people who have these metabolic conditions," she said.

In type 2 diabetes or obesity, the liver stores an excessive amount of fat, which can stress cells, leading to a condition known as nonalcoholic steatohepatitis, commonly called fatty liver disease. The stress leads to inflammation, a nonspecific immune response that, while meant to protect, can harm tissue over time. Researchers now also have evidence that B and T cell activity contributes, too.

B cells produce proteins called antibodies that neutralize an invader by latching onto a specific part of it. Likewise, T cells destroy infected cells after recognizing partial sequences of a target protein. Sometimes, as happens in autoimmune diseases, these cells turn on the body by recognizing self-proteins.

Dr. Santambrogio and her colleagues, including Dr. Lorenzo Galluzzi, assistant professor of cell biology in radiation oncology at Weill Cornell Medicine, and Dr. Marcus Goncalves, assistant professor of medicine at Weill Cornell Medicine and an endocrinologist at NewYork-Presbyterian/Weill Cornell Medical Center, as well as researchers from Dr. Lawrence Stern's group at the University of Massachusetts Medical School, wanted to know what molecule within liver cells became their target.

Examining the activity of another type of immune cell, called dendritic cells, led them to a protein, called PDIA3, that they found activates both B and T cells. When under stress, cells make more PDIA3, which travels to their surfaces, where it becomes an easier target for the immune system.

While these experiments were done in mice, a similar dynamic appears to be at play in humans. The researchers found elevated levels of PDIA3 antibodies in blood samples from people with type 2 diabetes, as well as from people with autoimmune conditions affecting the liver and its bile ducts.

Unlike in autoimmune conditions, however, improving one's diet and losing weight can reverse this liver condition. "The connection between diet and a decrease in fatty liver disease was already well established," Dr. Santambrogio said.

"We have added a new piece to the puzzle," she said, "by showing how the immune system starts to attack the liver."

Many Weill Cornell Medicine physicians and scientists maintain relationships and collaborate with external organizations to foster scientific innovation and provide expert guidance. The institution makes these disclosures public to ensure transparency. For this information, see profiles for Dr. Lorenzo Galluzzi and Dr. Marcus Goncalves.


Indiana’s new abortion ban may drive some young OB-GYNs to leave a state where they’re needed – Salon

On a Monday morning, a group of obstetrics and gynecology residents, dressed in blue scrubs and white coats, gathered in an auditorium at Indiana University School of Medicine. After the usual updates and announcements, Dr. Nicole Scott, the residency program director, addressed the elephant in the room. "Any more abortion care questions?" she asked the trainees.

After a few moments of silence, one resident asked: "How's Dr. Bernard doing?"

"Bernard is actually in really good spirits, I mean, relatively," Scott answered. "She has 24/7 security, has her own lawyer."

They were talking about Dr. Caitlin Bernard, an Indiana OB-GYN who provides abortions and trains residents at the university hospital. Bernard was recently caught in a political whirlwind after she spoke about an abortion she provided to a 10-year-old rape victim from Ohio. Bernard was the target of false accusations made on national television by pundits and political leaders, including Indiana's attorney general.

The doctors interviewed for this article said that they are not speaking on behalf of their school of medicine but rather about their personal experiences during a tumultuous moment that they worry will affect the way they care for their patients.

The vitriol directed at Bernard hit home for this group of residents. She has mentored most of them for years. Many of the young doctors were certain they wanted to practice in Indiana after their training. But lately, some have been ambivalent about that prospect.

Dr. Beatrice Soderholm, a fourth-year OB-GYN resident, said watching what Bernard went through was "scary." "I think that was part of the point for those who were putting her through that," Soderholm said. They were trying "to scare other people out of doing the work that she does."

In early August, Gov. Eric Holcomb, a Republican, signed a near-total abortion ban into law, making Indiana the first state to adopt new restrictions on abortion access since the Supreme Court overturned Roe v. Wade in June. When the ban takes effect Sept. 15, medical providers who violate the law risk losing their licenses or serving up to six years in prison.

These days, Scott, the residency program director, uses some meeting time with residents to fill them in on political updates and available mental health services. She also reminds them that legal counsel is on call round-the-clock to help if they're ever unsure about the care they should provide a patient.

"Our residents are devastated," Scott said, holding back tears. "They signed up to provide comprehensive health care to women, and they are being told that they can't do that."

She expects this will "deeply impact" how Indiana hospitals recruit and retain medical professionals.

A 2018 report from the March of Dimes found that 27% of Indiana counties are considered maternity care deserts, with no or limited access to maternal care. The state has one of the nation's highest maternal mortality rates.

Scott said new laws restricting abortion will only worsen those statistics.

Scott shared results from a recent survey of nearly 1,400 residents and fellows across all specialties at the IU School of Medicine: nearly 80% of the trainees said they were less likely to stay and practice in Indiana after the abortion ban.

Dr. Wendy Tian, a third-year resident, said she is worried about her safety. Tian grew up and went to medical school in Chicago and chose to do her residency in Indiana because the program has a strong family-planning focus. She was open to practicing in Indiana when she completed her training.

But that's changed.

"I, for sure, don't know if I would be able to stay in Indiana postgraduation with what's going on," Tian said.

Still, she feels guilty for "giving up" on Indiana's most vulnerable patients.

Even before Roe fell, Tian said, the climate in Indiana could be hostile and frustrating for OB-GYNs. Indiana, like other states with abortion restrictions, allows nearly all health care providers to opt out of providing care to patients having an abortion.

"We encounter other people who we work with on a daily basis who are opposed to what we do," Tian said. Tian said she and her colleagues have had to cancel scheduled procedures because the nurses on call were not comfortable assisting during an abortion.

Scott said the OB-GYN program at the IU School of Medicine has provided residents with comprehensive training, including on abortion care and family planning. Since miscarriages are managed the same way as first-trimester abortions, she said, the training gives residents lots of hands-on experience. "What termination procedures allow you to do is that kind of repetition and that understanding of the female anatomy and how to manage complications that may happen with miscarriages," she said.

The ban on abortions dramatically reduces the hands-on opportunities for OB-GYN residents, and that's a huge concern, she said.

The program is exploring ways to offer training. One option is to send residents to learn in states without abortion restrictions, but Scott said that would be a logistical nightmare. "This is not as simple as just showing up to an office and saying, 'Can I observe?' This includes getting a medical license for out-of-state trainees. This includes funding for travel and lodging," Scott said. "It adds a lot to what we already do to educate future OB-GYNs."

Four in 10 of all OB-GYN residents in the U.S. are in states where abortion is banned or likely to be banned, so there could be a surge of residents looking to go out of state to make up for lost training opportunities. The Accreditation Council for Graduate Medical Education, the body that accredits residency programs, proposed modifications to the graduation requirements for OB-GYN residents to account for the changing landscape.

For some of the Indiana OB-GYN residents, including Dr. Veronica Santana, a first-year resident, these political hurdles are a challenge they're more than willing to take on. Santana is Latina, grew up in Seattle, and has been involved in community organizing since she was a teenager. One reason she chose obstetrics and gynecology was how the field intersects with social justice. "It's political. It always has been, and it continues to be," she said. "And, obviously, especially now."

After Roe was overturned, Santana, alongside other residents and mentors, took to the streets of Indianapolis to participate in rallies in support of abortion rights.

Indiana could be the perfect battleground for Santana's advocacy and social activism. But lately, she said, she is "very unsure" whether staying in Indiana to practice after residency makes sense, since she wants to provide the entire range of OB-GYN services.

Soderholm, who grew up in Minnesota, has felt a strong connection to patients at the county hospital in Indianapolis. She had been certain she wanted to practice in Indiana. But her family in Minnesota, where abortion remains largely protected, has recently questioned why she would stay in a state with such a hostile climate for OB-GYNs. "There's been a lot of hesitation," she said. But the patients make leaving difficult. "Sorry," she said, starting to cry.

It's for those patients that Soderholm decided she'll likely stay. Other young doctors may make a different decision.

This story is part of a partnership that includes Side Effects Public Media, NPR, and KHN.

KHN (Kaiser Health News) is a national newsroom that produces in-depth journalism about health issues. Together with Policy Analysis and Polling, KHN is one of the three major operating programs at KFF (Kaiser Family Foundation). KFF is an endowed nonprofit organization providing information on health issues to the nation.




Back to School 2022 | University Of Cincinnati – University of Cincinnati

For Preet Khimasia, a third-year student in finance and business analytics, co-op was a major reason for choosing UC. Born in India, Khimasia knew he wanted to study abroad, but it was important to have a recognized leader in his chosen field.

"UC had one of the best programs for what I wanted to do," he says. "What really attracted me is I am a very hands-on learner and a very experiential learner. I can't just sit in a classroom and expect to study everything. I need to actually get out and do in order to be successful."

Cooperative education began at UC in 1906, and the program has remained a leader in experience-based learning ever since. The university ranks No. 4 in the nation for co-op, with Cincinnati's hands-on classroom extending to nearly every corner of the globe, from Fortune 500 companies to trailblazing experiences in places like China, Tanzania and South America.

UC students earn a collective $75 million annually working for thousands of employers including General Electric Aviation, Disney, Toyota, Kroger, Procter & Gamble and many more. UC has nearly 2,000 global partners for the co-op program with students participating in over 7,500 co-op opportunities each year.

Forbes recently noted the university's leading position and longevity in cooperative education.


Top Breast Reconstruction Expert to Chair UVA’s Department of Plastic and Maxillofacial Surgery – UVA Health Newsroom

Scott T. Hollenbeck, MD, FACS, has been named chair of UVA's Department of Plastic and Maxillofacial Surgery.

The University of Virginia School of Medicine has recruited internationally recognized plastic surgeon Scott T. Hollenbeck, MD, FACS, to lead its Department of Plastic and Maxillofacial Surgery. He succeeds Stephen Park, MD, FACS, who has served as the interim chair of the Department of Plastic and Maxillofacial Surgery since May 2020.

"Dr. Hollenbeck's talents as a surgeon, researcher and educator have made him a national leader in academic medicine," said Melina R. Kibbe, MD, the dean of the School of Medicine and chief health affairs officer for UVA Health. "He is ideally suited to lead our excellent Department of Plastic and Maxillofacial Surgery to even greater accomplishments in service to our patients and future generations of physicians and scientists."

Hollenbeck comes to UVA from Duke University/Duke Health, having served as Vice Chief of Research for the Division of Plastic, Maxillofacial, and Oral Surgery, director of The Human Fresh Tissue Lab, director of Breast Reconstruction, and director of the world-renowned Duke Flap Course, which teaches reconstructive surgery techniques to plastic surgeons from all around the world.

A specialist in breast reconstruction following cancer treatment, Hollenbeck holds several leadership positions in plastic surgery, including Vice President of Education for the American Society of Plastic Surgeons. The co-author of more than 100 peer-reviewed research publications, he focuses on the effect of obesity and tissue inflammation on breast cancer progression. He also holds several patents and has helped launch a biotechnology startup company. His research has been funded by the National Institutes of Health, Coulter Foundation, Plastic Surgery Education Foundation and the Southeastern Society of Plastic and Reconstructive Surgeons, among others.

In addition to his patient care and research, Hollenbeck has worked to address healthcare disparities in the Durham, N.C., area by performing community-based studies to identify barriers to care. During his residencies and academic career, he has received several teaching awards for his work in medical student education.

"Dr. Hollenbeck has an extraordinary reputation for his commitment to advancing medicine and improving patient care," said K. Craig Kent, MD, chief executive officer of UVA Health and executive vice president for health affairs at UVA. "I look forward to seeing what he will accomplish in collaboration with our excellent faculty in the Department of Plastic and Maxillofacial Surgery."

Executive Vice President and Provost Ian Baucom noted that Dr. Hollenbeck's appointment will be the latest step in cementing UVA's School of Medicine as one of the nation's leading public medical schools.

Hollenbeck earned his medical degree from The Ohio State University, then completed his residency in surgery at New York-Presbyterian - Cornell and a research fellowship in wound healing and vascular biology at Weill Cornell Medical College. He subsequently completed a residency in plastic surgery at Duke University Medical Center.

Hollenbeck said he was attracted to UVA by its influential tradition in plastic surgery, pioneered under the leadership of Milton Edgerton, MD; Raymond Morgan, MD; and many others. "Just look around the country and count all the highly regarded plastic surgeons who have trained at UVA. It's really impressive and comparable to any program I can think of," he said. "This program has an amazing history and is well positioned to attract and develop the next generation of impactful plastic surgeons. With the engaged and dynamic leadership of Dr. Melina Kibbe, Dr. Craig Kent, and Wendy Horton, the sky is the limit for UVA Plastic Surgery."

Hollenbeck will join UVA on November 28, 2022.


Nike Dresses The Giannis Immortality 2 in Lapis and Laser Blue – Sneaker News

As the summer months come to a close, the league's best will be meticulously working through the last of their training regimens before camp starts in September. Some guys opt for Pro-Am leagues to get into game shape, while Giannis Antetokounmpo has been staying sharp with his native Greek national team. Amid an emergence of warm-weather styles from the Zoom Freak 4 during EuroBasket contests, The Swoosh is keeping the kids in the fold with this Lapis-flavored Giannis Immortality 2.

Dressed in a shade eerily reminiscent of Minecraft's lapis lazuli ore, the darkened tone radiates with Laser Blue tooling on the sock liner and exterior heel counter, coupled with hot red hints in the lace structure that match the medial-side Swoosh. A faint yellow hue emanates from the primary tongue logos while mixing with the aforementioned reds for a marbled outsole. And in a playful conclusion to the kids' exclusive rendition of the Immortality 2 is a multi-color Swoosh.

For an up-close look at the Lapis Giannis Immortality 2, check out the images below while we wait on any further details.

In other Giannis Hoops news, have you seen the Yellow Strike Giannis Immortality 2?

Where to Buy

Make sure to follow @kicksfinder for live tweets during the release date.

Grade School: $85
Style Code: DQ1943-400


Djokovic: Sticking to Principle at Cost of Immortality – THISDAY Newspapers

In two weeks' time, the U.S. Open will begin in New York City. Novak Djokovic would be the presumed favorite, except that, as of now, he won't be there. His unvaccinated status bars him from entering the United States.

This is not a column caping for Djokovic to be allowed to play, nor is it a condemnation of him for remaining unvaxxed. We're all in our corners on this one; nothing said here is going to change anyone's mind. That conversation is mostly pointless.

But whether you agree or disagree with Djokovic's stance, there is this: he's remained firm in his conviction no matter the cost, and that cost is potentially monumental, maybe even literally.

Djokovic currently sits at 21 major championship victories, one behind Rafael Nadal for the most of all time among men. Because of his vax status, Djokovic already missed the Australian Open earlier this year. He would have been the heavy favorite to win there, but without him, Nadal claimed the title and the lead in the race for most majors won.

Assuming nothing changes in the U.S.'s stance on not allowing unvaccinated foreigners into the country, Djokovic will miss another major he'd otherwise be favored to win.

The Big Three of Roger Federer (20 major titles), Nadal (22) and Djokovic (21) have won 63 of the last 77 majors, but their domination is coming to an end. Federer, at 41, has pretty much bowed out of the major race, leaving it to Nadal and Djokovic.

At 36, Nadal has been battling injuries, though even at 75 percent he may still have another French Open title or two left in him. He very well could reach 24 or 25.

Djokovic, at 35, has yet to show much sign of slowing, but the fall can happen fast in tennis. Does he have three or four more majors in him, a Hall of Fame career in itself for any tennis player, to potentially keep up with Nadal?

This is what's at stake here for Djokovic: immortality. The winningest men's player of all time, or maybe not. This is no small thing, he knows it, and yet he's stood strong in his conviction. And it must be noted that that conviction is not to be a hero for the unvaxxed, but rather a personal decision based on what he believes is best for him.

Some of you will cheer him for it, others will ridicule him, and eventually history (as it always does) will have its say. And if Djokovic is No. 2 on the list, what a wild discussion that will be. Culled from Yahoo.com


The agony of the nearly perfect game: Tampa Bay pitcher flirts with immortality – Yahoo Sports

On a quiet August Sunday afternoon, in front of a more-than-half-empty stadium, throwing for one unremarkable team against another, Drew Rasmussen nearly achieved history.

Unless you're a Tampa Bay Rays fan, or a really devoted Milwaukee Brewers fan, you probably have no idea who Drew Rasmussen is. Drafted in 2018, he made his major-league debut in 2020 for Milwaukee and got dealt to Tampa Bay early last season. He threw well enough to earn a spot in the Rays' rotation, but had never even reached the eighth inning of a start prior to Sunday. And then he nearly pulled off something that no MLB pitcher (not Max Scherzer, not Justin Verlander, not Shohei Ohtani or Clayton Kershaw, nobody) has done in 10 years.

Rasmussen took a perfect game into the ninth inning. In a game of positioning with the final AL wild-card spot at stake, Rasmussen faced 24 Orioles through eight innings, and set down all 24 of them, one after the other. He was dealing; he'd thrown just 79 pitches, and reached ball three on only two batters, both in the second inning.

And then, in one of those wrenching moments that make sports so glorious and so heartbreaking, he served up his first pitch of the ninth inning, an 86-mph cutter, only to watch Baltimore's Jorge Mateo rip it down the left-field line. Just like that, the perfect game was gone. Rasmussen would go on to get the win, and the Rays now have a crucial series win against Baltimore in the wild-card battle, but the chance for immortality vanished.

"I mean, I'll take it," Rasmussen said after the game, per the Tampa Bay Times. "Eight perfect [innings]. It helps our team's chance of winning. I wouldn't say it was disappointing. I came that close. Very few can say they've done that."

He's right. There have been nearly 235,000 baseball games played at the major-league level since 1876 ... and only 23 perfect games. They arrive out of nowhere, and rarely from the pitchers you'd expect. Greg Maddux never threw one. Nor did Roger Clemens, or Tom Seaver, or Nolan Ryan. You know who has? Guys like Philip Humber and Tom Browning and Len Barker, pitchers who had everything working in perfect harmony for nine magical innings for one single day of their careers.


No one in the majors has thrown a perfect game since Aug. 15, 2012, when Félix Hernández did it for the Mariners. Rasmussen might never get this close to perfection again, but then again, given the fact that he threw a perfect game in college for Oregon State, you never know.

What's maddening is how many perfect games seem to end, like Rasmussen's, in the ninth inning, with glory in sight but not yet achieved. Last year, the White Sox' Carlos Rodón lost perfection in the ninth when he hit a Cleveland batter in the foot. In 2015, Scherzer, then with Washington, plunked a batter with two outs in the ninth. Texas' Yu Darvish set down 26 batters before the 27th rolled a hit right back up through the middle. And in one of the worst officiating atrocities in sports, Detroit's Armando Galarraga lost a perfect game when an umpire incorrectly ruled a runner safe at first with two men out in the ninth. In all, 13 would-be perfect games have been spoiled by the 27th batter.

Baseball is the worst of all sports at getting hung up on its own mythology, the whole "Field of Dreams" treacle and nostalgia for bygone good ol' days always threatening to outshine the players and games of today. But in moments like this, when someone rises up to touch immortality, there really is something to all that "Baseball Is Life" jazz.

On any given day, you might achieve perfection. And if you don't, you get up the next day and start throwing again. Because you never know what might happen then, either.

This story was adapted from Read & React, Yahoo Sports' morning newsletter.

ST PETERSBURG, FLORIDA - AUGUST 14: Drew Rasmussen #57 of the Tampa Bay Rays reacts during the ninth inning against the Baltimore Orioles at Tropicana Field on August 14, 2022 in St Petersburg, Florida. (Photo by Douglas P. DeFelice/Getty Images)


Contact Jay Busbee at jay.busbee@yahoo.com or on Twitter at @jaybusbee.


The messages that survived civilisation’s collapse – BBC

That literary heritage ranged high and low, and included hymns and omens, but also very old drinking songs. As in the Maya world, the link between writing and power was advertised through monumental inscriptions. Nabu-kusurshu's tablets were sustained and protected by an entire culture.

But there was, perhaps, also an element of individual choice. Nabu-kusurshu appears to have taken pride in his writing, and taken care to perfect it, given how exceptionally neat it was.

Crisostomo is scouring the world's museums for more of Nabu-kusurshu's tablets, of which about 24 have been found. He has studied every detail of the brewer's handwriting, from how he shaped his signs to how he spaced his lines. "It's things like that where you start to really feel like you know these people."

Despite his own love for written language, Crisostomo says his message for the future would probably be an image so that "it could transcend the need for language", and avoid the pitfalls of decipherment.

It appears, then, that a good rule of thumb is to make your message to the future either gargantuan enough that it can't be ignored, or so small that it slips through history almost unnoticed, perhaps protected by its low profile. A visual or contextual cue seems to help, be it by adding a picture, or placing it somewhere relevant to its meaning like a temple or monument. And the scholars appeared to find it obvious that it was better to use an existing language, than try to make up an artificial, future-proof one. After all, real languages have cultures to love and support them, providing future decipherers with a wealth of clues and meaning.

In fact, cuneiform is experiencing a renaissance these days, as a young generation of Iraqis learn and experiment with it. A similar spirit is infusing the Maya hieroglyphs with new life. Native Maya speakers use it to make art, and put up new stelae to commemorate important events.

That human connection and fellowship, across vast stretches of time, perhaps forms the final step for an immortal message. As much effort as we may put into it, we can only trust that at the other end of the line, there'll be another person hearing our faint voice, and caring enough to listen.

Crisostomo often remembers this when he works on ancient tablets, some marked by thumb-prints of long-dead scribes. "Sometimes you'll sit there and you put your thumb right in that same space, and you think, 'OK, maybe this person was holding this tablet just like this, 4,000 years ago, and they're holding it and they're writing it, and I'm sitting here, reading what they wrote.'"

* Sophie Hardach is the author of Languages Are Good For Us, a book about strange and wonderful ways in which humans have used languages throughout history.


Need to drain vish out of our minds to make the body politic Amrit – The New Indian Express

In the Hindu myth of samudra manthan, when the ocean is churned, it expels both amrit (nectar that promises immortality) and vish (lethal poison that threatens to destroy creation). Without getting distracted by details of how Shiva came to the rescue and swallowed the poison that turned his throat blue, and how Jayant, Indra's son, escaped with the pitcher of amrit, denying the daityas their share to keep it exclusive for the devas, we shall do well to reflect on this metaphoric tale in the context of the grand finale of the Azadi ka Amrit Mahotsav.

It is indeed a time of great rejoicing as we celebrate 75 years of independence. India has made great advances in diverse fields since 1947. The people owe it to those who sacrificed their lives during the freedom struggle, and they should be gratefully commemorated. At the same time, we cannot turn a blind eye to major flaws and failures, fissures in society. This should be a time of honest stock-taking.

A Dalit boy beaten to death in Rajasthan by a teacher, allegedly for touching and polluting a pot of water kept for members of an upper caste, should make us all, 1.3 billion and more, hang our heads in shame. In a state ruled by the Congress, a party that never tires of boasting of its secular, egalitarian, progressive credo, this incident is revolting, to say the least. Full-page tricolour ads with the CM's photo make one's stomach churn. Of course, the usual noises are being made: the guilty will be brought to book, the harshest punishment will be meted out, and the law will take its course.

Those in power, regardless of the party they belong to (chameleon-like changes of colour have made a mockery of anti-defection laws in any case), have lost all credibility. This is not the first such case of casteist prejudice turning murderous. Decades after the abolition of princely privileges, a toxic feudal mindset continues to wreck the concept of rule of law. Due process stretches over decades and witnesses turn hostile, political power changes hands, and the accused, in many cases, walk out free. Most, protected by powerful patrons, are released on bail to live their normal barbaric lives. The brutalised, mostly poor and Dalit/tribal, are scarred for life, even if they survive.

At times, the Supreme Court intervenes and justice appears to be done, transparently. Another recent incident, though it thankfully didn't result in loss of life, should make us think hard about the price we pay for constantly celebrating on command. A foul-mouthed goon flaunting his affiliation with the ruling party in Uttar Pradesh vulgarly abused a woman, a fellow resident in an upmarket residential society, pushed her, outraging her modesty, and was caught on camera doing all this. He then fled and remained incognito until the police, under pressure, registered cases against him. Now, the community he belongs to has risen almost in revolt against this "political vendetta".

The honour of the entire community, they say, has been sullied. The local MP is being branded a villain. To cut a long story short: how long are we going to suffer this assault on the rule of law? Why should mobs of musclemen useful to their political masters be allowed to interfere with police investigations or court proceedings? CM Adityanath has become famous for the "Yogi model" of taming outlaws. The bulldozers he dispatches to demolish the dens of goons who dare his might are no less dreaded than the raids of the ED, CBI and other Central agencies. Whether this man was a small-time neta or a big-time local racketeer is beside the point. Prima facie, the evidence against him is overwhelming. Yet when the bulldozers razed his unauthorised construction, his supporters were suddenly reminded of due process not being followed. Equality before the law of the bulldozer wasn't an acceptable concept.

The fetters of caste, community and sectarian hate will not disappear as long as unreformed electoral politics holds sway. Add corruption (again tolerated and protected by the vested interests of a particular caste, community or religion) and we have a most explosive brew. The murderous attack on Salman Rushdie reminds us that the passage of time is no guarantee that wounds will heal on their own. Blasphemy and freedom of speech are no less life-threatening issues in India today than in New York or Paris.

The PM has a gift for inspirational rhetoric. As the curtains came down on the Amrit Mahotsav, he shared his vision for the coming decades as a run-up to an even more glorious centenary celebration of independence. The future appeared rainbow-hued even on a day overcast with dark clouds. Many of us will not be around in 2047. The aspirational young of today will have become middle-aged, many with their dreams unfulfilled. It will be very sad if, by then, we haven't drained the vish out of our minds to heal the body politic.

Pushpesh Pant

Former professor, Jawaharlal Nehru University

pushpeshpant@gmail.com

Read more here:
Need to drain vish out of our minds to make the body politic Amrit - The New Indian Express

Weatherford Art Association names Artists of the Month – Weatherford Democrat

Each month, local artists compete in the Weatherford Art Association's Artist of the Month competition. The competition takes place at the monthly meetings, held on the fourth Monday of the month at 6 p.m. at 125 S. Waco St. in Weatherford. Artists show their work in oil, watercolor, pastel, mixed media, acrylic and other mediums.

Winners at the last meeting, at the end of July, were led by Marti Bailey, whose oil on canvas Violet Waters took first place. This painting, along with other pieces of her work, can be found at the Doss Cultural & Heritage Center all month.

Second place went once again to Vikki Linderman for her acrylic on canvas titled Day's End. Vikki's work is displayed at First Bank Texas in downtown Weatherford.

Kathy Cunning, a longtime Parker County resident, won third place with her acrylic Reblooming Immortality. Her work can be seen at the Community Credit Union.

Weatherford Art Association hosts interesting demos each month that offer techniques and skills to artists both accomplished and just beginning. In September, Patsy Walton will present her impressionistic and abstract style in painting florals. In October, Jack Harkins will show how to paint in an impressionistic style while keeping objects recognizable. Joan Frost Prine, wife of the late western artist Doug Prine and a self-taught artist in her own right who creates beautiful works of the old West, is also an upcoming guest. Meetings begin at 6 p.m. and are completed by 7:45 p.m.

The next show hosted by WAA is the Spirit of the West, which showcases art from around the Southwest. The deadline for submission of paintings is September. For more information regarding this show, visit weatherfordart.com.


Link:
Weatherford Art Association names Artists of the Month - Weatherford Democrat

‘The Sandman’ review: A modern myth mounted on an epic canvas – The New Indian Express

Express News Service

The Sandman is a deeply meditative and reflective piece of art. If the scope of a story depends upon the extent to which it delves into various themes and emotions, then the canvas upon which The Sandman delightfully paints is one of the grandest to be put on screen.

Almost every episode follows a theme: sacrifice, humanity, lies, death, immortality, being consumed by wishes, and being trapped in the past. The show is laced with philosophical discourse, to an almost obsessive extent, and that could be overbearing for some.

The immensity of the world of dreams and its influence on people and their lives are laid bare from the very first scene, through the eyes of the lord of dreams, Morpheus himself. Tom Sturridge, who plays Morpheus/Dream in the show, is brilliant in the way he uses his voice and micro-reactions. His dreamy (pun intended) performance teeters right on the edge between subtlety and indifference, so much so that if he had done it even slightly wrong, it could have looked bland.

How do you make a relatable protagonist out of someone who is eternal, all-knowing, and lords over the destinies of all things that sleep and dream? You don't. Morpheus is introduced to us as someone who holds his responsibilities in high regard and yet fails to understand the emotions of the subjects who populate his realm. And that is where his arc begins: we see him question human emotions, their base desires and fears, throughout the series.

Towards the middle, we even see him break down under an existential crisis. This leads to an interesting episode where his sister, Death (Kirby Howell-Baptiste), visits him and helps him clear his mind through a therapeutic conversation laced with sisterly love and care. We often see Death represented as a terrifying hooded figure holding a scythe, and this decision to show Death as a warm and caring sister is a refreshing one.

We are introduced to two antagonistic figures in the beginning. One is the Corinthian, a rogue nightmare, and the other is John Dee, the son of the man who kept Morpheus in captivity for over a century. While the Corinthian haunts the series like a recurring nightmare, punctuating his presence in the story through acts of shocking violence, it is John Dee's ideological clash with the lord of dreams that becomes, well... the stuff of dreams.

John Dee (David Thewlis) is the exact opposite of everything that defines Morpheus. Every act of violence he metes out is born directly of his cold, hard adherence to a black-and-white approach to morality. There is a brilliant bottle episode in which we see John unleash absolute carnage from a corner of a diner, using his powers to make people talk unfiltered.

Dream/Morpheus is said to have siblings: Destiny, Death, Destruction, Desire, Despair, and Delirium. We are introduced to Death, Despair, and Desire through brief moments laden with subtext. Dream relates to Death the most, while Desire and their twin Despair constantly scheme to destroy Dream.

Despite the expansive, mythical quality that pervades the show, it fails to weave a coherent narrative, though that hardly stops you from enjoying it. Quirky characters like Merv the Pumpkinhead (Mark Hamill), Cain and Abel, and even (the mostly enjoyable) Matthew the Raven (Patton Oswalt) do not affect the overarching story to a satisfactory extent. They merely exist to give us a peek into the possibilities for further seasons.

The show loses its steam halfway, both visually and storywise. The sequence set in hell is a perfect example of what went missing from the second half of the show. The jaw-dropping visuals of hell (ripped straight out of the graphic novel) and the battle of wits between Lucifer and Morpheus were truly sublime and the show could have used more of that.

Some episodes are like a dream, and much like a dream, the experience is immersive and deeply engaging while you're in it, but the moment you step away, you're left wondering what it was all about. But the episodes and moments that do work make all of it worth it. Take, for instance, the episode in which Morpheus meets a young, hapless writer in an old English tavern in the 16th century. Morpheus overhears the man pining to his friend about his wish to make men dream through his writing.

A captivated Morpheus is then shown taking the man away for a chat. We are not shown what happened during the conversation until years later, when it is revealed that the young writer was William Shakespeare. The show is peppered with interesting moments like these that might not necessarily serve the overall narrative, but that's okay. Not all stories need a tightly packed narrative.

With the aforementioned sequence with the bard, we are shown how the lord of dreams is also the lord of wishes and stories. Many of these moments in the series carry layers and layers of subtext, and it is up to us to decide how deep we want to go. There are pacing issues, quirky characters that don't really amount to much, and jarring tonal shifts... but if the world of dreams and the exhaustive yet rewarding experience of ruminating on the layers of subtext and philosophy sounds engaging, then you need only summon The Sandman... for Season 2.

Series: The Sandman Season 1
Streaming on: Netflix
Creators: Neil Gaiman, David S Goyer, and Allan Heinberg
Cast: Tom Sturridge, Boyd Holbrook, David Thewlis, Kirby Howell-Baptiste
Rating: 4/5

Read more:
'The Sandman' review: A modern myth mounted on an epic canvas - The New Indian Express

The Goddess of learning – The New Indian Express

Saraswati is the Goddess of learning, knowledge, speech, art and music. She is worshipped in three different forms: as speech, as a river, and as a goddess. Saraswati is shown as fair-complexioned, with four arms. She is seated on a swan or a white lotus and holds a veena (a stringed musical instrument). In her other hands, she holds a mala (prayer beads), a book symbolising the Vedas, and a water-filled kamandala (a small vessel with a handle). She is dressed in white and blue garments, befitting her status as a river. Unlike Durga, she does not carry any weapons.

Saraswati is also known as Vidyadatri (the Goddess who provides knowledge), Veenavadini (the Goddess who plays the veena), Pustakadharini (the Goddess who carries a book), Hamsavahini (the Goddess who sits on a swan) and Vagdevi (the Goddess of speech). In the Mahabharata, Saraswati is called the mother of the Vedas.

Saraswati is the only goddess of the tridevi (the other two goddesses are Lakshmi and Parvati) who is mentioned in the Vedas. It is remarkable that she has retained her importance even though most Vedic gods and goddesses declined in stature later. As a mighty and benevolent river, Saraswati had a unique place in the Vedic civilisation, the same place that the Ganga holds for modern-day Hindus. Kingdoms, ashrams and pilgrimage sites came up all along her banks. The Aryan tribes mentioned in the Rigveda, like the Purus, the Bharatas, the Nahushas, the Turvasas and the Yadus, had their settlements on the banks of the Saraswati. In the Vedas, she is praised as the bringer of nourishment, fertility, wealth, vitality, children and immortality. She is mentioned as the origin of all pleasant songs and pious thoughts.

In the Puranas, there is a story about how Saraswati became a river. There was a terrible battle between the Bhargavas, a group of Brahmanas, and the Hehayas, a group of Kshatriyas. Out of this was created an all-consuming fire called Vadavagni. At the request of the devas, Saraswati took the form of a river and carried Vadavagni to the ocean to save the world. Some studies have indicated that the river Saraswati later dried up due to movements in the earth's tectonic plates. Despite numerous attempts in modern times to trace it, the river's former course remains uncertain.

Saraswati Puja is held during the festival of Basant Panchami, in late January or February, which marks the start of preparations for spring. On this occasion, young children are encouraged to write their first words. Many families study or create music together. Many schools organise Saraswati Puja for their students. Saraswati is the only one of the tridevi who is worshipped more than her husband, Brahma. Being the goddess of learning, Saraswati is frequently remembered by students on the eve of exams.

Read the original here:
The Goddess of learning - The New Indian Express

AI and Machine Learning in Finance: How Bots are Helping the Industry – ReadWrite

Artificial intelligence and ML are making considerable inroads in finance. They are a critical aspect of various financial applications, including evaluating risks, managing assets, calculating credit scores, and approving loans.

Businesses use AI and ML in a wide range of such applications.

Taking the above points into account, it's no wonder that companies like Forbes and VentureBeat are using AI to predict cash flow and detect fraud.

In this article, we present the financial domains in which AI and ML are having the most significant impact, and discuss why financial companies should care about and implement these technologies.

Machine learning is a branch of artificial intelligence that allows systems to learn and improve without explicit programming. Simply put, data scientists train an ML model on existing data sets, and the model automatically adjusts its parameters to improve its outcomes.
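The training step described above can be sketched with scikit-learn (a toy model on invented transaction data, not any particular production system):

```python
# A minimal sketch of "training": the model's parameters are adjusted
# automatically to fit existing labeled data. Data here is invented.
from sklearn.linear_model import LogisticRegression

# Toy historical data: [amount, hour_of_day] -> fraud label (0 = legit, 1 = fraud)
X = [[12.0, 9], [15.5, 14], [980.0, 3], [1200.0, 2]]
y = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)  # parameter adjustment happens here, with no hand-written rules
print(model.predict([[1100.0, 3]]))  # a large late-night payment scores as fraud
```

The point is only that no fraud rule was ever written by hand; the boundary between the two classes was inferred from the examples.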

According to Statista, digital payments are expected to grow at an annual rate of 12.77% through 2026. Global revenue on this scale, transacted online, requires an intelligent fraud-detection system.


Traditionally, fraud-detection systems check the authenticity of users by analyzing factors like location, merchant ID, the amount spent, and so on. While this method is adequate for a small number of transactions, it cannot cope with today's increased transaction volumes.

Given this surge in digital payments, businesses can't rely on traditional fraud-detection methods to process payments, which has given rise to AI-based systems with more advanced features.

An AI- and ML-powered payment gateway looks at various factors to evaluate a risk score. These technologies consider a large volume of data (the merchant's location, the time zone, the IP address, etc.) to detect unexpected anomalies and verify the authenticity of the customer.

Additionally, AI lets the finance industry process transactions in real time, allowing the payments industry to handle large transaction volumes with high accuracy and low error rates.

The financial sector, including banks, trading firms and other fintech companies, is using AI to reduce operational costs, improve productivity, enhance the user experience, and improve security.

The benefits of AI and ML revolve around their ability to work with various datasets. So let's have a quick look at some of the other ways AI and ML are making inroads into this industry:

Considering how much money people invest in automation, AI significantly impacts the payments landscape. It improves efficiency and helps businesses rethink and reconstruct their processes. For example, businesses can use AI to decrease credit card processing time, increase automation, and seamlessly improve cash flow.

With AI and machine learning, you can make predictions in credit, lending, security, trading, and banking, and optimize processes.

Human error has always been a huge problem; machine learning models reduce the errors that arise when humans perform repetitive tasks.

Combining security with ease of use is a challenge that AI can help the payment industry overcome. Merchants and clients want a payment system that is both easy to use and trustworthy.

Until now, customers have had to perform various actions to authenticate themselves and complete a transaction. With AI, payment providers can streamline transactions while exposing customers to less risk.

AI can efficiently perform high-volume, labor-intensive tasks like quickly scraping and formatting data. AI-based operations are also focused and efficient; they have minimal operational cost and can be used in areas like the following:

Creating more Value:

AI and machine learning models can generate more value for their customers. For instance:

Improved customer experience: using bots, financial institutions like banks can eliminate the need to stand in long queues. Payment gateways can automatically reach new customers by gathering their historical data and predicting user behavior. Besides, AI used in credit scoring helps detect fraudulent activity.

There are various ways in which machine learning and artificial intelligence are being employed in the finance industry. Some of them are:

Process Automation:

Process automation is one of the most common applications, as the technology helps automate manual and repetitive work, thereby increasing productivity.

Moreover, AI and ML can easily access data, follow and recognize patterns, and interpret customer behavior. This can be used in customer support systems.

Minimizing Debit and Credit Card Frauds:

Machine learning algorithms help detect fraudulent transactions by analyzing data points that mostly go unnoticed by humans. ML also reduces the number of false rejections and improves real-time approvals by gauging clients' behavior on the Internet.

Apart from spotting fraudulent activity, AI-powered technology is used to identify suspicious account behavior in real time. Today, banks already have monitoring systems trained on historical payment data.

Reducing False Card Declines:

Payment transactions declined at checkout can be frustrating for customers and can have huge repercussions for banks and their reputations. Card transactions are declined when a transaction is flagged as fraud or the payment amount crosses a limit. AI-based systems are used to identify and resolve these transaction issues.
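One concrete lever such systems tune is the decision threshold on a model's fraud score: a lower threshold declines more fraud but also more legitimate payments. A toy illustration (scores and labels invented):

```python
# Sketch: counting false declines (legitimate payments blocked) at two
# different fraud-score thresholds. All numbers are invented.
import numpy as np

fraud_prob = np.array([0.02, 0.10, 0.35, 0.55, 0.90])  # model's fraud scores
is_fraud   = np.array([0,    0,    0,    1,    1])      # ground truth

for threshold in (0.30, 0.50):
    declined = fraud_prob > threshold
    false_declines = int(np.sum(declined & (is_fraud == 0)))
    print(threshold, false_declines)
# At 0.30 one legitimate payment is declined; at 0.50 none are,
# while both fraudulent payments are still caught.
```

Real systems choose this threshold from the relative cost of a missed fraud versus an annoyed, falsely declined customer.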

The influx of AI in the financial sector has raised new concerns about transparency and data security. Companies must be aware of these challenges and follow safeguard measures:

One of the main challenges of AI in finance is that much of the data gathered is confidential and sensitive. The right data partner will offer a range of security options, adhere to recognized standards, and protect data with the appropriate certifications and regulations.

Creating AI models in finance that provide accurate predictions is only successful if they can be explained to and understood by clients. In addition, since customers' information is used to develop such models, customers want assurance that their personal information is collected, stored, and handled securely.

So, it is essential to maintain transparency and trust in the finance industry to make customers feel safe with their transactions.

Apart from simply implementing AI in the online finance industry, industry leaders must be able to adapt their operations to new working models.

Financial institutions often work with substantial unstructured data sets held in vertical silos, and connecting dozens of data pipeline components and tons of APIs, on top of the security needed to leverage a silo, is not easy. So, financial institutions need to ensure that the data they gather is appropriately structured.

AI and ML are undoubtedly the future of the financial sector; the vast volume of processes, transactions, data, and interactions involved makes the sector ideal for various AI applications. By incorporating AI, the finance sector gains vast data-processing capabilities at the best prices, while clients enjoy an enhanced customer experience and improved security.

Of course, how much of AI's power is realized within transaction banking depends on how organizations put it to use. AI adoption is still very much a work in progress, but its challenges can be overcome as the technology matures. In short, AI is the future of finance, and you must be ready to embrace its revolution.

Featured Image Credit: Photo by Anna Nekrashevich; Pexels; Thank you!

See more here:
AI and Machine Learning in Finance: How Bots are Helping the Industry - ReadWrite

Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports – Nature.com

Participants

This study was conducted as part of the ongoing Study on the Design of a Comprehensive Medical System for Chronic Kidney Disease (CKD) Based on Individual Risk Assessment by Specific Health Examination (J-SHC Study). A specific health checkup is conducted annually for all residents aged 40-74 years covered by the National Health Insurance in Japan. In this study, a baseline survey was conducted in 685,889 people (42.7% male, aged 40-74 years) who participated in specific health checkups from 2008 to 2014 in eight regions (Yamagata, Fukushima, Niigata, Ibaraki, Toyonaka, Fukuoka, Miyazaki, and Okinawa prefectures). The details of this study have been described elsewhere11. Of the 685,889 baseline participants, 169,910 were excluded because baseline data on lifestyle information or blood tests were not available. In addition, 399,230 participants with a survival follow-up of fewer than 5 years from the baseline survey were excluded. Therefore, 116,749 participants (42.4% male) with a known 5-year survival or mortality status were included in this study.

This study was conducted in accordance with the Declaration of Helsinki guidelines. This study was approved by the Ethics Committee of Yamagata University (Approval No. 2008103). All data were anonymized before analysis; therefore, the ethics committee of Yamagata University waived the need for informed consent from study participants.

For validating a predictive model, the most desirable approach is a prospective study on unknown data. In this study, data on health checkup dates were available; therefore, we divided the total data into training and test datasets based on checkup dates. The training dataset consisted of 85,361 participants who underwent checkups in 2008. The test dataset consisted of 31,388 participants who underwent checkups from 2009 to 2014. These datasets were temporally separated, with no overlapping participants. This method evaluates the model in a manner similar to a prospective study and has the advantage of demonstrating temporal generalizability. For preprocessing, clipping was performed on the 0.01% outliers, and normalization was applied.
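The temporal split described above can be sketched in pandas (column names are assumptions for illustration; the actual dataset is not public):

```python
# Sketch: temporally separated train/test split by checkup year,
# mirroring the design above (2008 -> training, 2009-2014 -> test).
import pandas as pd

df = pd.DataFrame({
    "participant_id": [1, 2, 3, 4],          # hypothetical IDs
    "checkup_year":   [2008, 2008, 2010, 2013],
    "died_within_5y": [0, 1, 0, 1],
})

train = df[df["checkup_year"] == 2008]
test  = df[df["checkup_year"] >= 2009]

# The two sets share no participants, approximating a prospective evaluation.
assert set(train["participant_id"]).isdisjoint(test["participant_id"])
```

Splitting by time rather than at random is what lets the evaluation speak to temporal generalizability.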

Information on 38 variables was obtained during the baseline survey of the health checkups. When variables were highly correlated (correlation coefficient greater than 0.75), only one of them was included in the analysis. High correlations were found between body weight, abdominal circumference, and body mass index; between hemoglobin A1c (HbA1c) and fasting blood sugar; and between AST and alanine aminotransferase (ALT) levels. We therefore used body weight, HbA1c level, and AST level as explanatory variables. Finally, we used the following 34 variables to build the prediction models: age, sex, height, weight, systolic blood pressure, diastolic blood pressure, urine glucose, urine protein, urine occult blood, uric acid, triglycerides, high-density lipoprotein cholesterol (HDL-C), LDL-C, AST, γ-glutamyl transpeptidase (γ-GTP), estimated glomerular filtration rate (eGFR), HbA1c, smoking, alcohol consumption, medication (for hypertension, diabetes, and dyslipidemia), history of stroke, heart disease, and renal failure, weight gain (more than 10 kg since age 20), exercise (more than 30 min per session, more than 2 days per week), walking (more than 1 h per day), walking speed, eating speed, supper within 2 h of bedtime, skipping breakfast, late-night snacks, and sleep status.
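The correlation-based screening can be sketched as follows (synthetic data; only the 0.75 cutoff comes from the text, and "bmi" here is deliberately constructed to be nearly collinear with weight):

```python
# Sketch: drop one variable from each highly correlated pair (|r| > 0.75).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
weight = rng.normal(70, 10, 200)
df = pd.DataFrame({
    "weight": weight,
    "bmi": weight / 2.9 + rng.normal(0, 0.5, 200),  # nearly collinear with weight
    "age": rng.uniform(40, 74, 200),
})

corr = df.corr().abs()
# Keep only the upper triangle so each pair is checked once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.75).any()]
print(to_drop)  # only the redundant "bmi" column is flagged
```

Dropping one member of each correlated pair keeps the model from double-counting what is effectively a single signal.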

The values of each item in the training dataset were compared between the alive and deceased groups using the chi-square test, Student's t-test, and Mann-Whitney U test, and significant differences (P < 0.05) were marked with an asterisk (*) (Supplementary Tables S1 and S2).

We used two machine learning-based methods (a gradient boosting decision tree [XGBoost] and a neural network) and one conventional method (logistic regression) to build the prediction models. All models were built using Python 3.7. We used the XGBoost library for the GBDT, TensorFlow for the neural network, and Scikit-learn for logistic regression.

The data obtained in this study contained missing values. By design, XGBoost can be trained and can make predictions even with missing values; however, neural networks and logistic regression cannot. Therefore, we complemented the missing values using the k-nearest neighbor method (k = 5), and the test data were complemented using an imputer trained only on the training data.
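A sketch of this imputation step with scikit-learn's KNNImputer, fitted on training data only and then applied unchanged to test data (toy two-feature data, invented for illustration):

```python
# Sketch: k-nearest-neighbor imputation (k = 5) fitted only on training data.
import numpy as np
from sklearn.impute import KNNImputer

X_train = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0],
                    [4.0, 8.0], [5.0, 10.0], [6.0, 12.0]])
X_test = np.array([[2.5, np.nan]])  # second feature missing

imputer = KNNImputer(n_neighbors=5).fit(X_train)  # learns from training data only
print(imputer.transform(X_test))  # missing entry filled by the 5 nearest rows
```

Fitting the imputer on the training set alone is what keeps the test set "unknown data" in the sense of the prospective-style evaluation.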

The parameters required for each model were determined on the training data using the RandomizedSearchCV class of the Scikit-learn library, repeating fivefold cross-validation 5000 times.
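The search procedure looks roughly like this sketch with RandomizedSearchCV; to stay self-contained it uses logistic regression on synthetic data and far fewer iterations than the 5000 reported, so the parameter grid is an illustrative assumption:

```python
# Sketch: randomized hyperparameter search with fivefold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=100) > 0).astype(int)  # noisy labels

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": np.logspace(-3, 3, 50)},  # candidate settings
    n_iter=10,   # the paper repeats this 5000 times
    cv=5,        # fivefold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Each of the `n_iter` draws is scored by fivefold cross-validation on the training data alone, so the test set never influences the chosen parameters.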

The performance of each prediction model was evaluated by predicting the test dataset, drawing a receiver operating characteristic (ROC) curve, and computing the area under the curve (AUC). In addition, the accuracy, precision, recall, F1 score (the harmonic mean of precision and recall), and confusion matrix were calculated for each model. To assess the importance of explanatory variables in the predictive models, we used SHAP and obtained SHAP values, which express the influence of each explanatory variable on the output of the model4,12. The workflow diagram of this study is shown in Fig. 5.
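The evaluation metrics named above can be computed with scikit-learn as sketched below (toy labels and scores; SHAP itself is omitted because it requires the third-party `shap` package):

```python
# Sketch: AUC, confusion matrix and F1 score on toy predictions.
from sklearn.metrics import roc_auc_score, confusion_matrix, f1_score

y_true = [0, 0, 0, 1, 1]                    # alive (0) / deceased (1)
y_prob = [0.10, 0.40, 0.35, 0.80, 0.70]     # predicted mortality risk
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

print(roc_auc_score(y_true, y_prob))        # area under the ROC curve
print(confusion_matrix(y_true, y_pred))     # rows: true class, cols: predicted
print(f1_score(y_true, y_pred))             # harmonic mean of precision/recall
```

Note that AUC is computed from the continuous scores, while the confusion matrix and F1 score depend on the chosen classification threshold (0.5 here).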

Workflow diagram of development and performance evaluation of predictive models.

See the rest here:
Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports - Nature.com

Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder – University of Arkansas Newswire

Photo by University Relations

Khoa Luu and Han-Seok Seo

Could artificial intelligence be used to assist with the early detection of autism spectrum disorder? That's a question researchers at the University of Arkansas are trying to answer. But they're taking an unusual tack.

Han-Seok Seo, an associate professor with a joint appointment in food science and the UA System Division of Agriculture, and Khoa Luu, an assistant professor in computer science and computer engineering, will identify sensory cues from various foods in both neurotypical children and those known to be on the spectrum. Machine learning technology will then be used to analyze biometric data and behavioral responses to those smells and tastes as a way of detecting indicators of autism.

There are a number of behaviors associated with ASD, including difficulties with communication, social interaction or repetitive behaviors. People with ASD are also known to exhibit some abnormal eating behaviors, such as avoidance of some if not many foods, specific mealtime requirements and non-social eating. Food avoidance is particularly concerning, because it can lead to poor nutrition, including vitamin and mineral deficiencies. With that in mind, the duo intend to identify sensory cues from food items that trigger atypical perceptions or behaviors during ingestion. For instance, odors like peppermint, lemons and cloves are known to evoke stronger reactions from those with ASD than those without, possibly triggering increased levels of anger, surprise or disgust.

Seo is an expert in the areas of sensory science, behavioral neuroscience, biometric data and eating behavior. He is organizing and leading the project, including screening and identifying specific sensory cues that can differentiate autistic children from non-autistic children with respect to perception and behavior. Luu is an expert in artificial intelligence with specialties in biometric signal processing, machine learning, deep learning and computer vision. He will develop machine learning algorithms for detecting ASD in children based on unique patterns of perception and behavior in response to specific test samples.
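Purely as an illustration of the kind of model such a pipeline might involve, not the researchers' actual method, here is a classifier fit on simulated biometric responses to a stimulus (every feature name and number is invented):

```python
# Illustrative sketch only: a classifier on simulated biometric responses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
# Hypothetical features per child: [heart-rate change, skin-conductance change]
typical  = rng.normal(loc=[2.0, 0.1], scale=0.5, size=(50, 2))
atypical = rng.normal(loc=[6.0, 0.8], scale=0.5, size=(50, 2))

X = np.vstack([typical, atypical])
y = np.array([0] * 50 + [1] * 50)   # 0 = typical response, 1 = atypical

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[5.8, 0.7]]))    # a strong response scores as atypical
```

A real screening tool would of course need clinically validated features and far more careful evaluation than this toy separable example suggests.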

The duo are in the second year of a three-year, $150,000 grant from the Arkansas Biosciences Institute.

Their ultimate goal is to create an algorithm that performs as well as or better than traditional diagnostic methods in the early detection of autism in children; those methods require trained healthcare and psychological professionals, longer assessment durations, caregiver-submitted questionnaires and additional medical costs. Ideally, they will be able to validate a lower-cost mechanism to assist with the diagnosis of autism. While their system would not likely be the final word in a diagnosis, it could provide parents with an initial screening tool, ideally ruling out children who are not candidates for ASD while ensuring the most likely candidates pursue a more comprehensive screening process.

Seo said he became interested in the possibility of using multi-sensory processing to evaluate ASD when two things happened: he began working with a graduate student, Asmita Singh, who had a background in working with autistic students, and his daughter was born. Like many first-time parents, Seo paid close attention to his newborn baby, anxious that she be healthy. When he noticed she wouldn't make eye contact, he did what most nervous parents do: turned to the internet for an explanation. He learned that avoidance of eye contact is a known characteristic of ASD.

While his child did not end up having ASD, his curiosity was piqued, particularly about the role sensitivities to smell and taste play in ASD. Further conversations with Singh led him to believe fellow anxious parents might benefit from an early detection tool perhaps inexpensively alleviating concerns at the outset. Later conversations with Luu led the pair to believe that if machine learning, developed by his graduate student Xuan-Bac Nguyen, could be used to identify normal reactions to food, it could be taught to recognize atypical responses, as well.

Seo is seeking volunteers 5-14 years old to participate in the study. Both neurotypical children and children already diagnosed with ASD are needed. Participants receive a $150 eGift card, and interested families are encouraged to contact Seo at hanseok@uark.edu.

About the University of Arkansas: As Arkansas' flagship institution, the UofA provides an internationally competitive education in more than 200 academic programs. Founded in 1871, the UofA contributes more than $2.2 billion to Arkansas' economy through the teaching of new knowledge and skills, entrepreneurship and job development, and discovery through research and creative activity, while also providing training for professional disciplines. The Carnegie Foundation classifies the UofA among the few U.S. colleges and universities with the highest level of research activity. U.S. News & World Report ranks the UofA among the top public universities in the nation. See how the UofA works to build a better world at Arkansas Research News.

See the article here:
Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder - University of Arkansas Newswire

Identification of microstructures critically affecting material properties using machine learning framework based on metallurgists’ thinking process |…

Analysis of structure optimization problem of dual-phase materials

To demonstrate the potential of our framework for the structure optimization of multiphase materials in terms of a target property, a simple sample problem is considered: the structure optimization of artificial dual-phase steels composed of a soft phase (ferrite) and a hard phase (martensite). Examples of microstructures are shown in Fig. 3. The prepared dual-phase microstructures can be divided into four major categories: laminated microstructures, microstructures composed of rectangle-shaped martensite grains, microstructures composed of ellipse-shaped martensite grains, and random microstructures. The microstructure images are $128 \times 128$ pixels in size, and the total number of prepared microstructures is 3824. As an example of a target material property, the fracture strain, i.e., the elongation of the material at break, was selected, since fracture behavior is strongly related to the geometry of the two phases. As described in Methodology, the fracture strains for each category were calculated on the basis of the GTN fracture model [18,19]. Figure 4 illustrates the relationship between martensite volume fraction and fracture strain for each category. It shows that laminated microstructures have a relatively high fracture strain, and that microstructures with a lower martensite volume fraction (higher ferrite volume fraction) possess a higher fracture strain.
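Though the paper's own pipeline is not reproduced here, the key descriptor plotted in Fig. 4, the martensite volume fraction, is straightforward to compute from a binary microstructure image. A minimal sketch, under the assumption that pixels valued 1 mark martensite:

```python
import numpy as np

def martensite_volume_fraction(image: np.ndarray) -> float:
    """Fraction of pixels belonging to the hard phase (martensite).

    Assumes a binary 128 x 128 array where 1 marks martensite
    (black pixels in the figures) and 0 marks ferrite.
    """
    return float(image.mean())

# Example: a fully laminated microstructure with alternating
# 8-pixel-thick martensite and ferrite layers.
micro = np.zeros((128, 128), dtype=np.uint8)
for start in range(0, 128, 16):
    micro[start:start + 8, :] = 1  # martensite layers
print(martensite_volume_fraction(micro))  # 0.5
```
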

Examples of artificial dual-phase microstructures used for training. Black and white pixels correspond to the hard phase (martensite) and the soft phase (ferrite), respectively. The microstructure images are $128 \times 128$ pixels. The dataset can be divided into four major categories. (a) Laminated microstructures; this category contains only completely laminated microstructures. (b) Microstructures composed of rectangular martensite grains; this category includes partially laminated structures, such as those shown in the lower left panel. (c) Microstructures composed of elliptical martensite grains. (d) Random microstructures.

Relationship between martensite volume fraction and fracture strain, and examples of microstructures. (a) Plot showing correspondence between martensite volume fraction and fracture strain. (b) Examples of microstructures. Their martensite volume fractions and fracture strains are shown in the plot.

Microstructures generated by the machine learning framework trained on several datasets. (a) Examples of microstructures generated for several fracture strains by the network trained using the All dataset. (b) Each column corresponds to the microstructures obtained by the models trained using all microstructures, only the random microstructures, only the microstructures composed of ellipse-shaped martensite grains, or only the microstructures composed of rectangle-shaped martensite grains. Note that the Rectangle dataset includes only microstructures whose martensite volume fraction is between 20% and 30%. The given fracture strains are 0.1, 0.3, 0.7, and 0.9 for the All, Random, and Ellipse datasets, and 0.05, 0.1, 0.2, and 0.3 for the Rectangle dataset.

To show the applicability of our framework, we prepared several datasets: all microstructures (All), only random microstructures (Random), only microstructures composed of ellipse-shaped martensite grains (Ellipse), and only microstructures composed of rectangle-shaped martensite grains (Rectangle). We then trained the VQVAE and PixelCNN using these datasets. The Rectangle dataset is limited to microstructures whose martensite volume fraction is between 20% and 30%, to consider the case in which martensite grains are located separately from each other.

Figure 5a shows examples of microstructures generated for several fracture strains using the network trained on the All dataset. Figure 5b summarizes the trend of the microstructures obtained by the networks trained on the above datasets as the target fracture strain gradually increases. For the All, Random, and Ellipse datasets, martensite grains become smaller and thinner as the target fracture strain increases. Since a larger area fraction of the soft phase (ferrite) contributes to higher elongation, as seen in Fig. 4, this result is reasonable. In addition, it should be noted that the laminated structure corresponding to the highest fracture strain ($\text{FS}=0.9$) was generated only in the All case, in which laminated structures are included in the training dataset. Additionally, in the case of the controlled martensite volume fraction of the input microstructures (Rectangle), the martensite grains tend to be arranged more uniformly as the given fracture strain increases.

Generated microstructures and trend of martensite volume fraction. (a) Microstructures generated at a fixed tensile strength and several fracture strains. The tensile strength is set to 700 MPa. The given FSs are 0.1, 0.3, 0.4, 0.5, 0.7, and 0.9. (b) Trend of martensite volume fraction relative to the change in fracture strain. For each fracture strain, the martensite volume fractions of 3000 microstructures generated for that fracture strain and a fixed tensile strength (700 MPa) were calculated. The black lines and green triangles in the boxes denote median and mean values, respectively.

From these results, we can conclude that there are at least two different strategies for realizing a higher fracture strain: one is to decrease the size of martensite grains and arrange them uniformly; the other is to make a completely laminated composite structure [27]. The fact that laminated structures never appear unless laminated structures are provided in the training dataset implies that there exists an impenetrable wall for a simple optimization process, such as the gradient descent algorithm used to train neural networks, preventing it from discovering the robustness of laminated structures from the other structures.

Next, the tensile strength is given to PixelCNN as a second label, in addition to the fracture strain, to consider the balance between strength and fracture strain (ductility). In this case, all microstructure data are used for training. The microstructures are generated at a fixed tensile strength of 700 MPa and are shown in Fig. 6a. Laminated structures appear to be dominantly selected as the target fracture strain increases, and the trend of martensite grains becoming smaller and thinner is not seen when the tensile strength is fixed.

In addition, the martensite volume fractions were calculated for 3000 microstructures generated for several fracture strains, again at a fixed tensile strength of 700 MPa. The box plot of the martensite volume fraction relative to the change in fracture strain is shown in Fig. 6b. The martensite volume fraction decreases as the given fracture strain increases, and at the same time approaches a constant value. To realize higher ductility without decreasing the tensile strength, the shape of martensite grains approaches that of the laminated structures as the martensite volume fraction decreases. This result implies that laminated structures can achieve a higher tensile strength with a smaller martensite volume fraction. As a result, the laminated structures can be considered the optimized structures, with respect to the shape of martensite grains, for realizing higher ductility without decreasing strength. Laminated structures have indeed been reported to exhibit improved combinations of strength and ductility [27].

Correspondence between the target fracture strains given as inputs and the actual fracture strains. For each target fracture strain, 30 microstructures were generated. Then, fracture strains are calculated using the physical model [18,19]. (a) Plot of the relationship. (b) Box plot of the relationship. The black lines and green triangles in the boxes denote median and mean values, respectively. (c) Microstructures whose fracture strains are smaller than 20% of the target fracture strains. The values above the panels denote the given target fracture strains (left) and actual fracture strains (right).

To validate the effectiveness of the present framework, fracture strains are calculated using the physical model [18,19] for each microstructure obtained using the framework. In this case, the network trained by giving only the fracture strain as the target property is used. Figure 7a,b show the correspondence between the target fracture strains for generated microstructures and the actual calculated fracture strains; the coefficient of determination was 0.672. It is clear that our framework captures well the general trend of microstructures relative to the fracture strain. However, it should also be noted that there exist several microstructures whose actual fracture strains are far less than the target strains. Figure 7c shows typical microstructures whose fracture strains are smaller than 20% of the target fracture strains. The coefficient of determination computed without the data points corresponding to the microstructures shown in Fig. 7c was 0.76. All of them are partially incomplete laminated structures. This can be understood as follows. Although laminated structures have the potential to realize higher fracture strains, as shown in Fig. 4, this is true only when the microstructures are completely laminated. Even when one martensite layer has a tiny hole, the gap between martensite grains becomes a hot spot that induces much earlier rupture. Thus, the box plot shown in Fig. 7b is understood to show decreasing values as a result of an attempt to completely laminate the structures to realize the given target fracture strain. This indicates that the framework recognizes the structures shown in Fig. 7c as structurally close to completely laminated structures, even though they have far lower fracture strains than the completely laminated structures.
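For reference, the coefficient of determination quoted above can be computed in the standard way. A small sketch (our illustration, not the paper's code), treating target and realized fracture strains as arrays:

```python
import numpy as np

def r_squared(target, actual):
    """Coefficient of determination between target fracture strains
    (network inputs) and the strains realized by the physical model."""
    target = np.asarray(target, dtype=float)
    actual = np.asarray(actual, dtype=float)
    ss_res = np.sum((actual - target) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# A few incomplete laminates with far lower actual strains drag the
# score down, consistent with the reported drop from 0.76 to 0.672.
print(r_squared([0.1, 0.3, 0.5], [0.1, 0.3, 0.5]))  # 1.0
```
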

As a consequence, these results illustrate that our framework provides a powerful tool for the optimization of material microstructures in terms of target properties, or at least for capturing the trend of microstructures in terms of the change in target property in various cases.

The above results of generating material structures corresponding to the target fracture strain indicate that our framework captures the implicit correlation between the material microstructures and the fracture strain. However, it is generally difficult to interpret implicit knowledge captured by machine learning methods. For that reason, we cannot hastily conclude either that machine learning understands this problem and acquires meaningful knowledge for material design similarly to humans, or that it merely obtains physically meaningless, problem-specific knowledge. Usually, human researchers arrive at the background physics by noting a part or behavior that affects a target property during numerous trial-and-error experiments, a process that is generally time-consuming. Accordingly, probing the implicit knowledge obtained by machine learning methods could offer a more efficient way to extract general knowledge for material design. Thus, we discuss how to approach the physical background behind the implicit knowledge captured by our framework. In particular, we investigate whether the machine learning framework can identify a part of material microstructures that strongly affects a target property, in a manner similar to how human experts make predictions on the basis of their experience.

To identify a critical part of microstructures, we consider calculating a derivative of material microstructures with respect to the fracture strain. This is based on the assumption that human experts unconsciously consider the sensitivity of material microstructures to a slight change in the target property. Accordingly, the following variable $\Delta$ is defined as the derivative:

$$\begin{aligned} \Delta := \frac{\partial D\left(\mathbb{E}_{P(\theta \mid \epsilon_f, M_r)}[\theta]\right)}{\partial \epsilon_f}, \end{aligned}$$

(3)

where $\mathbb{E}_{P(\theta \mid \epsilon_f, M_r)}[\theta]$ is the expectation of a spatial arrangement of fundamental structures $\theta$ according to $P(\theta \mid \epsilon_f, M_r)$, which is the probability distribution captured by PixelCNN. Here, $M_r$ and $\epsilon_f$ are the reference microstructure under consideration and the calculated fracture strain for that microstructure, respectively. In other words, $\mathbb{E}_{P(\theta \mid \epsilon_f, M_r)}[\theta]$ is a deterministic function of $\epsilon_f$ and $M_r$. In addition, $D$ is the CNN-based deterministic decoder function; hence, $\Delta$ has the same pixel size as the input microstructure images.

If the machine learning framework correctly captures the physical correlation between the geometry of the material microstructures and the fracture strain, $\Delta$ is expected to correspond to the areas in $M_r$ that strongly affect the determination of the fracture strain, even without the physical mechanism itself being given. For numerical calculation, $\Delta$ is approximated as

$$\begin{aligned} \Delta \approx \frac{D\left(\mathbb{E}_{P(\theta \mid \epsilon_f+\Delta \epsilon_f, M_r)}[\theta]\right)-D\left(\mathbb{E}_{P(\theta \mid \epsilon_f, M_r)}[\theta]\right)}{\Delta \epsilon_f}, \end{aligned}$$

(4)

where $\Delta \epsilon_f$ is the gap in fracture strain, set to 0.01 in this paper. Because it is difficult to quantitatively compare the distribution of this variable with the critical microstructure distributions obtained from the physical model, in this paper we discuss only the location of crucial parts. Thus, the denominator $\Delta \epsilon_f$ is omitted in the calculation of $\Delta$ in the rest of this paper.
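A numerical sketch of Eq. (4), with `decode_expected` standing in for the decoded expectation $D(\mathbb{E}_{P(\theta \mid \epsilon_f, M_r)}[\theta])$; the trained networks are not reproduced here, so a smooth placeholder is used purely for illustration:

```python
import numpy as np

def decode_expected(eps_f: float) -> np.ndarray:
    """Placeholder for D(E[theta]): maps a fracture strain to a
    128 x 128 image. A real implementation would pass PixelCNN's
    expected latent arrangement through the VQVAE decoder."""
    grid = np.linspace(0.0, 1.0, 128)
    return np.outer(np.ones(128), grid) * eps_f

def sensitivity_map(eps_f: float, d_eps: float = 0.01) -> np.ndarray:
    """Forward difference of the decoded image w.r.t. fracture strain.
    The 1/d_eps factor is dropped, as in the text, since only the
    relative values within each colormap are compared."""
    return decode_expected(eps_f + d_eps) - decode_expected(eps_f)

delta = np.abs(sensitivity_map(0.3))  # per-pixel |Delta|, as visualized in Fig. 8
```
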

Comparison of derivatives of microstructures with respect to the fracture strain obtained using the machine learning framework with the distributions of void volume fractions calculated on the basis of the physical model. (a)-(d) Comparisons for several microstructures. The left, middle, and right columns correspond to the reference microstructures, the void distributions obtained using the physical model, and the derivatives obtained by the machine learning framework, respectively.

Figure 8 compares the parts of microstructures critically affecting the determination of the fracture strain as obtained by the physical model and by our machine learning framework. For the machine learning results, the absolute values of $\Delta$ defined in Eq. (3) are shown as colormaps for each pixel. On the other hand, because fracture behavior is formulated as damage and void-growth processes in the physical model, the void distribution in a critical state directly shows the critical points for the determination of fracture strain. Thus, for the physical model, the calculated void distribution in a critical state is shown in Fig. 8. The details of the physical model and the experiment for the determination of some parameters are given in Methodology. For ease of comparison, the ranges of visualized values are changed for each image, while the relative relationships among the values of each colormap are kept. We therefore compare the results qualitatively, in terms of the distribution of areas having relatively high values, in the next paragraph.

Figure 8a,b illustrate the crucial parts of microstructures composed of relatively long and narrow rectangle-shaped martensite grains. We can see an acceptable agreement between the results of the physical and machine learning methods in terms of the overall distribution of crucial areas, shown in red in the colormaps of Fig. 8. In addition, Fig. 8c,d show the parts that critically influence the fracture behavior in microstructures composed of similarly shaped martensite grains. An important difference between them is that in Fig. 8c the rectangle-shaped martensite grains are irregularly arranged and some martensite grains are close to each other, which might critically affect the fracture behavior, whereas in Fig. 8d circular martensite grains are almost regularly arranged. For Fig. 8c, the machine learning framework seems to capture the crucial parts predicted by the physical model. As mentioned above, the distributions seem to be dominantly affected by martensite grains that are close to each other; in other words, short-range interactions among a small number of martensite grains are dominant in determining the fracture strain in this case. Also, in Fig. 8d, both the physical model and the machine learning framework predict that the crucial parts are uniformly distributed in square areas.

On the other hand, the physical model also predicts the influence of long-range interactions among martensite grains on fracture behavior, which can be seen in Fig. 8c,d as a bandlike distribution. However, the bandlike distribution resulting from the long-range interactions does not seem to be captured by the machine learning framework, owing to a characteristic of PixelCNN. Because a global stochastic relationship among the fundamental elements is factorized as a product of stochastic local interactions in PixelCNN, as defined in Eq. (1), the extent of interaction decreases exponentially with distance. Therefore, long-range interactions are difficult for PixelCNN to capture. A discussion of this limitation of PixelCNN and a remedy for it can be found in [28]. Figure 9 illustrates a sample case showing that relatively long-range interactions are important for the determination of fracture strain. In this case, determining the part that critically affects the fracture behavior seems to be difficult using the framework based on PixelCNN.
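The locality argument can be made concrete with a toy autoregressive factorization in the spirit of Eq. (1); this is our illustration, with `cond_log_prob` standing in for the network's learned conditional:

```python
import numpy as np

def joint_log_prob(tokens, cond_log_prob, context=4):
    """Joint log-probability of a token sequence factorized into
    conditionals, each seeing only a bounded window of earlier
    tokens -- the reason distant pixels exert vanishing influence
    on one another in a PixelCNN-style model."""
    total = 0.0
    for i, t in enumerate(tokens):
        total += cond_log_prob(t, tokens[max(0, i - context):i])
    return total

# Dummy conditional: uniform over two token values, ignoring context.
uniform = lambda t, ctx: np.log(0.5)
lp = joint_log_prob([0, 1, 1, 0], uniform)  # 4 * log(0.5)
```
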

Sample case showing that relatively long-range interactions among martensite grains are important for the determination of fracture strain.

For incompletely laminated structures such as that shown in Fig. 8a, the martensite layers are expanded to achieve a higher fracture strain, even though increasing the martensite volume fraction basically contributes to a decrease in the fracture strain, as shown in Fig. 4. Similarly, we can see in Fig. 8c that the martensite grains tend to expand to fill the hot spots between them. Additionally, as mentioned above, even though completely laminated structures are structurally similar to incompletely laminated structures, the fracture strains of completely laminated structures are much higher. Thus, eliminating tiny holes that could cause hot spots and reaching completely laminated structures markedly improves fracture strain. Altogether, these results imply that the framework recognizes the potential of laminated structures to achieve a higher fracture strain, much as human researchers reach an intuition about completely laminated structures by considering how to reduce the occurrence of hot spots.

From the above results, we can conclude that our framework can identify the areas that critically affect a target property, without human prior knowledge, when the local topology of microstructures is dominant for that property. This implies that machine learning designed consistently with metallurgists' process of thinking can approach the background or meaning of the implicitly extracted knowledge in a way similar to how humans acquire empirical knowledge.

Continue reading here:
Identification of microstructures critically affecting material properties using machine learning framework based on metallurgists' thinking process |...

This Smart Doorbell Responds to Meowing Cats Using Machine Learning and IoT – Hackster.io

Those who own an outdoor cat, or even several, might run into the occasional problem of having to let them back in. Finding it annoying to constantly monitor for when his cat wanted to come inside the house, GitHub user gamename opted for a more automated system.

The solution gamename came up with involves listening to ambient sounds with a single Raspberry Pi and an attached USB microphone. Whenever the locally-running machine learning model detects a meow, it sends a message to an AWS service over the internet where it can then trigger a text to be sent. This has the advantage of limiting false events while simultaneously providing an easy way for the cat to be recognized at the door.

This project started by installing the AWS command-line interface (CLI) onto the Raspberry Pi 4 and then signing in with an account. From here, gamename registered a new IoT device, downloaded the resulting configuration files, and ran the setup script. After quickly updating some security settings, a new function was created that waits for new messages coming from the MQTT service and causes a text message to be sent with the help of the SNS service.

With this collection of services configured in the AWS project, gamename moved on to testing whether messages are sent at the right time. His test script simply emulates a positive result by sending the certificates, key, topic, and message to the endpoint, after which the user can watch the text appear on their phone.

The Raspberry Pi and microSD card were both placed into an off-the-shelf chassis, which sits just inside the house's entrance. The microphone was then connected with the help of two RJ45-to-USB cables, which allow it to sit outside in a waterproof housing up to 150 feet away.

Running on the Pi is a custom bash script that starts every time the board boots up, and its role is to launch the Python program. This causes the Raspberry Pi to read samples from the microphone and pass them to a TensorFlow audio classifier, which attempts to recognize the sound clip. If the primary noise is a cat, then the AWS API is called in order to publish the message to the MQTT topic. More information about this project can be found here in gamename's GitHub repository.
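gamename's actual code lives in the linked repository; a hedged sketch of the detection-to-publish step might look like the following, where the class names, threshold, and topic are our assumptions rather than values from the project:

```python
import json

# Assumed cat-like labels from a TensorFlow audio classifier and an
# assumed confidence threshold -- the real values live in the repo.
CAT_CLASSES = {"Cat", "Meow", "Purr"}
THRESHOLD = 0.6
TOPIC = "cat-doorbell/detections"  # hypothetical MQTT topic name

def maybe_publish(scores: dict, publish) -> bool:
    """Publish an MQTT message when the top-scoring class is cat-like.

    `scores` maps class name -> probability from the audio classifier;
    `publish(topic, payload)` sends the message (e.g. an AWS IoT MQTT
    client's publish method). Returns True when a detection was sent."""
    top = max(scores, key=scores.get)
    if top in CAT_CLASSES and scores[top] >= THRESHOLD:
        publish(TOPIC, json.dumps({"event": "meow", "score": scores[top]}))
        return True
    return False

# Exercise the logic with a stub publisher instead of a live connection.
sent = []
maybe_publish({"Meow": 0.9, "Speech": 0.1}, lambda t, p: sent.append((t, p)))
```

Keeping the publish callable injectable makes the decision logic testable without a network connection, mirroring how gamename's test script emulates a positive result.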

Continue reading here:
This Smart Doorbell Responds to Meowing Cats Using Machine Learning and IoT - Hackster.io

Tackling the reproducibility and driving machine learning with digitisation – Scientific Computing World

Dr Birthe Nielsen discusses the role of the Methods Database in supporting life sciences research by digitising methods data across different life science functions.

Reproducibility of experiment findings and data interoperability are two of the major barriers facing life sciences R&D today. Independently verifying findings by re-creating experiments and generating the same results is fundamental to progressing research to the next stage in its lifecycle, be it advancing a drug to clinical development, or a product to market. Yet, in the field of biology alone, one study found that 70 per cent of researchers are unable to reproduce the findings of other scientists, and 60 per cent are unable to reproduce their own findings.

This causes delays to the R&D process throughout the life sciences ecosystem. For example, biopharmaceutical companies often use external Contract Research Organisations (CROs) to conduct clinical studies. Without a centralised repository to provide consistent access, analytical methods are often shared with CROs via email or even physical documents, not in a standard format and with inconsistent terminology. This leads to unnecessary variability and several versions of the same analytical protocol, making it very challenging for a CRO to re-establish and revalidate methods without a labour-intensive process that is open to human interpretation and thus error.

To tackle issues like this, the Pistoia Alliance launched the Methods Hub project. The project aims to overcome the issue of reproducibility by digitising methods data across different life science functions, and ensuring data is FAIR (Findable, Accessible, Interoperable, Reusable) from the point of creation. This will enable seamless and secure sharing within the R&D ecosystem, reduce experiment duplication, standardise formatting to make data machine-readable, and increase reproducibility and efficiency. Robust data management is also the building block for machine learning and is the stepping-stone to realising the benefits of AI.

Digitisation of paper-based processes increases the efficiency and quality of methods data management. But it goes beyond manually keying in method parameters on a computer or using an Electronic Lab Notebook: a digital and automated workflow increases efficiency, instrument usage and productivity. Applying shared data standards ensures consistency and interoperability, in addition to fast and secure transfer of information between stakeholders.

One area that organisations need to address to comply with FAIR principles, and a key area in which the Methods Hub project helps, is how analytical methods are shared. This includes replacing free-text data capture with a common data model and standardised ontologies. For example, in a High-Performance Liquid Chromatography (HPLC) experiment, rather than manually typing out the analytical parameters (pump flow, injection volume, column temperature, etc.), the scientist simply downloads a method, which automatically populates the execution parameters in any given Chromatography Data System (CDS). This not only saves time during data entry; the common format also eliminates room for human interpretation or error.
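To make the idea tangible, a vendor-neutral method record might look like the following sketch; the field names are illustrative assumptions, not the Methods Hub schema:

```python
import json

# Hypothetical vendor-neutral HPLC method record. A CDS adapter would
# map these canonical keys onto its own execution settings, instead of
# a scientist re-typing free-text parameters by hand.
method_json = """
{
  "technique": "HPLC-UV",
  "parameters": {
    "pump_flow_ml_min": 1.0,
    "injection_volume_ul": 10,
    "column_temperature_c": 40
  }
}
"""

method = json.loads(method_json)
flow = method["parameters"]["pump_flow_ml_min"]
print(flow)  # 1.0
```

Because the keys are standardised rather than free text, two different CDS vendors can populate their execution parameters from the same record without human interpretation.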

Additionally, creating a centralised repository like the Methods Hub in a vendor-neutral format is a step towards greater cyber-resiliency in the industry. When information is stored locally on a PC or an ELN and is not backed up, a single cyberattack can wipe it out instantly. Creating shared spaces for these notes via the cloud protects data and ensures it can be easily restored.

A proof of concept (PoC) for the Methods Hub project was recently completed to demonstrate the value of methods digitisation. The PoC involved the digital transfer, via the cloud, of analytical HPLC methods, proving that it is possible to move analytical methods securely between two different companies and CDS vendors with ease. It has been successfully tested in labs at Merck and GSK, where HPLC-UV information was effectively transferred between different systems. The PoC delivered a series of critical improvements to methods transfer, eliminating manual keying of data and reducing risk, steps, and errors, while increasing overall flexibility and interoperability.

The Alliance project team is now working to extend the platform's functionality to connect analytical methods with results data, which would be an industry first. The team will also be adding support for columns and additional hardware, and for other analytical techniques such as mass spectrometry and nuclear magnetic resonance (NMR) spectroscopy. It also plans to identify new use cases and further develop the cloud platform that enables secure methods transfer.

If industry-wide data standards and approaches to data management are to be agreed on and implemented successfully, organisations must collaborate. The Alliance recognises methods data management is a big challenge for the industry, and the aim is to make Methods Hub an integral part of the system infrastructure in every analytical lab.

Tackling issues such as the digitisation of methods data doesn't just benefit individual companies; it will have a knock-on effect for the whole life sciences industry. Introducing shared standards accelerates R&D, improves quality, and reduces the cost and time burden on scientists and organisations. Ultimately this ensures that new therapies and breakthroughs reach patients sooner. We are keen to welcome new contributors to the project, so we can continue discussing common barriers to successful data management and work together to develop new solutions.

Dr Birthe Nielsen is the Pistoia Alliance Methods Database project manager

Read more:
Tackling the reproducibility and driving machine learning with digitisation - Scientific Computing World

Artificial intelligence was supposed to transform health care. It hasn’t. – POLITICO

"Companies come in promising the world and often don't deliver," said Bob Wachter, head of the department of medicine at the University of California, San Francisco. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Administrators say algorithms (the software that processes data) from outside companies don't always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it's slow going. Research based on job postings shows health care behind every industry except construction in adopting AI.

The Food and Drug Administration has taken steps to develop a model for evaluating AI, but it is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology's detrimental aspects, such as bias that threatens to exacerbate health care inequities.

"Sometimes there's an assumption that AI is working, and it's just a matter of adopting it, which is not necessarily true," said Florenta Teodoridis, a professor at the University of Southern California's business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.

Despite the obstacles, the tech industry is still enthusiastic about AIs potential to transform health care.

"The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026," Hinton told POLITICO via email. He said he never suggested that we should get rid of radiologists, but that we should let AI read scans for them.

If he's right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.

"I see us moving as a medical community to a better understanding of what it can and cannot do," said Lara Jehi, chief research information officer for the Cleveland Clinic. "It is not going to replace radiologists, and it shouldn't replace radiologists."

Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.

An algorithm can do the job in an hour, said John D. Halamka, president of Mayo Clinic Platform: "We've taken 80 percent of the human effort out of it." The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.

NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook's Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI's ability to downsize the amount of data doctors need to review.

"When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Bob Wachter, head of the department of medicine at the University of California, San Francisco

Covid has accelerated AI's development. Throughout the pandemic, health providers and researchers shared data on the disease and anonymized patient data to crowdsource treatments.

Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.

"The amount of knowledge that's been obtained and the amount of progress has just been really exciting," said Peter Lee, corporate vice president of research and incubations at Microsoft.

There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.

"I'm calling it our care traffic control," said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, death from sepsis is declining.

The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.

Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But that's not as easy for smaller players.

Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.

Over the last year, an independent study on a widely used sepsis detection algorithm from EHR giant Epic showed poor results in real-world settings, suggesting where and how hospitals used the AI mattered.

This quandary has led top health systems to build out their own engineering teams and develop AI in-house.

That could create complications down the road. Unless health systems sell their technology, it's unlikely to undergo the type of vetting that commercial software would. That could allow flaws to go unfixed for longer than they might otherwise. It's not just that the health systems are implementing AI while no one's looking. It's also that the stakeholders in artificial intelligence, in health care, technology and government, haven't agreed upon standards.

A lack of quality data, which gives algorithms material to work with, is another significant barrier to rolling out the technology in health care settings.


Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient's medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.

"We have pieces and parts, but not the whole," said Aneesh Chopra, who served as the government's chief technology officer under former President Barack Obama and is now president of data company CareJourney.

While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.

Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.

Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.

That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.

But working with health care data is also more difficult than in other sectors because it is highly individualized.

"We found that even internally across our different locations and sites, these models don't have a uniform performance," said Jehi of the Cleveland Clinic.

And the stakes are high if things go wrong. "The number of paths that patients can take are very different than the number of paths that I can take when I'm on Amazon trying to order a product," Wachter said.

Health experts also worry that algorithms could amplify bias and health care disparities.

For example, a 2019 study found that a hospital algorithm more often pushed white patients toward programs aiming to provide better care than Black patients, even while controlling for the level of sickness.

Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of good machine learning practices, oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.

The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.

Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.

But in-house AI isn't subject to FDA oversight. Bakul Patel, former head of the FDA's Center for Devices and Radiological Health and now Google's senior director for global digital health strategy and regulatory affairs, said that the FDA is thinking about how it might regulate noncommercial artificial intelligence inside of health systems, but he adds, "there's no easy answer."

The FDA has to thread the needle between taking enough action to mitigate flaws in algorithms and not stifling AI's potential, he said.

Some argue that public-private standards for AI would help advance the technology. Groups, including the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.

But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.

Original post:
Artificial intelligence was supposed to transform health care. It hasn't. - POLITICO

Are You Making These Deadly Mistakes With Your AI Projects? – Forbes

Since data is at the heart of AI, it should come as no surprise that AI and ML systems need enough good-quality data to learn. In general, a large volume of good-quality data is needed, especially for supervised learning approaches, in order to properly train the AI or ML system. The exact amount of data needed may vary depending on which pattern of AI you're implementing, the algorithm you're using, and other factors such as in-house versus third-party data. For example, neural nets need a lot of data to be trained, while decision trees or Bayesian classifiers don't need as much data to still produce high-quality results.

So you might think more is better, right? Well, think again. Organizations with lots of data, even exabytes, are realizing that having more data is not, as they might expect, the solution to their problems. Indeed, more data, more problems. The more data you have, the more data you need to clean and prepare, the more data you need to label and manage, and the more data you need to secure, protect, and check for bias. Small projects can rapidly turn into very large projects when you start multiplying the amount of data. In fact, many times, lots of data kills projects.

Clearly the missing step between identifying a business problem and getting the data squared away to solve that problem is determining which data you need and how much of it you really need. You need enough, but not too much. "Goldilocks data," people often call it: not too much, not too little, but just right. Unfortunately, far too often, organizations jump into AI projects without first developing an understanding of their data. Questions organizations need to answer include: where is the data, how much of it do they already have, what condition is it in, which features of that data are most important, will internal or external data be used, what are the data access challenges, and does the existing data need to be augmented? Without these questions answered, AI projects can quickly die, drowning in a sea of data.
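As a concrete sketch of that data-understanding step, the snippet below shows a hypothetical pre-project audit: before sizing an AI project, summarize how much data you have and what condition it is in. The `audit` helper, the record fields, and all values are invented for illustration, not taken from any real system.

```python
# Hypothetical pre-project data audit: report volume, per-field
# missingness, and duplication for a list of record dicts.
def audit(records, required_fields):
    total = len(records)
    missing = {
        field: sum(1 for r in records if r.get(field) is None)
        for field in required_fields
    }
    unique = {tuple(sorted(r.items())) for r in records}
    return {
        "rows": total,
        "missing_per_field": missing,
        "duplicate_rows": total - len(unique),
    }

records = [
    {"patient_id": 1, "age": 54, "diagnosis": "E11"},
    {"patient_id": 2, "age": None, "diagnosis": "E11"},
    {"patient_id": 1, "age": 54, "diagnosis": "E11"},  # exact duplicate
]
report = audit(records, ["patient_id", "age", "diagnosis"])
print(report)  # rows: 3, age missing in 1 row, 1 duplicate row
```

Even a rough report like this answers several of the questions above (how much data, what condition) before any modeling begins.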

Getting a better understanding of data

In order to understand just how much data you need, you first need to understand how and where data fits into the structure of AI projects. One visual way of understanding the increasing levels of value we get from data is the DIKUW pyramid (sometimes also referred to as the DIKW pyramid) which shows how a foundation of data helps build greater value with Information, Knowledge, Understanding and Wisdom.

DIKW pyramid

With a solid foundation of data, you can gain additional insight at the information layer, which helps you answer basic questions about that data. Once you have made basic connections within the data to gain informational insight, you can find patterns in that information; this is the knowledge layer, where you see how various pieces of information are connected. Building on the knowledge layer, organizations gain even more value at the understanding layer, which explains why those patterns are happening. Finally, the wisdom layer is where you gain the most value, by providing insight into the cause and effect of the decisions informed by the layers below.
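A minimal sketch of the pyramid's lower layers, using invented sensor readings: the raw values are the data layer, a summary statistic is the information layer, and a simple trend check stands in for a knowledge-layer pattern.

```python
# Toy walk up the lower DIKUW layers (all values invented for illustration).
readings = [21.0, 21.5, 22.1, 22.8, 23.4, 24.0]  # data: raw sensor values

average = sum(readings) / len(readings)          # information: a basic summary

# knowledge: a pattern connecting the data points (a monotone rise)
rising = all(b > a for a, b in zip(readings, readings[1:]))

print(f"average={average:.1f} rising={rising}")
```

Understanding (why are the readings rising?) and wisdom (what should be done about it?) are exactly the layers that no amount of pattern-finding in a sketch like this provides.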

This latest wave of AI focuses most on the knowledge layer, since machine learning provides the insight on top of the information layer to identify patterns. Unfortunately, machine learning reaches its limits in the understanding layer, since finding patterns isn't sufficient for reasoning. We have machine learning, but not the machine reasoning required to understand why the patterns are happening. You can see this limitation in effect any time you interact with a chatbot. While machine learning-enabled NLP is really good at understanding your speech and deriving your intent, it runs into limitations when trying to understand and reason. For example, if you ask a voice assistant whether you should wear a raincoat tomorrow, it doesn't understand that you're asking about the weather. A human has to provide that insight to the machine, because the voice assistant doesn't know what rain actually is.
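To make the pattern-versus-reasoning gap concrete, here is a toy keyword-based intent matcher (the intents and keyword lists are invented for illustration): it can tag the raincoat question as a "weather" intent purely by pattern matching, but nothing in it models what rain is or whether a raincoat is warranted.

```python
# Toy intent matcher: pattern matching labels the utterance, nothing more.
INTENT_KEYWORDS = {
    "weather": {"rain", "raincoat", "umbrella", "forecast"},
    "music": {"play", "song", "album"},
}

def classify(utterance):
    # Normalize and match surface keywords; no reasoning about meaning.
    words = set(utterance.lower().replace("?", "").split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return "unknown"

print(classify("Should I wear a raincoat tomorrow?"))  # -> "weather"
```

Getting from that label to an actual recommendation (check the forecast, decide whether rain is likely, connect rain to raincoats) is the reasoning a human engineer still has to wire in.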

Avoiding Failure by Staying Data Aware

Big data has taught us how to deal with large quantities of data: not just how it's stored, but how to process, manipulate, and analyze all that data. Machine learning has added more value by being able to deal with the wide range of unstructured, semi-structured or structured data collected by organizations. Indeed, this latest wave of AI is really the big data-powered analytics wave.

But it's exactly for this reason that some organizations are failing so hard at AI. Rather than run AI projects with a data-centric perspective, they are focusing on the functional aspects. To get a handle on their AI projects and avoid deadly mistakes, organizations need a better understanding not only of AI and machine learning but also of the "Vs" of big data. It's not just about how much data you have, but also the nature of that data. Some of those Vs of big data include volume, velocity, variety, and veracity.

Organizations that are successful with AI are primarily those that are already successful with big data, often with decades of experience managing big data projects. The ones that are seeing their AI projects die are the ones coming at their AI problems with an application-development mindset.

Too Much of the Wrong Data, and Not Enough of the Right Data is Killing AI Projects

While AI projects may start off on the right foot, a lack of the necessary data, and a failure to understand and then solve real problems, are killing them. Organizations are powering forward without a real understanding of the data they need or the quality of that data. This poses real challenges.

One of the reasons organizations are making this data mistake is that they are running their AI projects without any real approach to doing so, other than using Agile or app-dev methods. However, successful organizations have realized that data-centric approaches treat data understanding as one of the first phases of the project. The CRISP-DM methodology, which has been around for over two decades, specifies data understanding as the very next thing to do once you determine your business needs. Building on CRISP-DM and adding Agile methods, the Cognitive Project Management for AI (CPMAI) methodology requires data understanding in its Phase II. Other successful approaches likewise require data understanding early in the project because, after all, AI projects are data projects. And how can you build a successful project on a foundation of data without running it with an understanding of that data? That's surely a deadly mistake you want to avoid.
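The phase ordering CRISP-DM prescribes can be written down as a simple sequence; the point here is that data understanding is the second phase, immediately after business understanding.

```python
# The six CRISP-DM phases, in their prescribed order.
CRISP_DM_PHASES = [
    "Business Understanding",
    "Data Understanding",   # the very next step after business needs
    "Data Preparation",
    "Modeling",
    "Evaluation",
    "Deployment",
]

# Zero-based position of data understanding: second phase overall.
print(CRISP_DM_PHASES.index("Data Understanding"))  # -> 1
```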
