Sunscreen made in Zimbabwe for people with albinism – The Herald

The Herald

Sifelani Tsiko Innovations Editor

A University of Zimbabwe chartered industrial chemist and pharmaceutical nanotechnology expert has developed a low-cost sun cream that not only protects the skin of people with albinism from the sun's radiation but also slows damage and infection of their skin.

Dr Joey Chifamba, who won a prize for his innovations at the just-ended University of Zimbabwe Research, Innovation and Industrialisation Week, told The Herald that his groundbreaking product sought to help people living with albinism, who suffer from actinic (solar-induced) skin damage ranging from freckles and sunburn to various skin cancers, which shortens their life spans considerably.

"No product has ever been developed to protect albinistic persons from actinic damage. The sunscreens that are given to them are designed for white-skinned people and do not take into consideration specific conditions and differences found on albinistic skins," he said.

"This makes them not very effective and not very suitable, especially for all-day, everyday wear, since albinism is a lifelong condition."

Dr Chifamba developed a product range with about 10 different products including lotions, creams, wound healing washes, lip balms and hair protective products.

All the products were made using 5th-generation emerging technologies, including nanotechnology and biotechnology. The products incorporate zinc and titanium from natural sources and indigenous trees, which makes them particularly suitable for people with albinism in tropical areas.

"We employ nanosized metallic oxide sunblocks conjugated with nano-optimised indigenous herbs with antibacterial, antifungal and wound-healing effects to create aesthetically pleasing cosmeceutical products for everyday, all-day use by albinistic persons," said the industrial chemist and pharmaceutical nanotechnology expert.

"In our innovation we have developed ground-breaking cosmeceuticals which are not only sunscreens but complete actinic damage-retarding treatments that consider albinistic skin differences and deal with various symptoms of actinic damage, including wrinkles, premature aging, inflammation, and bacterial and fungal infections."

The products, he said, were much more affordable and safer.

Dr Chifamba said the products, which were developed in consultation with the Albino Charity Organisation of Zimbabwe and other albino welfare groups, were already available to people living with albinism who are registered with the trust.

The UZ Innovation Hub was now supporting Dr Chifamba to further develop his research and innovations.

People with albinism have pale skin due to a pigment disorder that leaves the skin with little protection from the sun's radiation.

When exposed to sunlight, the skin of an albino does not acquire a tan. Instead, it remains light and there is a greater risk of skin cancer.

In Zimbabwe and most other African countries, this is an acute problem.

Most sunscreen products that are available in Zimbabwe are imported from South Africa and are expensive.

Retailers sell the lotion at high prices, ranging from US$22 to US$35 for a 250-millilitre bottle of sunscreen lotion.

This is much too expensive for most albinos, for whom a bottle lasts only a few weeks with intensive use.

Even with donations to albino welfare organisations, the lotions are still not widely accessible to the many Zimbabweans living with albinism, who number an estimated 70 000.

Albinos in Zimbabwe and on the continent still face great difficulties because of the high intensity of the sun's radiation there.

In addition, albinos in most African countries suffer from prejudice and are often rejected by their families.

In other more extreme cases, many have been killed and their bodies dismembered for ritual purposes.

In some parts of Africa, some believe albinos possess magical powers.

Albino rights activists say there is a need to improve access to skin care products for this population and promote policies that could make sunscreen easier to get and more affordable.

For years, Albino rights organisations in Zimbabwe have been lobbying the government to reduce the price of sunscreen lotions and even make them free in health facilities.

Read more here:

Sunscreen made in Zimbabwe for people with albinism - The Herald

Researchers 3D print high-performance nanostructured alloy that’s both ultrastrong and ductile – Nanowerk

Aug 03, 2022 (Nanowerk News) Researchers at the University of Massachusetts Amherst and the Georgia Institute of Technology have 3D printed a dual-phase, nanostructured high-entropy alloy that exceeds the strength and ductility of other state-of-the-art additively manufactured materials, which could lead to higher-performance components for applications in aerospace, medicine, energy and transportation.

The work, led by Wen Chen, assistant professor of mechanical and industrial engineering at UMass, and Ting Zhu, professor of mechanical engineering at Georgia Tech, is published in the journal Nature ("Strong yet ductile nanolamellar high-entropy alloys by additive manufacturing").

Wen Chen, assistant professor of mechanical and industrial engineering at UMass Amherst, stands in front of images of 3D printed high-entropy alloy components (heatsink fan and octet lattice, left) and a cross-sectional electron backscatter diffraction inverse-pole-figure map demonstrating a randomly oriented nanolamellar microstructure (right). (Image: UMass Amherst)

Over the past 15 years, high-entropy alloys (HEAs) have become increasingly popular as a new paradigm in materials science. Comprising five or more elements in near-equal proportions, they offer the ability to create a near-infinite number of unique combinations for alloy design. Traditional alloys, such as brass, carbon steel, stainless steel and bronze, contain a primary element combined with one or more trace elements.

Additive manufacturing, also called 3D printing, has recently emerged as a powerful approach to material development. Laser-based 3D printing can produce large temperature gradients and high cooling rates that are not readily accessible by conventional routes. "However, the potential of harnessing the combined benefits of additive manufacturing and HEAs for achieving novel properties remains largely unexplored," says Zhu.

Chen and his team in the Multiscale Materials and Manufacturing Laboratory combined an HEA with a state-of-the-art 3D printing technique called laser powder bed fusion to develop new materials with unprecedented properties. Because the process causes materials to melt and solidify very rapidly compared to traditional metallurgy, "you get a very different microstructure that is far from equilibrium" on the components created, Chen says.

This microstructure looks like a net and is made of alternating layers known as face-centered cubic (FCC) and body-centered cubic (BCC) nanolamellar structures embedded in microscale eutectic colonies with random orientations. The hierarchical nanostructured HEA enables cooperative deformation of the two phases.

"This unusual microstructure's atomic rearrangement gives rise to ultrahigh strength as well as enhanced ductility, which is uncommon, because usually strong materials tend to be brittle," Chen says. "Compared to conventional metal casting, we got almost triple the strength and not only didn't lose ductility, but actually increased it simultaneously," he says. "For many applications, a combination of strength and ductility is key. Our findings are original and exciting for materials science and engineering alike."

"The ability to produce strong and ductile HEAs means that these 3D printed materials are more robust in resisting applied deformation, which is important for lightweight structural design for enhanced mechanical efficiency and energy saving," says Jie Ren, Chen's Ph.D. student and first author of the paper.

Zhu's group at Georgia Tech led the computational modeling for the research. He developed dual-phase crystal plasticity computational models to understand the mechanistic roles played by both the FCC and BCC nanolamellae and how they work together to give the material added strength and ductility.

"Our simulation results show the surprisingly high strength yet high hardening responses in the BCC nanolamellae, which are pivotal for achieving the outstanding strength-ductility synergy of our alloy. This mechanistic understanding provides an important basis for guiding the future development of 3D printed HEAs with exceptional mechanical properties," Zhu says.

In addition, 3D printing offers a powerful tool to make geometrically complex and customized parts. In the future, harnessing 3D printing technology and the vast alloy design space of HEAs opens ample opportunities for the direct production of end-use components for biomedical and aerospace applications.

Additional research partners on the paper include Texas A&M University, the University of California Los Angeles, Rice University, and Oak Ridge and Lawrence Livermore national laboratories.

Continued here:

Researchers 3D print high-performance nanostructured alloy that's both ultrastrong and ductile - Nanowerk

This Curvy Quantum Physics Discovery Could Revolutionize Our Understanding of Reality – The Debrief

A recent discovery in the field of quantum physics by researchers at Purdue University has opened the doorway to a whole new way of looking at our physical reality.

According to the researchers involved, an all-new technique that can allow the creation of curved surfaces that behave like flat ones may completely revolutionize our understanding of curvature and distance, as well as our knowledge of quantum physics.

As a fundamental principle, if one wants to create a curved surface, even at the microscopic level, one must start with a flat surface and bend it. Although this may seem self-evident, such principles are critical guidelines for researchers who work in quantum mechanics, information processing, astrophysics, and a whole host of scientific disciplines.

However, according to the Purdue research team behind this latest discovery, they have found a way to break that law, resulting in a curved space that behaves at a quantum level like a flat one. The discovery is, in short, something that appears to break the sorts of fundamental rules many physicists take for granted.

"Our work may revolutionize the general public's understanding of curvatures and distance," said Qi Zhou, a Professor of Physics and Astronomy who is also a co-author of the paper announcing the research team's potentially groundbreaking results. "It has also answered long-standing questions in non-Hermitian quantum mechanics by bridging non-Hermitian physics and curved spaces."

Published in the journal Nature Communications, the paper and its authors explain that the discovery involves the construction of curved surfaces that behave like flat ones, particularly at the quantum level, resulting in a system they describe as non-Hermitian.

For example, quantum particles on a theoretical lattice can hop from one location to another instantaneously. If the chances of that particle hopping either left or right are equal, then that system is referred to as Hermitian. However, if the odds are unequal, then the system is non-Hermitian.
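The distinction can be made concrete with a toy model. This is a minimal sketch, not the Purdue team's actual system: it assumes a one-dimensional lattice whose Hamiltonian matrix carries separate left- and right-hopping amplitudes (in the style of the Hatano-Nelson model), and checks Hermiticity by comparing the matrix to its conjugate transpose.

```python
import numpy as np

def hopping_hamiltonian(n_sites, t_right, t_left):
    """1D lattice Hamiltonian with separate amplitudes for
    hopping to the right (i -> i+1) and to the left (i+1 -> i)."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        H[i + 1, i] = t_right  # amplitude to hop rightward
        H[i, i + 1] = t_left   # amplitude to hop leftward
    return H

def is_hermitian(H):
    """A matrix is Hermitian iff it equals its conjugate transpose."""
    return np.allclose(H, H.conj().T)

# Equal hopping both ways: Hermitian
print(is_hermitian(hopping_hamiltonian(5, 1.0, 1.0)))  # True
# Unequal (nonreciprocal) hopping: non-Hermitian
print(is_hermitian(hopping_hamiltonian(5, 1.5, 0.5)))  # False
```

Equal amplitudes give a symmetric matrix and pass the test; making them unequal is exactly the "nonreciprocal tunneling" that renders the system non-Hermitian.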

"Typical textbooks of quantum mechanics mainly focus on systems governed by Hamiltonians that are Hermitian," said graduate student Chenwei Lv, who is also the lead author of the paper. As a result, the team notes that there is very little literature about their discovery.

"A quantum particle moving in a lattice needs to have an equal probability to tunnel along the left and right directions," Lv explains, before offering examples where certain systems lose this equal probability. "In such non-Hermitian systems, familiar textbook results no longer apply, and some may even look completely opposite to those of Hermitian systems."

Lv and the Purdue team found that a non-Hermitian system actually curved the space where a quantum particle resides. In that case, they explain, a quantum particle in a lattice with nonreciprocal tunneling is actually moving on a curved surface. Lv notes that these types of non-Hermitian systems are in sharp contrast to what first-year undergraduate quantum physics students are taught from day one of their education.

"These extraordinary behaviors of non-Hermitian systems have been intriguing physicists for decades," Lv adds, "but many outstanding questions remain open."

Professor Ren Zhang from Xian Jiaotong University, who was a co-author of the study, says that their research and its unexpected results have implications in two distinct areas.

"On the one hand, it establishes non-Hermiticity as a unique tool to simulate intriguing quantum systems in curved spaces," he explained. "Most quantum systems available in laboratories are flat, and it often requires significant efforts to access quantum systems in curved spaces." That non-Hermiticity, adds Zhang, offers experimentalists an extra knob to access and manipulate curved spaces.

On the other hand, says Zhang, the duality allows experimentalists to use curved spaces to explore non-Hermitian physics. "For instance, our results provide experimentalists a new approach to access exceptional points using curved spaces and improve the precision of quantum sensors without resorting to dissipations."

The research team notes that their discovery could assist researchers across a wide array of disciplines, with future research spinning off in multiple directions.

First, those who study curved spaces could implement the Purdue teams apparatuses, while physicists working on non-Hermitian systems could tailor dissipations to access non-trivial curved spaces that cannot be easily obtained by conventional means.

In the end, Lv points to the broader implications of their discovery and its place in the world of quantum physics.

"The extraordinary behaviors of non-Hermitian systems, which have puzzled physicists for decades, become no longer mysterious if we recognize that the space has been curved," said Lv.

In other words, non-Hermiticity and curved spaces are dual to each other, being the two sides of the same coin.

Connect with Author Christopher Plain on Twitter @plain_fiction

Read more here:

This Curvy Quantum Physics Discovery Could Revolutionize Our Understanding of Reality - The Debrief

Quantum trailblazer – News Center – The University of Texas at Arlington – uta.edu

Wednesday, Aug 03, 2022 | Linsey Retcofsky

Weeks into summer break, a classroom door opened onto a quiet hallway at Martin High School in Arlington, where a crowd of students waited.

"What is something that has confused you this week?" asked Victor Cervantes, an alumnus of the UTeach program at The University of Texas at Arlington and an AP physics teacher. Students wrote their answers on sticky notes and stuck them to butcher paper hanging from the wall. Many of the colorful papers read "wave-particle duality."

A group of more than 50 high school students and teachers was meeting to attend workshops in quantum information science (QIS) led by Karen Jo Matsler, assistant professor in practice at UTA. Many of the week's lessons were guiding them through uncharted territory.

In 2021, the National Science Foundation awarded Matsler and collaborators a nearly $1 million grant to launch Quantum for All, a three-year QIS program for high school teachers. Key to workforce preparation, quantum principles intersect with numerous industries, impacting global communication methods, technology, innovation, health care, issues of national security and more. As an emerging field, QIS is excluded from many high school courses.

"Quantum skills are integral to the development of a globally competitive workforce," Matsler said. "If students have never heard of these concepts before they enter college, they likely won't choose to study them at advanced levels."


Open to high school teachers from across the country, the program capitalizes on familiar content areas in instructors' existing curricula and teaches them how to incorporate quantum principles into lesson plans. During summer breaks, participants gather for intensive workshops where they practice teaching the new subject.

At the beginning of the day, Matthew Quiroz, a physics and astronomy teacher at Ysleta High School in El Paso, Texas, gathered materials from a 3D printer. The day before, the students were given parameters for an experiment and told to design and 3D-print the tools they lacked.

Jonathan Lewis, a junior in Martin High Schools STEM academy, paired with a friend to lead his cohorts design.

"We needed to design a rotating stand to hold a small polarizer," Lewis said. "Our group brainstormed ideas and then designed the 3D model in Tinkercad, a software I had never used. Going through the stages of this project has been a lot of fun."

Using the student-made tools, Quiroz guided the class through an experiment testing how varying polarizer angles affect the brightness of light. As students examined the polarization of photons, he introduced them to the quantum concepts of superposition, states and probability.
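The relationship the class was measuring is Malus's law: an ideal polarizer at angle θ to the light's polarization transmits intensity I = I₀ cos²θ. A minimal sketch (the angles and unit intensity here are illustrative, not the class's actual data):

```python
import math

def transmitted_intensity(i0, theta_deg):
    """Malus's law: intensity transmitted through an ideal
    polarizer rotated theta_deg from the polarization axis."""
    theta = math.radians(theta_deg)
    return i0 * math.cos(theta) ** 2

# Brightness falls from full transmission at 0° to extinction at 90°.
for angle in (0, 30, 45, 60, 90):
    print(f"{angle:>2} deg: {transmitted_intensity(1.0, angle):.3f}")
```

The cos² dependence is also where the quantum story enters: for single photons, cos²θ becomes the probability that a photon passes the polarizer, which is the bridge to superposition, states and probability.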

Matsler, a clear-eyed veteran with infectious enthusiasm, is the science teacher everyone wishes they had in high school. Throughout the morning she bounced between classrooms, cheering her pupils through lessons in physics, cryptography and coding.

Students on computers used the modeling software Glowscript to code a physics simulation, where two balls, one constant and one accelerating, traveled through space. Although both were released at the same time, the accelerating ball traveled farther and faster.
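The behavior of that simulation can be sketched in a few lines of plain Python (Glowscript uses a Python-like syntax; the velocity and acceleration values here are illustrative, not taken from the lesson):

```python
def positions(t, v0=2.0, a=2.0):
    """Position at time t of a ball moving at constant velocity v0,
    and of a ball accelerating uniformly from rest at rate a."""
    constant = v0 * t              # x = v0 * t
    accelerating = 0.5 * a * t**2  # x = (1/2) * a * t^2
    return constant, accelerating

for t in range(5):
    c, acc = positions(t)
    print(f"t={t}s  constant={c:5.1f} m  accelerating={acc:5.1f} m")
```

Early on the constant-velocity ball leads, but because the accelerating ball's position grows as t², it eventually overtakes and keeps pulling ahead, which is what the students saw on screen.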

"Are we having fun, y'all?" she said.

Among students and teachers, the enthusiasm was palpable. Many instructors had traveled from across Texas and the southern United States to attend Matsler's workshops. Jacqueline Edwards, a science teacher at McAdory High School in McCalla, Alabama, said studying quantum concepts reminded her that she and her colleagues are lifelong learners.

"We are all learning about the process of trial and error," she said. "That's the essence of scientific inquiry. We don't fail, we just try again."

Cori Davis, a biomedical science teacher in Martin High School's STEM academy, discovered how to incorporate quantum information into her forensics curriculum.

"Quantum principles have broad applications," Davis said. "In our unit on forensic science, I can apply these lessons to how we understand projectile motion when examining bullet wounds and blood spatter."

Matsler argues that small modifications to lesson plans in math, chemistry, technology and other science and engineering courses enable teachers to easily integrate quantum theory into their syllabi.

"Make no mistake," she said, "quantum principles aren't only important for physics teachers."

"Most K-12 educators are not prepared to teach QIS because they didn't study advanced physics in college," Matsler said. "These workshops democratize quantum principles, making them accessible to teachers of a variety of science, technology, mathematics and engineering courses."

The rest is here:

Quantum trailblazer - News Center - The University of Texas at Arlington - uta.edu

Physics Ph.D. student wins Best Speaker Award at international conference in Spain – Ohio University

Physics doctoral student Eva Yazmin Santiago Santos received a prestigious Best Speaker Award at a large international conference in Spain for her talk describing hot-electron generation of nanoparticles.

"I had presented a poster and given a virtual talk at other international conferences before. However, this was my first in-person oral presentation at an international conference. This was also my first invited talk, so it made it even more special," Santiago Santos said.

More than 600 physicists attended META 2022, the 12th International Conference on Metamaterials, Photonic Crystals and Plasmonics, held July 19-22 in Torremolinos, Spain. META is the world's leading conference on nanophotonics and metamaterials, reporting on various current hot topics such as metasurfaces and metadevices, topological effects in photonics, two-dimensional quantum materials, light-matter interaction, plasmonic nanodevices, heat engineering, and quantum-information systems.

"The highlight of the conference was meeting people that work in a similar field as ours in different parts of the world," Santiago Santos said. "In particular, I really enjoyed interacting with some of the people we have collaborated with in previous projects but had never met in person before."

Her faculty mentor is Alexander Govorov, Distinguished Professor of Physics & Astronomy in the College of Arts and Sciences. The additional colleagues she's collaborating with on her work include Lucas V. Besteiro from the Universidade de Vigo in Spain, Xiang-Tian Kong from Nankai University in China, Miguel A. Correa-Duarte from the Universidade de Vigo in Spain, and Prof. Zhiming Wang from the University of Electronic Science and Technology of China.

Santiago Santos's research is in computational physics of nanostructures for optical, energy, and sensor applications, and the title of her talk was "Generation of hot electrons in plasmonic nanoparticles with complex shapes."

"The generation of hot electrons in plasmonic nanoparticles is an intrinsic response to light, which strongly depends on the nanoparticle shape, material, and excitation wavelength," she said. "In this study, we present a formalism that describes the hot-electron generation for gold nanospheres, nanorods and nanostars. Among them, the nanostars are the most efficient, with an internal energy efficiency of approximately 25 percent, owing to multiple factors, including the presence of hot spots," Santiago Santos said.

Read more:

Physics Ph.D. student wins Best Speaker Award at international conference in Spain - Ohio University

Schrödinger Believed That There Was Only One Mind in the Universe – Walter Bradley Center for Natural and Artificial Intelligence

Consciousness researcher Robert Prentner and cognitive psychologist Donald Hoffman will tell a prestigious music and philosophy festival in London next month that the great quantum physicist Erwin Schrödinger (1887-1961) believed that "The total number of minds in the universe is one." That is, a universal Mind accounts for everything.

In a world where many scientists strive mightily to explain how the human mind can arise from non-living matter, Prentner and Hoffman will tell the HowTheLightGetsIn festival in London (September 17-18, 2022) that the author of the famous Cat paradox was hardly a materialist:

In 1925, just a few months before Schrödinger discovered the most basic equation of quantum mechanics, he wrote down the first sketches of the ideas that he would later develop more thoroughly in Mind and Matter. Already then, his thoughts on technical matters were inspired by what he took to be greater metaphysical (religious) questions. Early on, Schrödinger expressed the conviction that metaphysics does not come after physics, but inevitably precedes it. Metaphysics is not a deductive affair but a speculative one.

Inspired by Indian philosophy, Schrödinger had a mind-first, not matter-first, view of the universe. But he was a non-materialist of a rather special kind. He believed that there is only one mind in the universe; our individual minds are like the scattered light from prisms:

A metaphor that Schrödinger liked to invoke to illustrate this idea is the one of a crystal that creates a multitude of colors (individual selves) by refracting light (standing for the cosmic self that is equal to the essence of the universe). We are all but aspects of one single mind that forms the essence of reality. He also referred to this as the doctrine of identity. Accordingly, a non-dual form of consciousness, which must not be conflated with any of its single aspects, grounds the refutation of the (merely apparent) distinction into separate selves that inhabit a single world.

But in Mind and Matter (1958), Schrödinger, we are told, took this view one step further:

Schrödinger drew remarkable consequences from this. For example, he believed that any man is the same as any other man that lived before him. In his early essay "Seek for the Road," he writes about looking into the mountains before him. Thousands of years ago, other men similarly enjoyed this view. But why should one assume that oneself is distinct from these previous men? Is there any scientific fact that could distinguish your experience from another man's? What makes you you and not someone else? Just as John Wheeler once assumed that there is really only one electron in the universe, Schrödinger assumed that there really is only one mind. Schrödinger thought this is supported by the empirical fact that consciousness is never experienced in the plural, only in the singular. Not only has none of us ever experienced more than one consciousness, but there is also no trace of circumstantial evidence of this ever happening anywhere in the world.

Most non-materialists will wish they had gotten off two stops ago. We started with Mind first, which when accounting for why there is something rather than nothing has been considered a reasonable assumption throughout history across the world (except among materialists). But the assumption that no finite mind could experience or act independently of the Mind behind the universe is a limitation on the power of that Mind. Why so?

It's not logically clear (and logic is our only available instrument here) why the original Mind could not grant to dogs, chimpanzees, and humans the power to apprehend and act as minds in their own right in their natural spheres, not simply as seamless extensions of the universal Mind.

With humans, the underlying assumptions of Schrödinger's view are especially problematic. Humans address issues of good and evil. If Schrödinger is right, for example, Dr. Martin Luther King and Comrade Josef Stalin are really only one mind, because each experienced only his own consciousness. But wait. As a coherent human being, each could only have experienced his own consciousness and not the other man's.

However, that doesn't mean that they were mere prisms displaying different parts of the spectrum of broken light. The prism analogy fails to take into account that humans can act for good or ill. Alternatively, it is saying that good and evil, as we perceive them, are merely different colors in a spectrum. As noted earlier, many of us should have got off two stops ago.

In any event, Schrödinger's views are certain to make for an interesting discussion at HowTheLightGetsIn.

Schrödinger was hardly the only modern physicist or mathematician to dissent from materialism. Mathematician Kurt Gödel (1906-1978), to take one example, destroyed a popular form of atheism (logical positivism) via his Incompleteness Theorems.

The two thinkers held very different views, of course. But both saw the fatal limitations of materialism (naturalism), and they addressed these limitations quite differently. In an age when Stephen Hawking's disdain for philosophy is taken to be representative of great scientists, it's a good thing if festivals like HowTheLightGetsIn offer a broader perspective and corrective.

You may also wish to read: Why panpsychism is starting to push out naturalism. A key goal of naturalism/materialism has been to explain human consciousness away as nothing but a pack of neurons. That can't work. Panpsychism is not a form of dualism. But, by including consciousness, especially human consciousness, as a bedrock fact of nature, it avoids naturalism's dead end.

Original post:

Schrödinger Believed That There Was Only One Mind in the Universe - Walter Bradley Center for Natural and Artificial Intelligence

Elon Musk and Mark Zuckerberg Are Arguing About AI — But They’re Both Missing the Point – Entrepreneur


In Silicon Valley this week, a debate about the potential dangers (or lack thereof) when it comes to artificial intelligencehas flared upbetween two tech billionaires.

Facebook CEO Mark Zuckerberg thinks that AI is going to make our lives better in the future, while SpaceX CEO Elon Musk believes that AI is "a fundamental risk to the existence of human civilization."

Who's right?

Related: Elon Musk Says Mark Zuckerberg's Understanding of AI Is 'Limited' After the Facebook CEO Called His Warnings 'Irresponsible'

They're both right, but they're also both missing the point. The dangerous aspect of AI will always come from people and their use of it, not from the technology itself. As with advances in nuclear fusion, almost any kind of technological development can be weaponized and used to cause damage if it falls into the wrong hands. The regulation of machine intelligence advancements will play a central role in whether Musk's doomsday prediction becomes a reality.

It would be wrong to say that Musk is hesitant to embrace the technology, since all of his companies are direct beneficiaries of the advances in machine learning. Take Tesla, for example, where self-driving capability is one of the biggest value-adds for its cars. Musk himself even believes that one day it will be safer to populate roads with AI drivers rather than human ones, though publicly he hopes that society will not ban human drivers in the future in an effort to save us from human error.

What Musk is really pushing for by being wary of AI technology is a more advanced hypothetical framework that we as a society should use to become more aware of the threats that AI brings. Artificial General Intelligence (AGI), the kind that will make decisions on its own without any interference or guidance from humans, is still very far from how things work today. The AGI we see in the movies, where robots take over the planet and destroy humanity, is very different from the narrow AI that we use and iterate on within the industry now. In Zuckerberg's view, the doomsday conversation that Musk has sparked is a very exaggerated projection of what the future of our technological advancements will look like.

Related: The Future of Productivity: AI and Machine Learning

While there is not much discussion in our government about apocalypse scenarios, there is definitely a conversation happening about preventing the potentially harmful impacts of artificial intelligence on society. The White House recently released a couple of reports on the future of artificial intelligence and on the economic effects it causes. The focus of these reports is on the future of work, job markets and research on the increasing inequality that machine intelligence may bring.

There is also an attempt to tackle the very important issue of explainability when it comes to understanding the actions machine intelligence takes and the decisions it presents to us. For example, DARPA (the Defense Advanced Research Projects Agency), an agency within the U.S. Department of Defense, is funneling billions of dollars into projects that would pilot vehicles and aircraft, identify targets and even eliminate them on autopilot. If you thought the use of drone warfare was controversial, AI warfare will be even more so. That's why it is especially important here, perhaps more than in any other field, to be mindful of the results AI presents.

Explainable AI (XAI), the initiative funded by DARPA, aims to create a suite of machine learning techniques that produce more explainable results to human operators and still maintain a high level of learning performance. The other goal of XAI is to enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.

Related: Would You Fly on an AI-Backed Plane Without a Pilot?

The XAI initiative can also help the government tackle the problem of ethics with more transparency. Sometimes developers of software have conscious or unconscious biases that eventually are built into an algorithm -- the way Nikon cameras became internet famous for detecting someone blinking when pointed at the face of an Asian person, or HP computers were proclaimed racist for not detecting black faces on the camera. Even developers with the best intentions can inadvertently produce systems with biased results, which is why, as the White House report states, AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias.

Even with the positive use cases, data bias can cause a lot of serious harm to society. Take China's recent initiative to use machine intelligence to predict and prevent crime. Of course, it makes sense to deploy complex algorithms that can spot a terrorist and prevent crime, but a lot of bad scenarios can unfold if there is an existing bias in the training data for those algorithms.

It's important to note that most of these risks already exist in our lives in some form or another, like when patients are misdiagnosed with cancer and not treated accordingly by doctors, or when police officers make intuitive decisions under chaotic conditions. The scale and lack of explainability of machine intelligence will magnify our exposure to these risks and raise a lot of uncomfortable ethical questions, like: who is responsible for a wrong prescription by an automated diagnosing AI? The doctor? The developer? The training data provider? This is why complex regulation will be needed to help navigate these issues and provide a framework for resolving the uncomfortable scenarios that AI will inevitably bring into society.

Artur Kiulian, M.S.A.I., is a partner at Colab, a Los Angeles-based venture studio that helps startups build technology products using the benefits of machine learning. An expert in artificial intelligence, Kiulian is the author of Robot is...

Elon Musk and Mark Zuckerberg Are Arguing About AI -- But They're Both Missing the Point - Entrepreneur

Here’s your dose of AI-generated uncanny valley for today – The Verge

As we get better at making, faking, and manipulating human faces with machine learning, one thing is abundantly clear: things are going to get ~freaky~ fast.

Case in point: this online demo hosted (and, we presume, made) by web developer AlteredQualia. It combines two different research projects, both of which use neural networks. The first is DeepWarp, which alters where subjects in photographs are looking, and the second is a work in progress by Mike Tyka dubbed Portraits of Imaginary People. This does exactly what it says on the tin: feeding a generative neural network with a bunch of faces and getting it to create similar samples.

Combine it with a tool for making eyes follow your cursor, and you have a healthy slice of the uncanny valley, the phenomenon of human perception where something looks human, but not quite human enough. Here are some more examples from Tyka's project:

As we've written in the past, this sort of image is only going to become more common as machine learning and AI proliferate. Neural networks are easy enough for lots of people to play with, and are improving all the time. In this case, that's going to mean more and more near-photorealistic and photorealistic fake humans. If the artificial intelligence boom we're currently experiencing has to have a face, this is it.

What the White House’s ‘AI Bill of Rights’ blueprint could mean for HR tech – HR Dive

Over the last decade, the use of artificial intelligence in areas like hiring, recruiting and workplace surveillance has shifted from a topic of speculation to a tangible reality for many workplaces. Now, those technologies have the attention of the highest office in the land.

On Oct. 4, the White House's Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights, a 73-page document outlining guidance on addressing bias and discrimination in automated technologies so that protections are embedded from the beginning, where marginalized communities have a voice in the development process, and designers work hard to ensure the benefits of technology reach all people.

The blueprint focuses on five areas of protections for U.S. citizens in relation to AI: system safety and effectiveness; algorithmic discrimination; data privacy; notice and explanation when an automated system is used; and access to human alternatives when appropriate. It also follows the publication in May of two cautionary documents by the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice specifically addressing the use of algorithmic decision-making tools in hiring and other employment actions.

Employment is listed in the blueprint as one of several sensitive domains deserving of enhanced data and privacy protections. Individuals handling sensitive employment information should ensure it is only used for functions strictly necessary for that domain while consent for all non-necessary functions should be optional.

Additionally, the blueprint states that continuous surveillance and monitoring systems should not be used in physical or digital workplaces, regardless of a person's employment status. Surveillance is particularly sensitive in the union context; the blueprint notes that federal law requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing.

The prevalence of employment-focused AI and automation may depend on the size and type of organization studied, though research suggests a sizable portion of employers have adopted the tech.

For example, a February survey by the Society for Human Resource Management found that nearly one-quarter of employers used such tools, including 42% of employers with more than 5,000 employees. Of all respondents utilizing AI or automation, 79% said they were using this technology for recruitment and hiring, the most common such application cited, SHRM said.

Similarly, a 2020 Mercer study found that 79% of employers were either already using, or planned to start using that year, algorithms to identify top candidates based on publicly available information. But AI has applications extending beyond recruiting and hiring. Mercer found that most respondents said they were also using the tech to handle employee self-service processes, conduct performance management and onboard workers, among other needs.

Employers should note that the blueprint is not legally binding, does not constitute official U.S. government policy and is not necessarily indicative of future policy, said Niloy Ray, shareholder at management-side firm Littler Mendelson. Though the principles contained in the document may be appropriate for AI and automation systems to follow, the blueprint is not prescriptive, he added.

"It helps add to the scholarship and thought leadership in the area, certainly," Ray said. "But it does not rise to the level of some law or regulation."

Employers may benefit from a single federal standard for AI technologies, Ray said, particularly given that this is an active legislative area for a handful of jurisdictions. A New York City law restricting the use of AI in hiring will take effect next year. Meanwhile, a similar law has been proposed in Washington, D.C., and California's Fair Employment and Housing Council has proposed regulations on the use of automated decision systems.

Then there is the international regulatory landscape, which can pose even more challenges, Ray said. Because of the complexity involved, Ray added that employers might want to see more discussion around a unified federal standard, and the Biden administration's blueprint may be a way of jump-starting that discussion.

"Let's not have to jump through 55 sets of hoops," Ray said of the potential for a federal standard. "Let's have one set of hoops to jump through."

The blueprint's inclusion of standards around data privacy and other areas may be important for employers to consider, as AI and automation platforms used for hiring often take into account publicly available data that job candidates do not realize is being used for screening purposes, said Julia Stoyanovich, co-founder and director at New York University's Center for Responsible AI.

Stoyanovich is co-author of an August paper in which a group of NYU researchers detailed their analysis of two personality tests used by two automated hiring vendors, Humantic AI and Crystal. The analysis found that the platforms exhibited "substantial instability on key facets of measurement" and concluded that they "cannot be considered valid personality assessment instruments."

Even before AI is introduced into the equation, the idea that a personality profile of a candidate could be a predictor of job performance is a controversial one, Stoyanovich said. Laws like New York City's could help to provide more transparency on how automated hiring platforms work, she added, and could give HR teams a better idea of whether tools truly serve their intended purposes.

"The fact that we are starting to regulate this space is really good news for employers," Stoyanovich said. "We know that there are tools that are proliferating that don't work, and it doesn't benefit anyone except for the companies that are making money selling these tools."

A Humanoid Robot Gave a Lecture in a West Point Philosophy Course

Professor Robot

A teacher with a robotic voice can make paying attention in class seem like an impossible task. But students at West Point seemingly had no problem staying focused while learning from an actual robot.

On Tuesday, an AI-powered robot named Bina48 co-taught two sessions of an intro to ethics philosophy course at the prominent military school. And while it might not have a career ahead of it as a college professor, the robot could find itself one day helping mold the minds of younger or less-educated students.

Bina 2.0

To prepare Bina48 to co-teach the West Point students, the bot’s developers fed it information on war theory and political philosophy, as well as the course lesson plan. When it was the robot’s turn to teach, Bina48 delivered a lecture based on this background information before taking questions from students, who seemed to appreciate their time learning from the bot.

“Before the class, they thought it might be too gimmicky or be entertainment,” William Barry, the course’s professor, told Axios. “They were blown away because she was able to answer questions and reply with nuance. The interesting part was that [the cadets] were taking notes.”

AI Education

Bina48 may have shared a few points worth jotting down, but it wasn’t able to teach at the students’ typical pace. In the future, the bot might be a better fit for classes with younger or less-educated students.

Indeed, the world is facing a shortage of teachers, and others have suggested letting AIs fill in in places where flesh-and-blood educators are scarce. Ultimately, Bina48's work with the West Point cadets could foreshadow a future in which AIs teach students across the globe about everything from ethics to energy.

READ MORE: This Robot Co-Taught a Course at West Point [Axios]

More on Bina48: Six Life-Like Robots That Prove the Future of Human Evolution Is Synthetic

Artificial Intelligence: How realistic is the claim that AI will change our lives? – Bangkok Post

Artificial Intelligence (AI) stakes a claim on productivity, corporate dominance, and economic prosperity with Shakespearean drama. AI will change the way you work and spend your leisure time and puts a claim on your identity.

First, an AI primer.

Let's define intelligence, before we get onto the artificial kind. Intelligence is the ability to learn. Our senses absorb data about the world around us. We can take a few data points and make conceptual leaps. We see light, feel heat, and infer the notion of "summer."

Our expressive abilities provide feedback, i.e., our data outputs. Intelligence is built on data. When children play, they engage in endless feedback loops through which they learn.

Computers, too, are deemed intelligent if they can compute, conceptualise, see and speak. A particularly fruitful area of AI is getting machines to enjoy the same sensory experiences that we have. Machines can do this, but they require vast amounts of data. They do it by brute force, not cleverness. For example, they identify the image of a cat by breaking the pixel data into little steps, repeating until done.

Key point: What we do and what machines do is not so different, but AI is more about data and repetition than it is about reasoning. Machines figure things out mathematically, not visually.
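The "mathematically, not visually" point can be made concrete. Below is a toy sketch (invented four-pixel "images", not a real vision system) that classifies by pure arithmetic on pixel values, using a nearest-centroid rule:

```python
import math

# Tiny four-pixel "images": brightness values in [0, 1].
# In this made-up dataset, "cat" images are bright on the left,
# "dog" images bright on the right.
cats = [[0.9, 0.8, 0.1, 0.2], [1.0, 0.7, 0.2, 0.1]]
dogs = [[0.1, 0.2, 0.9, 0.8], [0.2, 0.1, 0.8, 1.0]]

def centroid(images):
    # Average each pixel position across a class's examples.
    n = len(images)
    return [sum(img[i] for img in images) / n for i in range(len(images[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(img, centroids):
    # Pure arithmetic on pixel numbers: pick the nearest class mean.
    return min(centroids, key=lambda label: distance(img, centroids[label]))

centroids = {"cat": centroid(cats), "dog": centroid(dogs)}
print(classify([0.8, 0.9, 0.0, 0.3], centroids))  # → cat
```

The machine never "sees" a cat; it just measures which pile of numbers a new pile of numbers is closest to, and with enough data that brute-force measurement works remarkably well.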

AI is a suite of technologies (machines and programs) that have predictive power, and some degree of autonomous learning.

AI consists of three building blocks:

An algorithm is a set of rules to be followed when solving a problem. The speed and volume of data that can be fed into algorithms is more important than the "smartness" of the algorithms.

Let's examine these three parts of the AI process:

The raw ingredient of intelligence is data. Data is learning potential. AI is mostly about creating value through data. Data has become a core business value when insights can be extracted. The more you have, the more you can do. Companies with a Big Data mind-set don't mind filtering through lots of low value data. The power is in the aggregation of data.

Building quality datasets for input is critical too, so human effort must first be spent obtaining, preparing and cleaning data. The computer does the calculations and provides the answers, or output.

Conceptually, Machine Learning (ML) is the ability to learn a task without being explicitly programmed to do so. ML encompasses algorithms and techniques that are used in classification, regression, clustering or anomaly detection.

ML relies on feedback loops. The data is used to build a model, which is then tested for how well it fits the data. The model is revised to make it fit the data better, and the process is repeated until the model cannot be improved any more. Algorithms can be trained with past data to find patterns and make predictions.
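The fit-test-revise loop described above can be sketched in a few lines. Here a one-parameter model is repeatedly revised to fit data drawn from y = 2x (a toy illustration of the feedback loop, not a production ML pipeline):

```python
# Fit y = w * x by the loop described above: model, measure error, revise, repeat.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x

w = 0.0    # initial model: knows nothing
lr = 0.05  # how strongly each revision responds to the measured error
for _ in range(200):
    # Measure how badly the current model fits (mean squared error gradient).
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Revise the model to fit the data better, then repeat.
    w -= lr * grad

print(round(w, 3))  # converges to ~2.0
```

Each pass through the loop is one turn of the feedback cycle: the data grades the model, and the grade drives the next revision.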

Key point: AI expands the set of tools that we have to gain a better grasp of finding trends or structure in data, and make predictions. Machines can scale way beyond human capacity when data is plentiful.

Prediction is the core purpose of ML. For example, banks want to predict fraudulent transactions. Telecoms want to predict churn. Retailers want to predict customer preferences. AI-enabled businesses make their data assets a strategic differentiator.

Prediction is not just about the future; it's about filling in knowledge gaps and reducing uncertainty. Prediction lets us generalise, an essential form of intelligence. Prediction and intelligence are tied at the hip.

Let's examine the wider changes unfolding.

AI increases our productivity. The question is how we distribute the resources. If AI-enhanced production only requires a few people, what does that mean for income distribution? All the uncertainties are on how the productivity benefits will be distributed, not how large they will be.

Caution:

ML is already pervasive on the internet. Will the democratisation of access brought on by the internet continue to favour global monopolies? Unprecedented economic power rests in a few companies (you can guess which ones) with global reach. Can the power of channelling our collective intelligence continue to be held by these companies, which are positioned to influence our private interests with their economic interests?

Nobody knows if AI will produce more wealth or economic precariousness. Absent various regulatory measures, it is inevitable that it will increase inequality and create new social gaps.

Let's examine the impact on everyone.

As with all technology advancements, there will be changes in employment: the number of people employed, the nature of jobs and the satisfaction we will derive from them. However, with AI all classes of labour are under threat, including management. Professions involving analysis and decision-making will become the province of machines.

New positions will be created, but nobody really knows if new jobs will sufficiently replace former ones.

We will shift more to creative or empathetic pursuits. To the extent of income shortfall, should we be rewarded for contributing in our small ways to the collective intelligence? Universal basic income is one option, though it remains theoretical.

Our consumption of data (mobile phones, web-clicks, sensors) provides a digital trail that is fed into corporate and governmental computers. For governments, AI opens new doors to perform surveillance, predictive policing, and social shaming. For corporates, it's not clear whether surveillance capitalism, the commercialisation of your personal data, will be personalised to you, or for you. Will it direct you where they want you to go, rather than where you want to go?

How will your data be a measure of you?

The interesting angle emerging is whether we will be hackable. That's when the AI knows more about you than you know about yourself. At that point you become completely influenceable, because you can be made to think and react as directed by governments and corporates.

We do need artificial forms of intelligence because our prediction abilities are limited, especially when handling big data and multiple variables. But for all its stunning accomplishments, AI remains very specific. Learning machines are circumscribed to very narrow areas of learning. The DeepMind system that wins systematically at Go can't eat soup with a spoon or predict the next financial crisis.

Filtering and personalisation engines have the potential to both accommodate and exploit our interests. The degree of change will be propelled, and restrained, by new regulatory priorities. The law always lags behind technology, so expect the slings and arrows of our outrageous fortune.

Author: Greg Beatty, J.D., Business Development Consultant. For further information please contact gregfieldbeatty@gmail.com

Series Editor: Christopher F. Bruton, Executive Director, Dataconsult Ltd, chris@dataconsult.co.th. Dataconsult's Thailand Regional Forum provides seminars and extensive documentation to update business on future trends in Thailand and in the Mekong Region.

The North America artificial intelligence in healthcare diagnosis market is projected to reach from US$ 1,716.42 million in 2019 to US$ 32,009.61…

New York, Sept. 30, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "North America Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Regional Analysis by Diagnostic Tool ; Application ; End User ; Service ; and Country" - https://www.reportlinker.com/p05974389/?utm_source=GNW

The healthcare industry has always been a leader in innovation. The constant mutating of diseases and viruses makes it difficult to stay ahead of the curve.

However, with the help of artificial intelligence and machine learning algorithms, it continues to advance, creating new treatments and helping people live longer and healthier. A study published by The Lancet Digital Health compared the performance of deep learning, a form of artificial intelligence (AI), in detecting diseases from medical imaging versus that of healthcare professionals, using a sample of studies carried out between 2012 and 2019.

The study found that, in the past few years, AI has become more precise in identifying disease diagnoses in these images and has become a more feasible source of diagnostic information. With advancements in AI, deep learning may become even more efficient in identifying diagnoses in the coming years.

Moreover, it can help doctors with diagnoses and notify them when patients are weakening, so that medical intervention can occur sooner, before the patient needs hospitalization. It can save costs for both hospitals and patients. Additionally, the precision of machine learning can detect diseases such as cancer quickly, thus saving lives.

In 2019, the medical imaging tool segment accounted for a larger share of the North America artificial intelligence in healthcare diagnosis market. Its growth is attributed to the increasing adoption of AI technology for the diagnosis of chronic conditions. Also in 2019, the radiology segment held a considerable share of the North America artificial intelligence in healthcare diagnosis market by application. This segment is predicted to dominate the market through 2027, owing to rising demand for AI-based applications in radiology.

A few major primary and secondary sources for the artificial intelligence in healthcare diagnosis market included the US Food and Drug Administration and the World Health Organization. Read the full report: https://www.reportlinker.com/p05974389/?utm_source=GNW
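As a sanity check on the headline figures, the implied compound annual growth rate can be computed directly (assuming the 2019-2027 window given in the report title):

```python
# Implied compound annual growth rate (CAGR) for the headline forecast,
# assuming the window runs from 2019 to 2027 as the report title indicates.
start, end = 1_716.42, 32_009.61   # US$ millions
years = 2027 - 2019

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 44% per year
```

A market multiplying nearly 19-fold in eight years works out to growth on the order of 44% annually, which gives a sense of how aggressive this forecast is.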

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

China aims to become global AI leader by 2030 – ZDNet

China's top administrative body has laid out a three-step approach to make artificial intelligence (AI) the key driving force of the country's economic growth for the next decade.

According to the plan initiated by the State Council and released last week, China will first keep pace with other leading countries in terms of AI technology and applications by 2020, aiming for a core AI industry worth 150 billion yuan ($22 billion) and AI-related fields worth 1 trillion yuan, according to a Tencent news report.

After the conclusion of the second phase by 2025, when legal grounds for the industry are established, the government plans to be the global leader in AI theory, technology, and applications, and the major AI innovation centre globally, by 2030. By then, the core AI industry will be valued at 1 trillion yuan and AI-related industries at 10 trillion yuan, according to the blueprint.

The government has also pushed for vigorous development of AI-related emerging industries in China, including intelligent hardware and software, intelligent robots, and Internet of Things based devices.

Research on brain science, brain computing, quantum information and quantum computing, intelligent manufacturing, robotics, and big data will be greatly upheld, while intelligent upgrades in manufacturing, agriculture, logistics, and home appliances will also be sped up.

A PwC report released last month estimated that global GDP will be 14 percent higher in 2030 due to the wide deployment of AI.

"China will begin to pull ahead of the US's AI productivity gains in 10 years," the report said, and estimated that China will have the most economic gains from AI, which may boost China's GDP by 26 percent by 2030.

Chinese companies Alibaba, Baidu, and Lenovo are stepping up AI investment in a range of industries such as ecommerce, IoT, and autonomous driving.

Baidu announced the acquisition of Seattle-based startup Kitt.ai and a partnership with US chipmaker Nvidia this month, while Alibaba recently revealed an AI-powered smart speaker.

Lenovo also said AI will be a key feature of its products going forward, which include a digital assistant, connected health devices, and augmented and virtual reality platforms.

COVID-19 Is Accelerating AI in Health Care. Are Federal Agencies Ready? – Nextgov

Artificial intelligence is rapidly expanding its foothold in health care, including at many federal health agencies such as the Departments of Veterans Affairs and Health and Human Services and the Defense Health Agency.

The ongoing coronavirus pandemic is demonstrating the power of AI-enabled capabilities for private and public sector health care organizations responsible for responding to today's health care challenges.

For example, the pandemic has catalyzed numerous AI-enabled development efforts for vaccines. After scientists decoded the genetic sequence of SARS-CoV-2 -- the virus causing COVID-19 -- and publicly posted the results on January 10, the race was on. Based on that data, firms began using AI-enabled methods to rapidly develop potential vaccines, some of which are already proceeding to clinical trials. By comparison, traditional non-AI drug development processes take many months, if not years, to proceed to human clinical trials.

Likewise, federal health agencies are also incorporating AI-enabled responses. The Centers for Disease Control and Prevention, for example, is hosting an AI-driven bot on its website to help screen people for coronavirus infections as a way to reduce the numbers of patients flocking to increasingly overwhelmed urgent care facilities.

Additionally, the Food and Drug Administration recently approved use of an AI-driven diagnostic for COVID-19 developed by behold.ai. The tool analyzes lung x-rays and provides radiologists with a tentative diagnosis as soon as the image is captured, reducing time and expense.

But there is an important caveat to this activity: we don't yet know whether these and other AI-related efforts will produce the long-term impact we are all hoping for.

In a milestone report published in December titled Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril, the National Academy of Medicine noted the many ways in which AI is revolutionizing health care. However, it also warned that careful planning and implementation are required to avoid the risk of a backlash -- or an AI winter, as some refer to it -- that can occur when hyped AI solutions fail to deliver expected performance or benefits.

Federal and defense health care agencies will be expected to mobilize more quickly and ensure that AI solutions produce results. So how can federal health care agencies improve their odds of success? How can they implement and scale AI projects and, more importantly, try to realize AI's vast potential to improve healthcare while lowering costs?

Based on successful public and private sector AI implementation, federal and defense agencies can achieve greater success in their AI deployments if they:

Have a strategic plan for AI. Select the purpose and focus of initial efforts with care and clearly define the business challenges warranting AI adoption. That means, in part, identifying use cases that provide a significant return on investment. Moreover, the plan should also address the agency's readiness to leverage AI, as there are several dimensions to readiness. For example, technology readiness refers to having the needed tools, technical infrastructure, and data management strategies and capabilities in place. Workforce readiness refers to having the needed talent recruitment and development, training, incentives, communications and change management structures and programs in place to successfully launch and sustain AI.

Understand your requirements and then phase solutions, from simpler to more complex. Amid the variety of AI solution types -- such as task automation, pattern recognition or contextual reasoning -- organizations will need to investigate the requirements of different user groups or use cases, technical and analytic complexities, and the ability to scale and sustain solutions across the enterprise. For example, robotic process automation is a relatively easy AI solution to implement to complete time-consuming, repetitive tasks such as data entry, data capture and data transferal from one source to another. RPA can then serve as an easy gateway for the organization to tackle more advanced automation leveraging AI.
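The kind of repetitive data-transfer task RPA automates can be sketched as follows (the schemas and field names are invented; real RPA tools script GUIs and enterprise systems rather than CSV files):

```python
import csv
import io

# Toy illustration of RPA's bread and butter: reading records from one
# system's export and re-keying them into another system's input format,
# with no human doing the copy-paste. All field names here are made up.
legacy_export = io.StringIO(
    "patient_id,last,first\n"
    "101,Smith,Ann\n"
    "102,Jones,Bo\n"
)

target = io.StringIO()
writer = csv.DictWriter(target, fieldnames=["id", "full_name"])
writer.writeheader()
for row in csv.DictReader(legacy_export):
    # Transform each record into the destination schema.
    writer.writerow({"id": row["patient_id"],
                     "full_name": f'{row["first"]} {row["last"]}'})

print(target.getvalue())
```

The value is not the cleverness of the transformation but its tirelessness: the same rule applied to every record, which is exactly the kind of low-risk win that builds confidence for more advanced automation.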

Use an agile approach and develop iteratively. Such an approach can strengthen efforts to engage users and build trust, and there has to be a level of risk tolerance for this approach to work. Agile methodology can be helpful for facilitating collaboration and adoption. Central to this approach is adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change, which inherently allows for a fail fast element to quickly identify success or failure.

AI is a human endeavor. People must bring needed leadership, accountability, motivation and expertise to the project both before and after it becomes operational and, later, as it scales. Having humans in the loop ensures better integration into work processes, builds trust, and creates accountabilities for the performance of AI solutions.

There are many factors that contribute to a project's success, but these considerations can be key as agencies strive to harness AI more fully in support of their missions.

Philip Dietz, MBA, is a principal at Booz Allen Hamilton leading data science and analytics.

AI allows paralyzed person to ‘handwrite’ with his mind – Science Magazine

By Kelly Servick, Oct. 23, 2019, 12:05 PM

CHICAGO, ILLINOIS -- By harnessing the power of imagination, researchers have nearly doubled the speed at which completely paralyzed patients may be able to communicate with the outside world.

People who are locked in -- fully paralyzed by stroke or neurological disease -- have trouble communicating even a single sentence. Electrodes implanted in a part of the brain involved in motion have allowed some paralyzed patients to move a cursor and select onscreen letters with their thoughts. Users have typed up to 39 characters per minute, but that's still about three times slower than natural handwriting.

In the new experiments, a volunteer paralyzed from the neck down instead imagined moving his arm to write each letter of the alphabet. That brain activity helped train a computer model known as a neural network to interpret the commands, tracing the intended trajectory of his imagined pen tip to create letters.

Eventually, the computer could read out the volunteer's imagined sentences with roughly 95% accuracy at a speed of about 66 characters per minute, the team reported here this week at the annual meeting of the Society for Neuroscience.

The researchers expect the speed to increase with more practice. As they refine the technology, they will also use their neural recordings to better understand how the brain plans and orchestrates fine motor movements.

Google is using AI to design chips that will accelerate AI – MIT Technology Review

A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.

3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers will manually design configurations that minimize the amount of wire used between components as a proxy for efficiency. They then use electronic design automation software to simulate and verify their performance, which can take up to 30 hours for a single floor plan.

Time lag: Because of the time investment put into each chip design, chips are traditionally supposed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they've been limited in their ability to optimize across multiple goals, including the chip's power draw, computational performance, and area.

Intelligent design: In response to these challenges, Google researchers Anna Goldie and Azalia Mirhoseini took a new approach: reinforcement learning. Reinforcement-learning algorithms use positive and negative feedback to learn complicated tasks. So the researchers designed what's known as a reward function to punish and reward the algorithm according to the performance of its designs. The algorithm then produced tens to hundreds of thousands of new designs, each within a fraction of a second, and evaluated them using the reward function. Over time, it converged on a final strategy for placing chip components in an optimal way.
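The reward signal at the heart of this approach can be illustrated with a toy example. Everything here is invented for illustration: six components on a 3x3 grid, a hypothetical netlist, and plain random search standing in for the learned placement policy. The real system scores placements with a learned policy network and far richer objectives (congestion, power, area), but the shape of the loop (generate a candidate, score it with a reward function, keep improving) is the same.

```python
import random

random.seed(0)

# Toy floor plan: 6 components connected by a netlist of (i, j) wires.
# Real chip placement involves thousands of components across multiple
# layers; this only illustrates the reward signal the article describes.
N_COMPONENTS = 6
GRID = [(x, y) for x in range(3) for y in range(3)]
NETLIST = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]

def reward(placement):
    """Negative total Manhattan wire length: shorter wiring => higher reward."""
    total = 0
    for i, j in NETLIST:
        (x1, y1), (x2, y2) = placement[i], placement[j]
        total += abs(x1 - x2) + abs(y1 - y2)
    return -total

# Random search stands in for the learned placement policy: sample many
# candidate placements, score each with the reward function, keep the best.
best_placement, best_reward = None, float("-inf")
for _ in range(20000):
    cells = random.sample(GRID, N_COMPONENTS)
    r = reward(cells)
    if r > best_reward:
        best_placement, best_reward = cells, r

print("best total wire length:", -best_reward)
```

Minimizing wire length as a proxy for efficiency is exactly the objective human engineers use manually, as the article notes; the reinforcement-learning twist is letting a model learn a placement strategy against that reward instead of searching blindly.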

Validation: After checking the designs with the electronic design automation software, the researchers found that many of the algorithm's floor plans performed better than those designed by human engineers. It also taught its human counterparts some new tricks, the researchers said.

Production line: Throughout the field's history, progress in AI has been tightly interlinked with progress in chip design. The hope is this algorithm will speed up the chip design process and lead to a new generation of improved architectures, in turn accelerating AI advancement.


View post:

Google is using AI to design chips that will accelerate AI - MIT Technology Review

GNS Healthcare Presents Novel Use of AI to Identify Drivers of Response to Immune Checkpoint Inhibitor Therapy – PRNewswire

CAMBRIDGE, Mass., July 21, 2020 /PRNewswire/ -- GNS Healthcare (GNS), a leading AI and simulation company, presents results that validate the use of AI to accurately classify tumors based on their immunogenicity and predict response to immune checkpoint inhibitor (ICI) therapy using real-world data. The study showcases the power of causal AI to capture biomarkers and mechanisms, in addition to PD(L)1 and tumor mutation burden (TMB), that are consistent with known immunology. These markers, including CXCL13 upregulation and STK11 mutation, are in line with the targets currently being explored for stratification of responders vs. non-responders to ICI therapy, cohort selection, enrichment of future immuno-oncology trials, or ICI efficacy improvement through combination therapy.

The study applied AI to tumor data from The Cancer Genome Atlas (TCGA) to identify the drivers of immune response. The data from nearly 700 NSCLC and over 400 HNSCC patients were fed into REFS, GNS's causal AI and simulation platform, which reverse-engineered in silico patients that accurately classified tumors based on their response. Macrophage activation and polarization, driven in part by metabolic reprogramming, was identified as the primary driver of tumor immunogenicity, allowing for a more targeted approach to patient care and clinical trial design.

"Over the past decade we have seen nearly a dozen immuno-oncology treatments approved but treatment protocols are still based only on a few biomarkers. The presentation of our work is not only a validation of how AI can extract critical insights from real-world data, but also a milestone in our mission to make precision oncology a reality," said Colin Hill, GNS Healthcare CEO and Co-Founder.

The findings from these in silico patients can be used by biopharma companies to select optimal patient populations for clinical trials based on likelihood of response and to discover novel biomarkers that make tumors more susceptible to immune therapy, irrespective of response to PD(L)1 therapy. The findings are also beginning to unlock the value of investments in real-world and clinical data to inform future trial design, enable discovery of novel drug targets, and better position drugs across global markets.

Listen to a deep-dive webinar discussing the results here or view the poster presented at ASCO-SITC and reach out to the GNS Healthcare team to learn more.

About GNS Healthcare: GNS Healthcare is an AI-driven precision medicine company developing in silico patients from real-world and clinical data. In silico patients reveal the complex system of interactions underlying disease progression and drug response, enabling the simulation of drug response at the individual patient level. This in turn makes it possible to precisely match therapeutics to patients and rapidly discover key insights across drug discovery, clinical development, commercialization, and payer markets. GNS REFS causal AI and simulation technology integrates and transforms a wide variety of patient data types into in silico patients across oncology, auto-immune diseases, neurology, and cardio-metabolic diseases. GNS partners with the world's leading biopharmaceutical companies and health plans and has validated its science and technology in over 50 peer-reviewed papers and abstracts. https://gnshealthcare.com

Media Contact: Simona Gilman, Marketing, [emailprotected]

SOURCE GNS Healthcare

http://gnshealthcare.com

See original here:

GNS Healthcare Presents Novel Use of AI to Identify Drivers of Response to Immune Checkpoint Inhibitor Therapy - PRNewswire

The future of EHRs: Google AI head on tossing out the keyboard + innovating data search – Becker’s Hospital Review

While clinicians have often expressed frustration over the way they have to interact with EHRs, Google is working on technology for streamlining functions like data searches and predictive text search, according to Google artificial intelligence head Jeff Dean, PhD.

During a recent episode of a podcast by Eric Topol, MD, and Abraham Verghese, MD, "Medicine and the Machine," Dr. Dean discussed his predictions for how EHRs will evolve in healthcare and some of Google's current projects.

Here are six insights from Dr. Dean, cited in an Aug. 20 Medscape report.

1. Google has worked with other organizations on using deidentified data to refine EHR searches in a way similar to how the tech company trains natural language models, Dr. Dean said. With the natural language models, the researchers aim to use the prefix of a piece of text to predict the next word or sequence of words that is going to occur.

2. An example of natural language models would be a model applied to email messages, so when a person is typing out a message, the AI suggests how they might complete the sentence to save typing, Dr. Dean said.

3. Google is working with the same approach to give clinicians suggestions about what might occur next in the EHR for a particular patient, Dr. Dean said, adding, "If you think about the medical record as a whole sequence of events, and if you have de-identified medical records, you can take a prefix of a medical record and try to predict either the individual events or maybe some high-level attributes about subsequent events, like, 'Will this patient develop diabetes within the next 12 months?'"

4. While the idea of creating an AI model that uses every past medical decision to help inform all future medical decisions is complicated, Dr. Dean said the feat is a "good north star" for potential health IT innovations.

5. Dr. Dean said his group has done some work using an audio recording of a patient-physician conversation to generate a draft medical note that a clinician can then lightly edit instead of typing up the entire note.

6. Creating summarized notes from conversations might also be a good assistant tool that not only helps reduce clinician burden but could lead to higher-quality data in the EHR, according to Dr. Dean.

"We all know that often clinicians copy and paste the most recent note and don't really edit it appropriately. That's partly because it's very cumbersome and unwieldy to interact with some of these systems, and speech and voice are a more natural way of creating notes," Dr. Dean said.

See the original post here:

The future of EHRs: Google AI head on tossing out the keyboard + innovating data search - Becker's Hospital Review

AI’s Factions Get Feisty. But Really, They’re All on the Same Team – WIRED


Artificial intelligence is not one thing, but many, spanning several schools of thought. In his book The Master Algorithm, Pedro Domingos calls them the tribes of AI.

As the University of Washington computer scientist explains, each tribe fashions what would seem to be very different technology. Evolutionists, for example, believe they can build AI by recreating natural selection in the digital realm. Symbolists spend their time coding specific knowledge into machines, one rule at a time.

Right now, the connectionists get all the press. They nurtured the rise of deep neural networks, the pattern recognition systems reinventing the likes of Google, Facebook, and Microsoft. But whatever the press says, the other tribes will play their own role in the rise of AI.

Take Ben Vigoda, the CEO and founder of Gamalon. He's a Bayesian, part of the tribe that believes in creating AI through the scientific method. Rather than building neural networks that analyze data and reach conclusions on their own, he and his team use probabilistic programming, a technique in which they start with their own hypotheses and then use data to refine them. His startup, backed by Darpa, emerged from stealth mode this morning.

Gamalon's tech can translate from one language to another, and the company is developing tools that businesses can use to extract meaning from raw streams of text. Vigoda claims his particular breed of probabilistic programming can produce AI that learns more quickly than neural networks, using much smaller amounts of data. "You can be very careful about what you teach it," he says, "and can edit what you've taught it."

As others point out, an approach along these lines is essential to the rise of machines capable of truly thinking like humans. Neural networks require enormous amounts of carefully labelled data, and this isn't always available. Vigoda even goes so far as to say that his techniques will replace neural networks completely, in all applications. "That is very, very clear," he says.

But just as deep learning isn't the only way to artificial intelligence, neither is probabilistic programming. Or Gaussian processes. Or evolutionary computation. Or reinforcement learning.

Sometimes, the AI tribes badmouth each other. Sometimes, they play up their technology at the expense of the others. But the reality is that AI will rise from many technologies working together. Despite the competition, everyone is working toward the same goal.

Probabilistic programming lets researchers build machine learning algorithms more like coders build computer programs. But the real power of the technique lies in its ability to deal with uncertainty. This can allow AI to learn from less data, but it can also help researchers understand why an AI reaches particular decisions, and more easily tweak the AI if they don't agree with those decisions. True AI will need all that, whether it powers a chatbot trying to carry on a human-like conversation or an autonomous car trying to avoid an accident.
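The "start with a hypothesis, refine it with data, keep the uncertainty" workflow can be shown in miniature with a conjugate Beta-Binomial update. This is the core idea only; the prior and observations are made up, and real probabilistic programming systems (Stan, PyMC, and the like) automate inference for far more complex models than this one-liner.

```python
import math

def update(prior_alpha, prior_beta, successes, failures):
    """Posterior Beta parameters after observing binary outcomes."""
    return prior_alpha + successes, prior_beta + failures

# Prior hypothesis: the event is roughly as likely as not (Beta(2, 2)).
alpha, beta = 2.0, 2.0

# Refine the hypothesis with data: 9 successes in 12 trials.
alpha, beta = update(alpha, beta, successes=9, failures=3)

# Unlike a point estimate, the posterior carries its own uncertainty.
mean = alpha / (alpha + beta)
sd = math.sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
print(f"posterior mean {mean:.3f} +/- {sd:.3f}")
```

The posterior spread is what lets such a system say not just *what* it believes but *how strongly*, which is exactly the property the paragraph above credits with enabling learning from less data and more interpretable decisions.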

But neural networks have proven their worth with, among other things, image and speech recognition, and they're not necessarily in competition with techniques like probabilistic programming. In fact, Google researchers are building systems that combine the two. Their strengths complement one another. "Deep neural networks and probabilistic models are closely related," says David Blei, a Columbia University computer scientist and an advisor to Gamalon who has worked with Google research on these types of mixed models. "There's a lot of probabilistic modeling happening inside neural networks."

Inevitably, the best AI will combine several technologies. Take AlphaGo, the breakthrough system built by Google's DeepMind lab. It combined neural networks with reinforcement learning and other techniques. Blei, for one, doesn't see a world of tribes. "It doesn't exist for me," he says. He sees a world in which everyone is reaching for the same master algorithm.

Here is the original post:

AI's Factions Get Feisty. But Really, They're All on the Same Team - WIRED

Even an AI machine couldn’t ace China’s super tough college entrance exam – Mashable


An AI machine that sat the math paper for China's college entrance exam has failed to prove it's better than its human competition. AI-Maths, a machine made of 11 servers, three years in the making, joined almost 10 million high schoolers last week, in ...
AI robot needs to understand more Chinese to boost math score - ecns


Original post:

Even an AI machine couldn't ace China's super tough college entrance exam - Mashable