
Category Archives: Ai

The wellness industry’s risky embrace of AI-driven mental health care – Brookings Institution

Posted: October 15, 2021 at 9:05 pm

If you need to treat anxiety in the future, odds are the treatment won't just be therapy, but also an algorithm. Across the mental-health industry, companies are rapidly building solutions for monitoring and treating mental-health issues that rely on just a phone or a wearable device. To do so, companies are relying on affective computing to detect and interpret human emotions. It's a field that's forecast to become a $37 billion industry by 2026, and as the COVID-19 pandemic has increasingly forced life online, affective computing has emerged as an attractive tool for governments and corporations to address an ongoing mental health crisis.

Despite a rush to build applications using it, emotionally intelligent computing remains in its infancy, yet it is being introduced into therapeutic services as a fix-all solution without scientific validation or public consent. Scientists still disagree over the nature of emotions and how they are felt and expressed across different populations, yet this uncertainty has been largely disregarded by a wellness industry eager to profit from the digitalization of health care. If left unregulated, AI-based mental-health solutions risk creating new disparities in the provision of care, as those who cannot afford in-person therapy will be referred to bot-powered therapists of uncertain quality.

The field of affective computing, more commonly referred to as emotion AI, is a subfield of computer science originating in the 1990s. Rosalind Picard, widely credited as one of its pioneers, defined affective computing as "computing that relates to, arises from, or deliberately influences emotions." It involves the creation of technology that is said to recognize, express, and adapt to human emotions. Affective computer scientists rely on sensors, voice and sentiment analysis programs, computer vision, and machine learning (ML) techniques to capture and analyze physical cues, written text, and/or physiological signals. These tools are then used to detect emotional changes.

Start-ups and corporations are now working to apply this field of computer science to build technology that can predict and model human emotions for clinical therapies. Facial expressions, speech, gait, heartbeats, and even eye blinks are becoming profitable sources of data. Companion Mx, for example, is a phone application that analyzes users' voices to detect signs of anxiety. San Francisco-based Sentio Solutions is combining physiological signals and automated interventions to help consumers manage their stress and anxiety. A sensory wristband monitors the wearer's sweat, skin temperature, and blood flow, and, through a connected app, asks users to select how they are feeling from a series of labels, such as "distressed" or "content." Additional examples include the Muse EEG-powered headband, which guides users toward mindful meditation by providing live feedback on brain activity, and the Apollo Neuro ankle band, which monitors users' heart rate variability and emits vibrations intended to provide stress relief.

While wearable technologies remain costly for the average consumer, therapy can now come in the form of a free 30-second download. App-based conversational agents, such as Woebot, are using emotion AI to replicate the principles of cognitive behavioral therapy, a common method of treating depression, and to deliver advice regarding sleep, worry, and stress. The sentiment analysis used in chatbots combines natural language processing (NLP) and machine learning techniques to determine the emotion expressed by the user. Ellie, a virtual avatar therapist developed by the University of Southern California, can pick up on nonverbal cues and guide the conversation accordingly, such as by displaying an affirmative nod or offering a well-placed "hmmm." Though Ellie is not currently available to the wider public, it provides a hint of the future of virtual therapists.
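To make the mechanics concrete, here is a minimal sketch of lexicon-based sentiment scoring, a deliberately simplified stand-in for the NLP-and-ML pipelines such chatbots actually use; all word weights here are invented for illustration, not taken from any real system.

```python
# Minimal lexicon-based sentiment scoring: a toy stand-in for the NLP/ML
# pipelines used by therapy chatbots. Weights are illustrative only.

NEGATIVE = {"anxious": -2, "worried": -2, "hopeless": -3, "tired": -1}
POSITIVE = {"calm": 2, "hopeful": 3, "rested": 1, "better": 2}

def sentiment_score(message: str) -> int:
    """Sum per-word weights; the sign suggests the user's expressed mood."""
    words = message.lower().split()
    return sum(NEGATIVE.get(w, 0) + POSITIVE.get(w, 0) for w in words)

print(sentiment_score("I feel anxious and tired today"))   # -3: negative
print(sentiment_score("I slept well and feel hopeful"))    #  3: positive
```

Production systems replace the hand-built lexicon with trained language models, but the underlying move is the same: mapping a user's words onto a numerical estimate of emotion.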

In order to operate, artificial intelligence systems require a simplification of psychological models and neurobiological theories on the functions of emotions. Emotion AI cannot capture the full diversity of human emotional experience and is often embedded with its programmers' own cultural biases: voice inflections and gestures vary from one population to another. As the researchers Ruth Aylett and Ana Paiva write, affective computing demands that "qualitative relationships must be quantified, a definite selection made from competing alternatives, and internal structures must be mapped onto software entities." When qualitative emotions are coded into digital systems, developers use models of emotions that rest on shaky parameters. Emotion is not a hard science, and the metrics produced by such software are at best an educated guess. Yet few developers are transparent about the serious limitations of their systems.

Emotional expressions manifested through physical changes also have overlapping parameters. Single biological measures such as heart rate and skin conductance are not infallible indicators of emotional changes. A spiked heart rate may be the result of excitement, fear, or simply drinking a cup of coffee. There is still no consensus within the scientific community about which combinations of physiological signals are most relevant to emotional changes, as emotional experiences are highly individualized. The effectiveness of affective computing systems is seriously impeded by their limited reliability, lack of specificity, and restricted generalizability.

The questionable psychological science behind some of these technologies is at times reminiscent of pseudo-sciences, such as physiognomy, that were rife with eugenicist and racist beliefs. In Affective Computing, the 1997 book credited with outlining the framework for the field, Picard observed that "emotional or not, computers are not purely objective." This lack of objectivity has complicated efforts to build affective computing systems without racial bias. Research by the scholar Lauren Rhue revealed that two top emotion AI systems assigned professional Black basketball players more negative emotional scores than their white counterparts. After accusations of racial bias, the recruitment company HireVue stopped using facial expressions to deduce an applicant's emotional state and employability. Given the obvious risks of discrimination, AI Now called in 2019 for a ban on the use of affect-detecting technologies in decisions that can impact people's lives and access to information.

The COVID-19 pandemic exacerbated the need to improve already limited access to mental-health services amid reports of staggering increases in mental illnesses. In June 2020, the U.S. Census Bureau reported that adults were three times more likely to screen positive for depressive and/or anxiety disorders compared to statistics collected in 2019. Similar findings were reported by the Centers for Disease Control and Prevention, with 11% of respondents admitting to suicidal ideation in the 30 days prior to completing a survey in June 2020. Adverse mental health conditions disproportionately affected young adults, Hispanic persons, Black persons, essential workers, and people who were receiving treatment for pre-existing psychiatric conditions. During this mental-health crisis, Mental Health America estimated that 60% of individuals suffering from a mental illness went untreated in 2020.

To address this crisis, government officials loosened regulatory oversight of digital therapeutic solutions. In what was described as a bid to serve patients and protect healthcare workers, the FDA announced in April 2020 that it would expedite approval processes for digital solutions that provide services to individuals suffering from depression, anxiety, obsessive-compulsive disorder, and insomnia. The change in regulation was said to provide flexibility for software developers designing devices for psychiatric disorders and general wellness, without requiring them to disclose the AI/ML-based techniques that power their systems. Consumers would therefore be unable to know whether, for example, their insomnia app was using sentiment analysis to track and monitor their moods.

By failing to provide instructions regarding the collection and management of emotion and mental-health-sensitive data, the announcement demonstrated the FDA's neglect of patient privacy and data security. Whereas traditional medical devices require testing, validation, and recertification after software changes that could impact safety, digital devices tend to receive a light touch from the FDA. As noted by Bauer et al., very few medical apps and wearables are subject to FDA review, as the majority are classified as minimal risk and fall outside the agency's enforcement. For example, under current regulation, mental health apps that are designed to assist users in self-managing their symptoms, but do not explicitly diagnose, are seen as posing minimal risk to consumers.

The growth of affective computing therapeutics is occurring simultaneously with the digitization of public-health interventions and the collection of data by self-tracking devices. Over the course of the pandemic, governments and private companies pumped funding into the rapid development of remote sensors, phone apps, and AI for quarantine enforcement, contact tracing, and health-status screening. Through the popularization of self-tracking applications, many of which are already integrated into our personal devices, we have become accustomed to passive monitoring in our data-fied lives. We are nudged by our devices to record how we sleep, exercise, and eat in order to maximize physical and mental wellbeing. Tracking our emotions is a natural next step in the digital evolution of our lives; Fitbit, for example, has now added stress management to its devices. Yet few of us know where this data goes or what is done with it.

Digital products that rely on emotion AI attempt to solve the affordability and availability crisis of mental-health care. The cost of conventional face-to-face therapy remains high, ranging from $65 to $250 an hour for those without insurance, according to the therapist directory GoodTherapy.org. According to the National Alliance on Mental Illness, nearly half of the 60 million individuals living with mental health conditions in the United States do not have access to treatment. Unlike a therapist, tech platforms are indefatigable and available to users 24/7.

People are turning to digital solutions at increasing rates to address mental-health issues. First-time downloads of the top 10 mental wellness apps in the United States reached 4 million in April 2020, a 29% increase since January. In 2020, the Organisation for the Review of Care and Health Apps found a 437% increase in searches for relaxation apps, a 422% increase for OCD apps, and a 2,483% increase for mindfulness apps. Evidence of their popularity beyond the pandemic is also reflected in the growing number of corporations offering digital mental-health tools to their employees. Research by McKinsey concludes that such tools can be used by corporations to reduce productivity losses due to employee burnout.

Rather than addressing the lack of mental-health resources, digital solutions may be creating new disparities in the provision of services. Digital devices that are said to help with emotion regulation, such as the Muse headband and the Apollo Neuro band, cost $250 and $349, respectively. Individuals are thus encouraged to seek self-treatment through cheaper guided-meditation and/or conversational bot-based applications. Even among smartphone-based services, many hide their full content behind paywalls and hefty subscription fees.

Disparities in health-care outcomes may be exacerbated by persistent questions about whether digital mental healthcare can live up to its analog forerunner. Artificial intelligence is not sophisticated enough to replicate the spontaneous, natural conversations of talk therapy, and cognitive behavioral therapy involves the recollection of detailed personal information and beliefs ingrained since childhood, data points that cannot be acquired through sensors. Psychology is part science and part trained intuition. As Dr. Adam Miner, a clinical psychologist at Stanford, argues, an AI system may capture a person's voice and movement, which is likely related to a diagnosis like major depressive disorder. But without more context and judgment, crucial information can be left out.

Most importantly, these technologies can operate without clinician oversight or other forms of human support. For many psychologists, the essential ingredient in effective therapy is the therapeutic alliance between practitioner and patient, yet these devices are not required to abide by clinical safety protocols that record the occurrence of adverse events. A survey of 69 apps for depression published in BMC Medicine found that only 7% included more than three suicide prevention strategies. Six of the apps examined failed to provide accurate information on suicide hotlines. Apps supplying incorrect information were reportedly downloaded more than 2 million times through Google Play and the App Store.

As these technologies are being developed, there are no policies in place that dictate who has the right to our emotion data and what constitutes breaches of privacy. Inferences made by emotion recognition systems can reveal sensitive health information that poses risks to consumers. Depression detection by workplace monitoring software or wearables may cost individuals their sources of employment or lead to higher insurance premiums. BetterHelp and Talkspace, two counseling apps that connect users to licensed therapists, were found to disclose sensitive information to third parties about users' mental health history, sexual orientation, and suicidal thoughts.

Emotion AI systems fuel the wellness economy, in which the treatment of mental-health and behavioral issues is becoming a profitable business venture, even though a large portion of developers have no prior certification in therapeutic or counseling services. According to an estimate by the American Psychological Association, there are currently more than 20,000 mental-health apps available to mobile users. One study revealed that only 2.08% of psychosocial and wellness mobile apps are backed by published, peer-reviewed evidence of efficacy.

Digital wellness tools tend to have high dropout rates, as only a small segment of users regularly follow treatment on the apps. A study by Arean et al. on self-guided mobile apps for depression found that 74% of registered participants ceased using the apps. These high attrition rates have stalled investigations into their long-term effectiveness and the consequences of mental health self-treatment through digital tools. As with other AI-related issues, non-White populations, who are underserved in psychological care, continue to be underrepresented in the data used to research, develop, and deploy these tools.

These findings do not negate affective computing's potential to drive promising developments in medicine and healthcare. Affective computing has led to advances such as detecting spikes in heart rate in patients suffering from chronic pain, facial analysis to detect stroke, and speech analysis to detect Parkinson's disease.

Yet in the United States there remains no coordinated effort to regulate and evaluate digital mental-health resources and products that rely on affective computing techniques. Digital products marketed as therapies are being deployed without adequate consideration of patients' access to technical resources or monitoring of vulnerable users. Few products provide specific guidance on their safety and privacy policies or on whether the data collected is shared with third parties. Because their products are labelled as wellness tools, companies are not subject to the Health Insurance Portability and Accountability Act. In response, non-profit initiatives such as PsyberGuide have sought to rate apps by the credibility of their scientific protocols and the transparency of their privacy policies. But these initiatives are severely limited, and they are not a stand-in for government.

Beyond the limited proven effectiveness of these digital services, we must take a step back and evaluate how such technology risks deepening divides in the provision of care to already underserved populations. There are significant disparities in the United States when it comes to technological access and digital literacy, which limit users' ability to make informed health choices and to consent to the use of their sensitive data. Because digital solutions are cheap and scalable, segments of the population may have to rely on a substandard tier of service to address their mental health issues. Such trends also risk shifting responsibility for mental-health care onto users rather than healthcare providers.

Mental-health technologies that rely on affective computing are jumping ahead of the science. Even emotion AI researchers are denouncing claims made by companies that are overblown and unsupported by scientific consensus. We have neither the technological sophistication nor the scientific confidence to guarantee the effectiveness of such digital solutions in addressing the mental health crisis. At the very least, governmental regulation should push companies to be transparent about that.

Alexandrine Royer is a doctoral candidate studying the digital economy at the University of Cambridge, and a student fellow at the Leverhulme Centre for the Future of Intelligence.

Original post:

The wellness industry's risky embrace of AI-driven mental health care - Brookings Institution


AI will improve healthcare, but doctors and patients need legal safety net – Baylor College of Medicine News

Posted: at 9:05 pm

In a recent paper, Bhattad and Jain argue that artificial intelligence (AI) is the driving force behind the latest technological developments in medical diagnosis, with a revolutionary impact. But what happens when an AI produces a wrong breast cancer diagnosis (perhaps based on a bias), the physician accordingly fails to assign the right treatment, and the patient suffers metastasized cancer?

AI technologies employ machine learning to learn from new data by identifying complex, latent (hidden) patterns in datasets. These systems are increasingly being used to assist in patient healthcare, be it by predicting outcomes or identifying pathology. However, this complexity can reach a point where neither the developers nor the operators understand the logic behind the production of the output. These black-box systems ingest data and output results without revealing their processes for doing so.
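As an illustration of the black-box problem, here is a toy contrast in Python, with made-up data and a made-up threshold, between a hand-written rule whose logic anyone can audit and a learned model that emits predictions without a human-readable rationale.

```python
# Toy contrast: a transparent rule vs. an opaque learned model.
# The feature, labels, and threshold are illustrative only.
from sklearn.ensemble import RandomForestClassifier

X = [[0.2], [0.4], [0.6], [0.8]]   # e.g., a normalized imaging feature
y = [0, 0, 1, 1]                   # 0 = benign, 1 = malignant (toy labels)

def rule_based(x):
    # Auditable: the decision logic is readable and can be challenged in court.
    return int(x[0] > 0.5)

model = RandomForestClassifier(n_estimators=100).fit(X, y)
print(model.predict([[0.55]]))     # an answer, but the "why" is distributed
                                   # across hundreds of trees: the black box
```

A plaintiff can cross-examine the rule; cross-examining the forest of trees is, practically speaking, impossible.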

This opaqueness is the first impediment to claiming compensation when problems ensue. The inability to reverse engineer the AI's decision makes it difficult for potential plaintiffs to identify the defect and determine where the fault originated. Moreover, producers have two defenses at their disposal that would preclude liability: the development-risk defense and compliance with regulations.

A plaintiff might then attempt to claim under medical malpractice (suing the physician). The degree of explainability (i.e. human interpretability) varies from one system to another and could be a central factor in delineating physician liability.

However, reliance on information whose provenance is unexplainable should not by itself be considered unethical or negligent. Durán and Jongsma argue that in the absence of transparency and explainability, trust can be satisfactorily founded on the epistemological understanding that the AI system will produce the right output most of the time, an idea they call computational reliabilism. Accordingly, a system audit that demonstrates consistent accuracy could counteract the patient's claim of physician negligence.
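A minimal sketch of the kind of audit computational reliabilism points to, with wholly illustrative data: a record of how often the system's outputs matched later-confirmed outcomes.

```python
# Toy audit log: (case id, model output, later-confirmed diagnosis).
# All entries are invented for illustration.
audit_set = [
    ("scan_001", "malignant", "malignant"),
    ("scan_002", "benign",    "benign"),
    ("scan_003", "benign",    "malignant"),   # one miss
    ("scan_004", "malignant", "malignant"),
]

correct = sum(1 for _, predicted, confirmed in audit_set
              if predicted == confirmed)
print(f"audit accuracy: {correct / len(audit_set):.0%}")   # 75% on this sample

# A documented record of consistently high accuracy is the epistemic
# warrant that could rebut a claim that relying on the system was negligent.
```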

A second factor that may impact liability is the degree of reliance demonstrated by the physician. In a hypothetical scenario where a physician trusts an erroneous AI decision which goes against the consensus of doctors, Tobia et al. have shown that jurors are more likely than not to consider that physician reasonable (not negligent).

This is important because it demonstrates that, except in cases where the AI is used incorrectly, the physician's reliance on the AI decision is safe from liability, even when that output goes against the established medical consensus and is ultimately proven wrong. A patient-plaintiff is therefore left in a very awkward position: harm has been caused, but neither physician nor producer can (or should) be blamed.

I propose that the way out of this legal black hole is not a strict liability model (which would automatically blame the physician for her reliance) but the creation of a federal fund, which would compensate damage produced by a medical device's AI component in scenarios where neither the producer nor the physician can predictably be held liable.

A strong policy basis already exists: Executive Order 13859 makes it clear that the government's policy is to "sustain and enhance the leadership position of the U.S. in AI."

Transferring this nascent technology's cost of liability to physicians and producers through insurance premiums would counteract these policy objectives. Instead, the cost should be budgeted within the government's ambitious spending plan, and patients should be provided with a safety net if public trust is to be maintained.

The potential for AI to revolutionize healthcare seems inestimable. As change shakes the foundations of centuries-old legal ideas, new legal solutions must keep pace.

-By Alex Tsalidis, summer intern in the Center for Medical Ethics and Health Policy at Baylor College of Medicine and a law student at the University of Cambridge

See more here:

AI will improve healthcare, but doctors and patients need legal safety net - Baylor College of Medicine News


China Isn’t the AI Juggernaut the West Fears – Bloomberg

Posted: at 9:05 pm

The opening scene of a brief online documentary by Chinese state-run media channel CGTN shows jaywalkers in Shenzhen getting captured on video, identified, and then shamed publicly in real time. The report is supposed to highlight the country's prowess in artificial intelligence, yet it reveals a lesser-known truth: China's AI isn't so much a tool of world domination as a narrowly deployed means of domestic control.

On paper, the U.S. and China appear neck and neck in artificial intelligence. China leads in the share of journal citations, helped by the fact that it also publishes more, while the U.S. is far ahead on the more qualitative metric of cited conference papers, according to a recent report compiled by Stanford University. So while the world's most populous country is an AI superpower, investors and China watchers shouldn't put too much stock in the notion that its position is unassailable or that the U.S. is weaker. By miscalculating each other's abilities, both superpowers risk overestimating their adversary's strengths and overcompensating in a way that could lead to a Cold War-style AI arms race.

Read more:

China Isn't the AI Juggernaut the West Fears - Bloomberg


AI is the Future And It’s Time We Start Embracing It – Datamation

Posted: October 5, 2021 at 4:25 am

Par Chadha is the founder, CEO, and CIO of Santa Monica, California-based HGM Fund, a family office. Chadha also serves as chairman of Irving, Texas-based Exela Technologies, a business process automation (BPA) company, and is the co-founder of Santa Monica, California-based Rule 14, a data-mining platform. He holds and manages investments in the evolving financial technology, health technology, and communications industries.

Intelligence evolution is nothing new. These days, it's just taking on a more electronic form. Some innovations seem to appear overnight, while others take decades to perfect. When it comes to the topic of artificial intelligence (AI), most people are probably content to take it slow, as the possibilities are exciting but admittedly a bit scary at times.

However, AI is going to continue to do worlds of good. Peaceful deployment of this technology saves lives, eases burdens, and takes on tasks that can be dangerous for humans, not to mention the loads of added convenience it brings. Doesn't everyone want a robot to bring them a beer during the game, so they don't have to miss the action? We already know the value of Siri and Alexa in keeping our lives organized, hands-free.

Star Trek first introduced us to the idea that a robot could be capable of performing a medical exam before the doctor comes in to see you. Robot-assisted surgery has already arrived and appears to be here to stay, making some procedures less invasive and less prone to error.

Robots that perform repetitive functions will continue to advance and become more regular fixtures in our work and home lives. Tasks like shopping, cleaning, and mail delivery are already becoming automated.

There's no question that AI is powerful. And when it's used for good, it's a beautiful tool. Unfortunately, it's very difficult to keep powerful things out of the hands of the bad guys. So some of these incredible tools, like exoskeletons for soldiers, will also make more formidable enemies.

The discovery of DNA a century ago was transformative to our understanding of human biology. It took us a hundred years to get to the point where we could edit DNA, but what's next? CRISPR has the potential to provide healing to millions of people, but the possibilities of DNA editing are about as vast as your imagination can go. Attack of the Clones no longer seems so far off.

The fears people experience about AI are significant: What if I lose my job? My livelihood? Is there a place for me in this future? AI is even beginning to upend the order in some families, because younger people working in knowledge-based jobs are already making more money than their parents did. So how do we adapt to and embrace this exciting yet possibly frightening future?

See more: Artificial Intelligence: Current and Future Trends

We have to stay flexible. With reskilling, all of us should be increasingly confident that AI may change our jobs but won't render us unemployable. I have had to reinvent myself each decade since 1977, sometimes more than once. But I've always found success, despite the challenges this brings, and the process has always been fulfilling.

Start with what is least offensive and difficult to acclimate to as you're making peace with the future. Rather than feeling overwhelmed by all the change, try creating smaller and more manageable goals when it comes to your technology adoption. Enlist the help of a younger person who may have an easier time adapting to these changes.

We will likely lose the satisfaction we get from mowing our own lawn and many other tasks in the near future. We will have to find peace, fulfillment, pride, and happiness through other activities. This isn't something to mourn. It's something to get creative about. Consider the possibilities rather than dwelling on fear of the future.

Time is not likely to begin marching in the opposite direction, and technology doesn't often work backward. We can choose to live in fear, or we can choose to embrace the future, counting our blessings for how these innovations will improve our lives and expand our horizons.

The worrisome aspect of AI is that if we can conceptualize it, we are likely to attempt it. We will need to continue to engage in conversations of ethics to ensure we stay focused on the right things: those that protect, aid, and bring value to human life.

Technology will only continue to evolve, and AI will be a part of everyone's daily lives even more so than it is now. The change is inevitable. However, as with all change, we must be prepared to adapt to it. While we need to be cautious of how we use AI, the fact is that it's a blessing, not a curse. Adapting to AI will be a lot less painful if we embrace it, ease into the new world it will bring, and understand that this technology will open more doors for humanity than it will close.

See more: Top Performing Artificial Intelligence Companies

Here is the original post:

AI is the Future And It's Time We Start Embracing It - Datamation


Quantum AI: Are We Ready? – Datamation

Posted: at 4:25 am

Miles Taylor, senior fellow at the R Street Institute, recently moderated a fascinating panel on quantum computing.

Several others were with him on the panel: Chris Fall, Ph.D., senior adviser, president's office, Center for Strategic and International Studies; Scott Friedman, senior policy adviser, House Homeland Security Committee; Allison Schwartz, global government relations and public affairs leader, D-Wave Systems; and Kate Weber, Ph.D., policy lead for quantum, robotics, and fundamental research, Google.

They spoke about the practical applications of quantum computing, how the U.S. was falling behind several countries on this technology, and why that could be a terrible thing.

Let's talk about quantum computing and how it will be a game-changer for everything from simulation and modeling to artificial intelligence (AI). I'm pretty sure we aren't ready for a quantum AI:

While we are still at least a decade from the point when quantum computers can even begin to reach their full potential, quantum emulators and current-generation quantum computers are beginning to do some real work.

We are learning that quantum computers are naturally better at emulating and interacting with reality, because nature doesn't consist of just 1s and 0s. The more complex a problem, the better a quantum computer can deal with it.
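A rough illustration of what "not just 1s and 0s" means, simulating a single qubit with numpy. The code follows the standard quantum formalism; the 50-qubit figure at the end is only there to show how quickly the state space grows.

```python
# A classical bit is either 0 or 1; a qubit carries a complex amplitude
# over both at once. Minimal simulation of one qubit in superposition.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
qubit = (ket0 + ket1) / np.sqrt(2)    # equal superposition of 0 and 1

probs = np.abs(qubit) ** 2            # Born rule: probability = |amplitude|^2
print(probs)                          # [0.5 0.5]: not just 1s and 0s

# n qubits span 2**n amplitudes simultaneously, which is why problems with
# huge state spaces map more naturally onto quantum hardware.
print(2 ** 50)                        # amplitudes in a 50-qubit register
```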

Initial applications are increasingly focused on logistics-type problems with many moving parts. For instance, the technology is being used heavily by foreign governments for logistics management, such as Australia's military using it to direct autonomous vehicles that don't put soldiers or contractors at risk. Current-generation computers were not powerful enough for the task, and since the military lives or dies on logistics, a far more effective tool powered by quantum computing could prove decisive.

Australia would thus have an increasing advantage over time if it came to open conflict, thanks to a quantum tool that aids in logistics. In addition, applied to emerging intelligent weapons like drone swarms, quantum computing should position the drones collectively for maximum impact more effectively. Together, these two implementations would give a military unit with this capability a significant advantage over those without it.

Other examples are emergency response modeling and execution. The quantum computer is first asked to create a plan for a major disaster. It then assists in getting the available resources to those who most need them and in mapping out and dynamically changing evacuation plans depending on current conditions.

See more: IBM Partnering with University of Tokyo on Quantum Computer

But things get interesting when we add quantum computing capability to an AI. Quantum computing can provide AIs with the ability to emulate emotions and act like they are feeling them. While this alone wouldn't represent sentience, it would be hard to tell the difference. And a quantum AI would be better able to respond to complex signals, like expressions, eye movement, and body language, that traditional computers find challenging.

An AI with a quantum back end could perform the role of therapist, say on a submarine or space exploration vessel that couldn't justify a human therapist. And it would be far less likely to be biased, assuming it was properly trained.

A quantum AI would have a significant advantage as a 100% audit function. It would look at every transaction and determine from the related metadata whether it was likely fraudulent or in violation of policy. Current human-driven audit organizations don't have the bandwidth for 100% audits and miss a lot of actual crimes because they have to operate from far smaller samples.

Another area where quantum AI would make a considerable difference is government, as it could almost immediately identify graft and bribery. The relevant AI would be able to distribute limited day-to-day resources more effectively, especially during a catastrophe, and assess liability for complex decisions far more accurately.

See more: IBM and the Promise of Quantum Hybrid Deep Learning AI

While quantum computing is still far from its potential, with only a tiny fraction of the number of qubits needed to demonstrate that potential, it is already showing viability in several areas.

Those areas include the military (in logistics and weapons), smart cities, government, emergency response, and modeling and simulation, where complex problems are beyond conventional computers' capabilities.

These capabilities will be a competitive game-changer for the armed forces, governments, and companies that effectively use this technology first. In any highly complex market, like stock trading, the advantage will be so significant that those without access to this technology could quickly be eclipsed and made redundant by those that do. Fortunately, there is still time, but that time is running out. Should a critical mass of governments, companies, or individuals get access to this technology, they'll have massive advantages over those that don't.

We are not ready for quantum AI, and we are running out of time.

See more: Top Performing Artificial Intelligence Companies

See original here:

Quantum AI: Are We Ready? - Datamation


A passion for AI in healthcare – Healthcare IT News

Posted: at 4:25 am

Artificial intelligence (AI) will play a major role in the future of healthcare, according to Dr Ngiam.

"Being able to support clinical decisions based on highly accurate predictions of patients' outcomes would be a game changer. If this is done at population scale, I think that will change the way we practice medicine as we know it,"he says.

Over the years, Dr Ngiam has received numerous awards for his work in research and education, including the ExxonMobil-NUS research fellowship for clinicians in 2007.

Innovation in healthcare

At NUHS, he is responsible for overseeing technology deployment in the western healthcare cluster and serves as chief advisor to the NUHS Centre for Innovation in Healthcare.

"Being able to launch some of these platforms and realise the potential of the AI tools in clinical practice is probably one of the key highlights of my career,"Dr Ngiam says. "Specifically in the last four years, we've managed to build, launch and operate Discovery AI, which is a platform that allows our clinicians and researchers to use the data we have to develop AI tools."

"If you look at the world of healthcare AI, there are many publications and papers but very few actual clinical use cases,"Dr Ngiam explains. "What we want to do in the next few years is deploy and scale AI tools that we've been doing so much research on in clinical practice. And to that end, we have built a platform called Endeavour AI, which we'll be launching soon."

One common concern is that AI could erode the relationship between patients and clinicians, but Dr Ngiam refutes this.

"These AI tools do not replace the clinicians in any way. They support the clinicians in performing the sort of repetitive menial tasks that should be done by machines,"he says. "Ethical and legal considerations in the way doctors use AI decision support need to be addressed through rigorous validation of AI tools, as well as appropriate training of clinicians in their use."

Working together

Dr Ngiam is also an associate professor in the department of surgery at the Yong Loo Lin School of Medicine, where he researches AI applications in healthcare as well as endocrine and metabolic surgery.

He is a strong advocate of interdisciplinary collaboration between the schools of medicine, engineering and computer science.

"Clinicians don't have the technical ability to build an AI tool ground up from scratch by themselves, so typically we need to collaborate with computer science or data scientists,"he says. "This is a prime example of how interdisciplinary collaboration can result in a product that is applicable to the healthcare system, as well as generate new knowledge in computer science and data science."

Dr Ngiam is speaking at the HIMSS21 APAC Conference during the keynote session, Getting Personal with Emerging Tech. This fully digital event will take place on 18 & 19 October and is free for all healthcare providers. Register here.

Follow this link:

A passion for AI in healthcare - Healthcare IT News


Artificial intelligence is smart, but does it play well with others? – MIT News

Posted: at 4:25 am

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These "superhuman" AIs are unmatched competitors, but perhaps harder for an AI than competing against humans is collaborating with them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

"It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those."

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges, like defending against missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take; instead, it discovers which actions yield the most numerical "reward" by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren't programmed to follow "if/then" statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code.
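A minimal tabular Q-learning sketch makes this concrete: the agent below is never told that "step right" is correct; it discovers that policy purely from numerical reward. The environment (a five-state corridor) and all parameters are toy choices for illustration.

```python
# Minimal tabular Q-learning: the agent learns from reward alone.
import random

N_STATES = 5
ACTIONS = [-1, +1]                       # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3        # learning rate, discount, exploration

for _ in range(500):                     # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what's been learned, sometimes explore
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0   # the only feedback
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy: "step right" in every state, discovered
# without any hand-coded if/then rules.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

The same loop, scaled up with neural networks in place of the table, is the family of techniques behind the superhuman game players described above.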

"Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data Allen says. "The sky's the limit in what it could, in theory, do."

Bad hints, bad plays

Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.
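The game's defining constraint, that each player sees every hand except their own, is what forces information to flow through limited hints. Here is a toy Python sketch of that information structure only, not of the agents used in the study.

```python
# Toy model of Hanabi's information structure: a player observes teammates'
# cards but never their own. Card counts follow the standard Hanabi deck.
import random

COLORS = "RGBYW"
RANKS = [1, 1, 1, 2, 2, 3, 3, 4, 4, 5]            # copies per color
deck = [(c, r) for c in COLORS for r in RANKS]    # 50 cards total
random.shuffle(deck)

hands = {p: [deck.pop() for _ in range(5)] for p in ("alice", "bob")}

def observation(player):
    """What `player` sees: everyone's hand except their own."""
    return {p: (h if p != player else ["??"] * len(h))
            for p, h in hands.items()}

print(observation("alice"))   # bob's cards visible, alice's own hidden
```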

The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

"That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well."

Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games.

"One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays."

Inhuman creativity

This perception of AI making "bad plays" links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves made by AlphaGo was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and was described as "genius."

Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious.

"There was a lot of commentary about giving up, comments like 'I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

"Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds.

Squishy humans

The researchers note that the AI used in this study wasn't developed for human preference. But that's part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

If researchers don't focus on the question of subjective human preference, "then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences."

Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded by Lincoln Laboratory's Technology Office in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality.

The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

"You can imagine we rerun the experiment, but after the fact and this is much easier said than done the human could ask, 'Why did you do that move, I didn't understand it?" If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says.

Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

"Maybe it's also a staffing bias. Most AI teams dont have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough."

Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

See the article here:

Artificial intelligence is smart, but does it play well with others? - MIT News


Clearview AI Says It Can Do the Computer Enhance Thing – Gizmodo

Posted: at 4:25 am

A security camera in the Port Authority Trans-Hudson (PATH) station at the World Trade Center in New York in 2007, used here as a stock photo. Photo: Mario Tama (Getty Images)

Sketchy face recognition company Clearview AI has inflated its stockpile of scraped images to over 10 billion, according to its co-founder and CEO Hoan Ton-That. What's more, he says the company has new tricks up its sleeve, like using AI to draw in the details of blurry or partial images of faces.

Clearview AI has reportedly landed contracts with over 3,000 police and government customers, including 11 federal agencies, which it says use the technology to identify suspects when it might otherwise be impossible. In April, a BuzzFeed report citing a confidential source identified over 1,800 public agencies that had tested or were currently using its products, including everything from police and district attorneys' offices to Immigration and Customs Enforcement and the U.S. Air Force. It has also reportedly worked with dozens of private companies, including Walmart, Best Buy, Albertsons, Rite Aid, Macy's, Kohl's, AT&T, Verizon, T-Mobile, and the NBA.

Clearview has landed such deals despite facing considerable legal trouble over its unauthorized acquisition of those billions of photos, including state and federal lawsuits claiming violations of biometrics privacy laws, a consumer protection suit brought by the state of Vermont, the company's forced exit from Canada, and complaints to privacy regulators in at least five other countries. There have also been reports detailing Ton-That's historic ties to far-right extremists (which he denies) and pushback against the use of face recognition by police in general, which has led to bans on such use in over a dozen U.S. cities.

In an interview with Wired on Monday, Ton-That claimed that Clearview has now scraped over 10 billion images from the open web for use in its face recognition database. According to the CEO, the company is also rolling out a number of machine learning features, including one that uses AI to reconstruct faces that are obscured by masks.

Specifically, Ton-That told Wired that Clearview is working on "deblur" and "mask removal" tools. The first feature should be familiar to anyone who's ever used an AI-powered image upscaling tool: it takes a lower-quality image and uses machine learning to add extra details. The mask removal feature uses statistical patterns found in other images to guess what a person might look like under a mask. In both cases, Clearview would essentially be offering informed guesswork. I mean, what could go wrong?


As Wired noted, quite a lot. There's a very real difference between using AI to upscale Mario's face in Super Mario 64 and using it to just sort of suggest to cops what a suspect's face might look like. For example, existing face recognition tools have repeatedly been assessed as riddled with racial, gender, and other biases, and police have reported extremely high failure rates in their use in criminal investigations. That's before adding in the element of the software not even knowing what a face really looks like; it's hard not to imagine such a feature being used as a pretext by cops to fast-track investigative leads.
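The gap between classical upscaling and learned "enhancement" is worth spelling out. In the sketch below, the filename and sizes are placeholders; the interpolation step shown adds no new information, whereas a deblur tool would swap it for a neural network trained to predict plausible detail.

```python
# Classical upscaling vs. learned "enhancement".
# "face.jpg" and the pixel sizes are illustrative placeholders.
from PIL import Image

low_res = Image.open("face.jpg").resize((32, 32))    # degraded input

# Baseline: bicubic interpolation only smooths existing pixels; it cannot
# recover detail that was never captured.
bicubic = low_res.resize((256, 256), Image.BICUBIC)
bicubic.save("face_bicubic.jpg")

# A "deblur" tool would replace the resize above with a neural network that
# hallucinates plausible high-frequency detail from patterns in its training
# faces. Any identifying features it draws in are inferred, not observed,
# which is exactly why the output is informed guesswork rather than evidence.
```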

"I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process, I would expect a plethora of unintended bias to creep in," MIT professor Aleksander Madry told Wired. Even if it did work, Madry added, "Think of people who masked themselves to take part in a peaceful protest or were blurred to protect their privacy."

Clearview's argument goes a little something like this: we're just out here building tools, and it's up to the cops to decide how to use them. For example, Ton-That assured Wired that all of this is fine because the software can't actually go out there and arrest anyone by itself.

"Any enhanced images should be noted as such, and extra care taken when evaluating results that may result from an enhanced image," Ton-That told the magazine. "... My intention with this technology is always to have it under human control. When AI gets it wrong it is checked by a person." After all, it's not like police have a long and storied history of using junk science to justify misconduct or prop up arrests based on flimsy evidence and casework, which often goes unquestioned by courts.

Ton-That is, of course, not so naive as to think that police won't use these kinds of capabilities for purposes like profiling or padding out evidence. Again, Clearview's backstory is full of unsettling ties to right-wing extremists, like the reactionary troll and accused Holocaust denier Chuck C. Johnson, and Ton-That's track record is full of incidents where it looks an awful lot like he's exaggerating capabilities or deliberately stoking controversy as a marketing tool. Clearview itself is fully aware of the possibilities for questionable use by police, which is why the company's marketing once advertised that cops could "run wild" with its tools, and why the company later claimed to be building accountability and anti-abuse features after getting its hooks into our justice system.

The co-founder added in his interview with Wired that he is "not a political person at all," and that Clearview is not political either. "There's no left-wing way or right-wing way to catch a criminal," Ton-That added. "And we engage with people from all sides of the political aisle."

Read more from the original source:

Clearview AI Says It Can Do the Computer Enhance Thing - Gizmodo


AlphaFold Is The Most Important Achievement In AI, Ever – Forbes

Posted: at 4:25 am

DeepMind's AlphaFold represents the first time a significant scientific problem has been solved by AI.

It can be difficult to distinguish between substance and hype in the field of artificial intelligence. In order to stay grounded, it is important to step back from time to time and ask a simple question: what has AI actually accomplished or enabled that makes a difference in the real world?

This summer, DeepMind delivered the strongest answer yet to that question in the decades-long history of AI research: AlphaFold, a software platform that will revolutionize our understanding of biology.

In 1972, in his acceptance speech for the Nobel Prize in Chemistry, Christian Anfinsen made a historic prediction: it should in principle be possible to determine a protein's three-dimensional shape based solely on the one-dimensional string of molecules that comprises it.

Finding a solution to this puzzle, known as the protein folding problem, has stood as a grand challenge in the field of biology for half a century. It has stumped generations of scientists. One commentator in 2007 described it as "one of the most important yet unsolved issues of modern science."

AI just solved it.

(While use of the word "solved" has generated some disagreement in the community, sometimes devolving into semantics, most experts closest to the topic agree that AlphaFold can indeed be considered a solution to the protein folding problem.)

Why does the protein folding problem matter? And why has it been so hard to solve?

Proteins are at the center of life itself. As the prominent biologist Arthur Lesk put it, "In the drama of life at a molecular scale, proteins are where the action is."

Proteins are involved in basically every important activity that happens inside every living thing, yourself included: digesting food, enabling muscles to contract, moving oxygen throughout the body, attacking foreign viruses and bacteria. Your hormones are made out of proteins; so is your hair.

To put it simply, proteins are so important because they are so versatile. Proteins are able to undertake a vast array of different structures and functions, far more than other types of biomolecules (e.g., lipids or carbohydrates). This incredible versatility is a direct consequence of how proteins are built.

Every protein is composed of a string of building blocks known as amino acids, linked together in a particular order. There are 20 different types of amino acid. In one sense, then, protein structure is elegantly simple: each protein is defined by its one-dimensional sequence of amino acids, with 20 different amino acids to choose from at each position. Proteins can range from a few dozen to several thousand amino acids in length.

But proteins do not stay one-dimensional. In order to become functional, they first fold into complex three-dimensional shapes.

A protein's shape relates closely to its function. To take one example, antibody proteins fold into shapes that enable them to precisely identify and target particular foreign bodies, like a key fitting into a lock. Understanding the shape that proteins will fold into is thus essential to understanding how organisms function, and ultimately how life itself works.

Here's the challenge: the number of different configurations that a protein might fold into based on its amino acid sequence is astronomical. Per Levinthal's paradox, any given protein can theoretically adopt something like 10^300 different configurations. To frame that figure more vividly: even if a protein attempted millions of configurations per second, it would take longer than the age of the universe to cycle through every configuration available to it. Yet somehow, out of all of these possible configurations, each protein spontaneously folds into one particular shape and carries out its biological purpose accordingly.
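The arithmetic behind that claim is easy to check directly from the figures above:

```python
# Back-of-envelope check on Levinthal's paradox, using the figures above.
CONFIGURATIONS = 10 ** 300        # plausible folds for one protein
RATE = 10 ** 6                    # configurations sampled per second
AGE_OF_UNIVERSE_S = 4.35e17       # ~13.8 billion years, in seconds

seconds_needed = CONFIGURATIONS / RATE        # 1e294 seconds
print(seconds_needed / AGE_OF_UNIVERSE_S)     # ~2e276 ages of the universe
```

Exhaustive search is hopeless by hundreds of orders of magnitude, which is why folding must be predicted rather than enumerated.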

Thus, knowing how proteins fold is both ludicrously difficult and absolutely essential to understanding biological processes. Little wonder that the protein folding problem has been something of a holy grail in the field of biology for decades. Enter AlphaFold.

AlphaFold's coming-out party was the Critical Assessment of Protein Structure Prediction (CASP) competition in November 2020. Held every other year, CASP is the most important event in this field: in effect, the Olympics of protein folding.

The competition's format is simple. Contestants are given the one-dimensional amino acid sequences for roughly 100 proteins whose three-dimensional structures have been experimentally determined but are not yet publicly available. Using the amino acid sequences as inputs, the competitors generate predictions for the proteins' structures, which are then compared against the ground-truth structures to determine their accuracy.
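
CASP scores entries with specialized metrics (notably GDT), but the core idea of comparing predicted coordinates against experimental ones can be illustrated with a simple root-mean-square deviation (RMSD). A minimal sketch, assuming the two structures are already superimposed:

```python
import numpy as np

def rmsd(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Root-mean-square deviation between two (N, 3) coordinate arrays.

    Assumes the structures are already optimally superimposed; real
    pipelines align them first (e.g., with the Kabsch algorithm).
    """
    diff = predicted - actual
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Toy example: three atoms, each displaced by 1 angstrom along x.
pred = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [3.0, 0.0, 1.0]])
true = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 1.0]])
print(rmsd(pred, true))  # 1.0 angstrom -- roughly the width of an atom
```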

AlphaFold's performance at last year's CASP was historic, far eclipsing every other method humans have ever devised for the protein folding problem. On average, DeepMind's AI system successfully predicted proteins' three-dimensional shapes to within the width of about one atom. The CASP organizers themselves declared that the protein folding problem had been solved.

Before AlphaFold, we knew the 3-D structures for only about 17% of the roughly 20,000 proteins in the human body. Those protein structures that we did know had been painstakingly worked out in the laboratory over the decades through tedious experimental methods like X-ray crystallography and nuclear magnetic resonance, which require multi-million-dollar equipment and months or even years of trial and error.

Suddenly, thanks to AlphaFold, we now have 3-D structures for virtually all (98.5%) of the human proteome. Of these, 36% are predicted with very high accuracy and another 22% are predicted with high accuracy.
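
The accuracy tiers come from AlphaFold's own per-residue confidence score, pLDDT (0-100), which the released model files store in the B-factor column of each PDB entry; scores above 90 are conventionally treated as very high confidence. A minimal sketch of filtering residues by confidence, assuming a locally downloaded AlphaFold-style PDB file:

```python
def high_confidence_residues(pdb_path: str, threshold: float = 90.0) -> list[int]:
    """Return residue numbers whose pLDDT score (stored in the
    B-factor column of AlphaFold PDB files) meets the threshold."""
    residues = set()
    with open(pdb_path) as f:
        for line in f:
            if line.startswith("ATOM"):
                plddt = float(line[60:66])   # B-factor column holds pLDDT
                res_num = int(line[22:26])   # residue sequence number
                if plddt >= threshold:
                    residues.add(res_num)
    return sorted(residues)

# Hypothetical local file downloaded from the AlphaFold database:
# print(high_confidence_residues("AF-P69905-F1-model_v1.pdb"))
```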

At last year's CASP, AlphaFold accurately predicted the shapes of proteins to within the width of about one atom. Two protein examples are shown here, with AlphaFold's prediction (blue) overlaid on the protein's actual structure (green).

CASP co-founder and long-time protein folding expert John Moult put the AlphaFold achievement in historical context: "This is the first time a serious scientific problem has been solved by AI."

Evolutionary biologist Andrei Lupas was even more effusive: "This will change medicine. It will change research. It will change bioengineering. It will change everything." AlphaFold has already enabled Lupas's lab to determine the structure of a protein that had eluded it for a decade.

How did DeepMind achieve this historic feat?

As with any machine learning effort, the answer starts with training data. AlphaFold was trained on a few different publicly available datasets; helpfully, much important data in this field is open-source. The Protein Data Bank (PDB) is a database containing the three-dimensional structures and associated amino acid sequences for virtually all proteins whose structures have been determined by mankind: around 180,000 in total, spanning human and non-human proteins. Another database, UniProt, contains the amino acid sequences (without structures) for nearly two hundred million more proteins.

The AlphaFold AI model is built with transformers, the same cutting-edge neural network architecture that powers well-known language models like GPT-3 and BERT. Transformers have taken the world of machine learning by storm since being introduced by Google Brain researchers in a seminal 2017 paper. The AlphaFold team created a new type of transformer designed specifically to work with three-dimensional structures, which they call Invariant Point Attention (IPA).
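
IPA itself is specialized for reasoning over 3-D geometry, but the attention mechanism at the heart of every transformer is compact enough to sketch. A minimal, illustrative scaled dot-product attention in NumPy (this is the generic transformer operation, not AlphaFold's actual implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each query attends over all keys
    and returns a weighted combination of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)    # (4, 8)
```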

Compared to previous efforts to solve protein folding computationally, one noteworthy characteristic of AlphaFold's design is how massively recursive and iterative it is. The model is architected to maximize information flow at every step; hypotheses pass back and forth prolifically among AlphaFold's many components, enabling the overall system to develop an increasingly accurate prediction of a protein's structure.
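
One concrete form of this iteration is what the AlphaFold team calls recycling: the outputs of one pass through the network are fed back in as inputs to the next. A schematic sketch in Python; embed_inputs and folding_model are hypothetical placeholders standing in for the real (much deeper) machinery, not DeepMind's API:

```python
def embed_inputs(sequence):
    """Placeholder: the real system builds rich sequence and pair features."""
    return [0.0] * len(sequence)

def folding_model(representation, structure):
    """Placeholder: the real model is a deep attention-based network."""
    refined = [x + 1.0 for x in representation]   # stand-in "refinement"
    return refined, {"coords": refined}

def predict_structure(sequence, num_recycles=3):
    """Schematic of recycling: each pass conditions on the previous
    pass's outputs, letting early hypotheses sharpen later ones."""
    representation = embed_inputs(sequence)
    structure = None
    for _ in range(num_recycles):
        representation, structure = folding_model(representation, structure)
    return structure

print(predict_structure("MKTAYIAK"))
```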

A comprehensive overview of AlphaFold's technical details can be found in DeepMind's two recently published Nature articles. One more big-picture observation is worth noting here: while DeepMind does have access to far greater computing resources than the typical academic lab, AlphaFold does not merely represent a triumph of brute-force computational power. The amount of compute required to train AlphaFold was in fact modest relative to other high-profile AI models. Building AlphaFold required brilliant software engineering and several significant machine learning innovations. DeepMind is the world's most advanced AI research group, and it showed.

In the words of Columbia University's Mohammed AlQuraishi: "AlphaFold is both a tour de force of technical innovation and a beautifully designed learning machine, easily containing the equivalent of six or seven solid ML papers but somehow functioning as a single force of nature."

With all this said, it is important to note that AlphaFold has meaningful limitations.

Its predictions are not always as accurate as more traditional experimental methods. It predicts one stable conformation per protein, but proteins are dynamic and may change shape as they move through the body. Edge cases, like intrinsically disordered proteins and unnatural amino acids, can trip AlphaFold up.

AlphaFold generates predictions about individual protein structures, but it sheds little light on multiprotein complexes, protein-DNA interactions, protein-small molecule interactions, and the like: dynamics that are essential to understand for many biomedical use cases. And because (like any AI system) AlphaFold has learned to make predictions based on its training data, it may struggle to accurately predict the shapes of unusual new proteins, including de novo protein designs not found in nature.

Yet there is a broader point to keep in mind here: when the challenges that today remain beyond AlphaFold's grasp get solved, those solutions will themselves almost certainly be powered by deep learning. And the pace of progress will be relentless. For instance, new research out of George Church's lab at Harvard has already improved on the AlphaFold work in important ways. Meanwhile, DeepMind has indicated that it plans to tackle protein complexes next.

The AI genie is out of the bottle; structural biology (and the life sciences more broadly) will never be the same. AlphaFold is just the beginning.

The most important part of the AlphaFold story happened this summer. In July 2021, in a move whose effects will be felt for years to come, DeepMind open-sourced AlphaFold and its associated protein structures.

The London-based AI lab made AlphaFold's source code freely available online and published two peer-reviewed articles in Nature detailing its research methodology. Even more important, it launched an online database containing the three-dimensional structures for over 350,000 proteins: again, completely open-source and freely available. This includes structures for nearly every protein in the human body, as well as for 20 other scientifically relevant species like the fruit fly and the mouse.
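
Because the database is served over plain HTTPS, fetching a predicted structure takes a few lines of code. A sketch, assuming the public file-naming pattern the database used at launch (AF-&lt;UniProt accession&gt;-F1-model_v1.pdb); the accession P69905 is human hemoglobin subunit alpha:

```python
import urllib.request

# File-naming pattern used by the AlphaFold database at launch; treat
# the exact version suffix as an assumption that may change over time.
ACCESSION = "P69905"  # UniProt accession for human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/files/AF-{ACCESSION}-F1-model_v1.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

print(pdb_text.splitlines()[0])  # header line of the predicted structure
```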

To put this in perspective, before AlphaFold, humanity had collectively figured out the three-dimensional structures for roughly 180,000 proteins.

And this is just the beginning. DeepMind says that it plans to release structures for over one hundred million more proteins in the coming months: that is, nearly every protein whose genetic sequence is known to science.

In the words of EMBL-EBI Director Ewan Birney: "This will be one of the most important datasets since the mapping of the Human Genome."

The implications of DeepMind's decision to open-source AlphaFold are hard to overstate.

As the history of technology makes clear, nothing beats open, permissionless innovation. From the distributed creativity unleashed by the Internet twenty-five years ago to the success and ubiquity of open-source platforms like Kubernetes and Linux, true Cambrian explosions of technology development happen when everyone, not just small closed groups, can freely engage and contribute.

As legendary Sun Microsystems cofounder Bill Joy put it: "No matter who you are, most of the smartest people work for someone else."

With AlphaFold open-source, an entire ecosystem of biotechnology research and startups will spring up around it in the years ahead. No one can anticipate the many different directions that innovation will flow once millions of high-quality protein structures are freely available at the click of a button. Reflecting the importance and diversity of proteins themselves, the possibilities are limitless.

Early use cases hint at the potential.

Researchers at UCSF have used AlphaFold to uncover previously unknown details about a key SARS-CoV-2 protein, which will advance the development of COVID-19 therapeutics. Using AlphaFold, a team at the University of Colorado Boulder was able to pinpoint a particularly tricky bacterial protein structure, a discovery that will aid their efforts to combat antibiotic resistance, a looming public health crisis. The Boulder team had spent years unsuccessfully trying to determine this protein's structure; with AlphaFold, they learned it in 15 minutes.

When it comes to commercial opportunity and startup activity, two applications of AlphaFold attract particular attention: drug discovery and protein design.

In a nutshell, designing a new drug entails identifying a compound in the body, most often a protein, that you want to target, and then finding a molecule (the drug) that will successfully bind to that target, producing some beneficial health outcome. Knowing the three-dimensional shape of a prospective protein target is essential to this process because a protein's shape determines which other molecules will bind to it, and how. AlphaFold makes available a vast new set of drug target candidates to explore.

One area of drug discovery for which AlphaFold holds particular promise is neglected diseases. Neglected diseases are those for which little research funding is directed to develop treatments, often because the disease affects very few people or because the populations that it affects are low-income and thus represent a less compelling market opportunity.

AlphaFold helps level the playing field in the search for therapeutics for these disease states by, for the first time, making relevant protein structures instantly available without the need for costly laboratory work. DeepMind has already announced a partnership with the nonprofit Drugs for Neglected Diseases initiative (DNDi) to tackle deadly neglected diseases like Chagas disease and Leishmaniasis.

But it is important to keep expectations tempered. AlphaFold will not transform drug discovery overnight, for a few reasons.

AlphaFold's structures are not always accurate and granular enough for drug discovery purposes, particularly when it comes to a protein's active binding sites. In addition, the fact that AlphaFold predicts structures for individual proteins in isolation is a major limitation: what is most essential to understand for purposes of drug development is the structure of protein-drug interactions. Building a computational system to predict protein-drug structures is an even more daunting puzzle than individual protein folding (due above all to training data requirements); it remains out of reach today.

And ultimately, target discovery is just the first step in the very long and expensive process of creating a new drug. While AlphaFold may help accelerate this initial phase, it can do little about the many less tractable downstream bottlenecks (namely, human clinical trials) in the years-long journey to bring a new drug to market.

Protein researchers and entrepreneurs seem even more excited about AlphaFold's potential to boost the burgeoning field of de novo protein design.

The basic insight motivating protein design is that there is a vast universe of proteins that could theoretically be constructed, of which only an infinitesimal fraction have actually ended up in the world as a result of natural evolution. By exploring this uncharted universe of possible structures not found in nature, researchers seek to bring novel proteins into the world that are tailor-made for particular applications, from fighting disease to slowing climate change.

AlphaFold may prove to be a powerful tool in these efforts. For instance, verifying that a particular de novo protein candidate will actually fold up in a structurally viable way is a major gating step; before AlphaFold, this was costly and time-consuming and therefore could only be done for a small handful of candidates. With AlphaFold, it is trivial to map an amino acid sequence to a hypothesized three-dimensional structure, enabling much more rapid experimentation.
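
In practice, this turns structure prediction into a cheap filter inside a design loop: generate candidate sequences, predict each one's structure, and keep only the confident folds. A schematic sketch; predict_confidence is a hypothetical stand-in for a call into AlphaFold that returns its mean pLDDT confidence:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_candidate(length: int) -> str:
    """Naive candidate generator; real design methods are far smarter."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def predict_confidence(sequence: str) -> float:
    """Hypothetical stand-in for running AlphaFold and reading out its
    mean pLDDT; returns a random score here so the sketch runs."""
    return random.uniform(0, 100)

# Keep only candidates the predictor believes will fold confidently.
candidates = [random_candidate(80) for _ in range(1000)]
viable = [s for s in candidates if predict_confidence(s) >= 90.0]
print(f"{len(viable)} of {len(candidates)} candidates pass the fold filter")
```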

DeepMind and the University of Portsmouth in the U.K. recently announced a partnership to use AlphaFold to help design new types of proteins that more efficiently break down plastic waste, in order to combat pollution.

"Structure-informed enzyme engineering is a core aspect of our work, but obtaining those structures has, until now, always been the bottleneck in our research pipeline," said University of Portsmouth professor Andy Pickford. "Being given access to AlphaFold has transformed our research strategy."

AlphaFold is a scientific achievement of the first order. It represents the first time that AI has significantly advanced the frontiers of humanity's scientific knowledge. Credible industry observers have speculated that it might one day win the researchers at DeepMind a Nobel Prize.

"This is a history book moment," said protein folding researcher Carlos Outeiral.

At the same time, AlphaFold is no silver bullet for real-world challenges like drug discovery. Figuring out the most viable and impactful ways to translate AlphaFolds fundamental insights into products that create value in the real world will entail years of hard work from researchers and entrepreneurs. But make no mistake: the long-term impact will be transformative.

The European Molecular Biology Laboratory (EMBL), the non-profit research organization in charge of stewarding AlphaFold, summed it up well: "AlphaFold will provide new insights and understanding of fundamental processes related to health and disease, with applications in biotechnology, medicine, agriculture, food science and bioengineering. It will probably take one or two decades until the full impact of this development can be properly assessed."

Go here to read the rest:

AlphaFold Is The Most Important Achievement In AI, Ever - Forbes

Clearview AI Has New Tools to Identify You in Photos – WIRED

Posted: at 4:25 am

Clearview AI has stoked controversy by scraping the web for photos and applying facial recognition to give police and others an unprecedented ability to peer into our lives. Now the company's CEO wants to use artificial intelligence to make Clearview's surveillance tool even more powerful.

It may make it more dangerous and error-prone as well.

Clearview has collected billions of photos from across websites that include Facebook, Instagram, and Twitter and uses AI to identify a particular person in images. Police and government agents have used the company's face database to help identify suspects in photos by tying them to online profiles.

The company's cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web, more than three times as many as had previously been reported.

Ton-That says the larger pool of photos means users, most often law enforcement, are more likely to find a match when searching for someone. He also claims the larger data set makes the companys tool more accurate.

Clearview combined web-crawling techniques, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool.
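
Under the hood, systems like this typically reduce each face to a numeric embedding vector and search for nearest neighbors; the larger the indexed photo pool, the more likely a close match exists. A minimal, illustrative cosine-similarity search in NumPy (a generic sketch of the technique, not Clearview's actual pipeline):

```python
import numpy as np

def best_match(query: np.ndarray, database: np.ndarray) -> tuple[int, float]:
    """Return the index and cosine similarity of the closest embedding."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q
    idx = int(np.argmax(sims))
    return idx, float(sims[idx])

# Toy data: 10,000 random 128-dim "face embeddings" plus a noisy query.
rng = np.random.default_rng(1)
db = rng.normal(size=(10_000, 128))
query = db[42] + rng.normal(scale=0.1, size=128)  # perturbed copy of entry 42
print(best_match(query, db))                      # (42, ~0.99)
```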

Ton-That demonstrated the technology through a smartphone app by taking a photo of the reporter. The app produced dozens of images from numerous US and international websites, each showing the correct person in images captured over more than a decade. The allure of such a tool is obvious, but so is the potential for it to be misused.

Clearview's actions sparked public outrage and a broader debate over expectations of privacy in an era of smartphones, social media, and AI. Critics say the company is eroding personal privacy. The ACLU sued Clearview in Illinois under a law that restricts the collection of biometric information; the company also faces class action lawsuits in New York and California. Facebook and Twitter have demanded that Clearview stop scraping their sites.

The pushback has not deterred Ton-That. He says he believes most people accept or support the idea of using facial recognition to solve crimes. "The people who are worried about it, they are very vocal, and that's a good thing, because I think over time we can address more and more of their concerns," he says.

Some of Clearview's new technologies may spark further debate. Ton-That says it is developing new ways for police to find a person, including "deblur" and "mask removal" tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to reconstruct the covered part of a person's face using machine learning models that fill in missing details of an image with a best guess based on statistical patterns found in other images.
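
The general idea behind such tools is image restoration: treat the blurred or masked region as missing data and fill it with plausible pixels. Clearview's models are not public, but classical inpainting illustrates the concept of reconstructing a region from its surroundings; a minimal sketch using OpenCV:

```python
import cv2
import numpy as np

# Toy image: a horizontal gradient with a square region "masked out".
image = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:150, 100:150] = 255        # nonzero marks pixels to reconstruct
image[mask == 255] = 0

# Classical inpainting fills the hole from surrounding pixels. Learned
# "mask removal" models instead hallucinate details from training data,
# which is exactly where spurious or biased guesses can creep in.
restored = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
print(restored[125, 125])  # a plausible, but invented, pixel value
```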

These capabilities could make Clearview's technology more attractive but also more problematic. It remains unclear how accurately the new techniques work, but experts say they could increase the risk that a person is wrongly identified and could exacerbate biases inherent to the system.

"I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process I would expect a plethora of unintended bias to creep in," says Aleksander Madry, a professor at MIT who specializes in machine learning. Without due care, for example, the approach might make people with certain features more likely to be wrongly identified.

Even if the technology works as promised, Madry says, the ethics of unmasking people is problematic. "Think of people who masked themselves to take part in a peaceful protest or were blurred to protect their privacy," he says.

Ton-That says tests have found the new tools improve the accuracy of Clearview's results. "Any enhanced images should be noted as such, and extra care taken when evaluating results that may result from an enhanced image," he says.

Read the original here:

Clearview AI Has New Tools to Identify You in Photos - WIRED
