
Category Archives: Artificial Intelligence

How Artificial Intelligence is Accelerating Innovation in Healthcare – Goldman Sachs

Posted: April 27, 2023 at 2:54 pm

Healthcare, one of the largest sectors of the U.S. economy, is among the many industries with significant opportunities for the use of artificial intelligence (AI) and machine learning (ML), says Salveen Richter, lead analyst for the U.S. biotechnology sector at Goldman Sachs Research.

"We are in an exciting period when we are seeing the convergence of technology and healthcare, two key economic sectors, and we have to assume it will result in significant innovation," she says. We spoke with Richter, one of the authors of our in-depth Byte-ology report, which includes contributions from Goldman Sachs healthcare and technology research teams, about the integration of AI/ML into healthcare, the most promising applications for this technology and the landscape for venture capital funding in the field of byte-ology.

Why is healthcare ripe for disruption?

We see the combination of healthcare's vast, multi-modal datasets and AI/ML's competitive advantages in efficiency, personalization and effectiveness as poised to drive an innovative wave across healthcare.

From a data standpoint, the healthcare industry produces and relies upon massive amounts of data from diverse sources. That creates a rich environment for applying AI and ML. The need for these technologies is clear given the inefficiencies in the healthcare system. It is estimated that it takes more than eight years and $2 billion to develop a drug, and the likelihood of failure is quite high, with only one in ten candidates expected to gain regulatory approval. AI, including generative AI, is among the technologies that have the potential to create safer, more efficacious drugs and to streamline personalized care.

The bottom line is we are in an exciting period when we are seeing the convergence of technology and healthcare, two key economic sectors, and we have to assume that out of this will come a wave of innovation.

What changes has AI already brought to the healthcare industry?

Some of the earliest uses of AI in healthcare were in diagnostics and devices, including areas such as radiology, pathology and patient monitoring. In 1995, the PAPNET Testing System, a computer-assisted cervical smear rescreening device, became the first FDA-authorized AI/ML-enabled medical device. In the 2000s, other authorizations involved digital image capture, analysis of cells, bedside monitoring of vital signs, and predictive warnings for incidents where medical intervention may be needed.

Big Tech companies have also been involved, stepping in as cloud solution providers and applying their technological expertise in areas such as wearable devices, predictive modeling and virtual care. One widely talked-about achievement involved a deep learning algorithm that effectively solved the decades-old problem of predicting the shape a protein will fold into based on its amino acid sequence, which is crucial for drug discovery.

Where are we now in the integration of AI into the healthcare sector?

Despite all previous innovation, we are still in the early innings. While the promise of AI/ML in healthcare has been there for decades, we believe its role came into the spotlight during the Covid-19 pandemic response. AI helped companies develop Covid-19 mRNA vaccines and therapeutics at unprecedented speeds. Further, the Covid-19 pandemic underscored the need for digital solutions in healthcare to improve patient access and outcomes, and represented a key inflection point for telehealth and remote monitoring.

We believe that these successes further drove enthusiasm for the space as they showed a clear benefit of incorporating AI/ML and other technologies to improve patient outcomes at a much faster rate than would be expected with traditional methods.

What are some of the more promising AI-driven applications that could be coming to healthcare in the near future?

In our newest Byte-ology report, we outlined the technologies that could be transformative in healthcare, which include deep learning, cloud computing, big data analytics and blockchain. We also provided use cases across drug development, clinical trials, healthcare analytics, tools and diagnostics, and personalized care.

Here's one example: in drug development, AI/ML can be used to identify novel targets, design drugs with favorable properties and predict drug interactions to minimize the need for the costly traditional methodology of wet-lab, trial-and-error development.

Are there areas within health care that are more likely than others to benefit from AI?

Use cases for AI/ML can be found in virtually any segment of healthcare; the difference is how much or how long it has been used in a given sector, how validated the use case is and how difficult new technological advancements would be to implement within the healthcare system. For example, there is a history of using AI tools for radiology and pathology, whereas many believe more hard evidence is needed to understand AI/ML's benefit in areas such as designing drugs, predicting patients most likely to respond to certain drugs and digitizing labs.

Even in sectors where its adoption is in the early stages, we believe that AI/ML's potential advantages will not be ignored, but rather closely studied and increasingly implemented over time. Uptake would greatly benefit from regulatory support, standardized benchmarks to evaluate performance, public forums to improve collaboration and transparency and, importantly, proof-of-concept via a demonstrated benefit to patients and healthcare professionals, which we have started to see emerge.

What are the barriers or hurdles for AI in healthcare?

There are cultural obstacles, such as the healthcare industry relying on patents and exclusivity. That raises questions about how IP can be protected without slowing progress, or how information can be shared as it is in software engineering research that benefits from open-source data.

The hesitancy around AI/ML may further be exacerbated by the need for better surveillance systems to protect patients from hacking or breach events, the lack of continuing education for healthcare professionals on the benefits of these technologies and the concern that AI/ML models may be susceptible to bias as a result of historical underrepresentation embedded in training data.

Finally, some stakeholders may be taking a wait-and-see approach, remaining on the sidelines until firmer evidence of benefits being achieved emerges before investing in the resources necessary to incorporate these technologies.

Are there specific uses or benefits of generative AI in particular to healthcare?

Generative AI, including ChatGPT, presents myriad opportunities in healthcare, such as synthetic data generation to aid in drug development and diagnostics where data collection would otherwise be expensive or scarce. Some examples here include the development of a model to produce synthetic abnormal brain MRIs to train diagnostic ML models, and the use of zero-shot generative AI to produce novel antibody designs that are unlike those found in existing databases.

Generative AI also can help in designs for novel drugs, repurposing of existing drugs to new indications and analyzing patient-centric factors such as genetics and lifestyle to personalize treatment plans.

ChatGPT specifically could be used to perform administrative tasks such as scheduling appointments and drafting insurance approvals to free up time for physicians, aid healthcare professionals by conveniently summarizing scientific literature, as well as improve patient engagement and education by answering patient questions in a conversational manner. It has also been suggested that ChatGPT could theoretically aid in clinical decision making, such as diagnostics, although it will likely take time for ChatGPT to build enough trustworthiness and validation for this application given the risk of hallucination, when the model outputs false content that may look plausible.

What's the landscape for VC investment in healthcare AI and how does GS assess these companies?

VC funding continues to support and foster innovation in both early- and late-stage private biotech companies. In 2022, VC funding for AI- and ML-powered healthcare companies remained elevated, even as it declined amid the market downturn and associated slowdown in VC funding. So far in 2023, amid recession risk and other headwinds, VC deployment in healthcare AI, as elsewhere, has slowed.

Because of AI/ML's potential advantages in efficiency and effectiveness, how each company utilizes the armamentarium of available and rapidly expanding technologies is an important part of competitive differentiation. We take numerous factors into account when gauging competitive differentiation, such as the quality of the management team, the ultimate goal of the platform, the timeframe in which investors will understand whether this goal has been achieved and how the platform merges the available AI/ML toolkit with proprietary technologies to defend against emerging players.

This article is being provided for educational purposes only. The information contained in this article does not constitute a recommendation from any Goldman Sachs entity to the recipient, and Goldman Sachs is not providing any financial, economic, legal, investment, accounting, or tax advice through this article or to its recipient. Neither Goldman Sachs nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this article and any liability therefore (including in respect of direct, indirect, or consequential loss or damage) is expressly disclaimed.

Go here to see the original:

How Artificial Intelligence is Accelerating Innovation in Healthcare - Goldman Sachs


Meet the Woman Working to Remove Bias in Artificial Intelligence – Shine My Crown

Posted: at 2:54 pm


Dr. Nika White, the author of Inclusion Uncomplicated: A Transformative Guide to Simplify DEI, is president and CEO of Nika White Consulting. Dr. White is an award-winning management and leadership consultant, keynote speaker, published author, and executive practitioner for DEI efforts in the areas of business, government, non-profit and education. Her work helping organizations break barriers and integrate DEI into their business frameworks led to her being recognized by Forbes as a Top 10 Diversity and Inclusion Trailblazer. The focus of Dr. White's consulting work is to create professional spaces where people can collaborate through a lens of compassion, empathy, and understanding.

Shine My Crown spoke with Dr. White to discuss the growing ChatGPT trend, which has been proven to be biased, and how companies can address the situation by incorporating diversity and inclusion within their organizations.

Talk to us about the work you do and how you are using your skillset to change the playing field for organizations in dire need of redesigning their DEI framework.

We ensure impact over activity. We co-create solutions with our clients. They have the institutional knowledge and we have the DEI expertise. In this sense, we become an extension of our clients' teams. The collaboration enriches the final product and output. We leverage evidence-based data to inform the work.

Data shows that systems like ChatGPT have sometimes proven to produce outputs that are racist, sexist, and factually incorrect. How do engineers who train these artificial intelligence applications work to rectify the unmethodical data it pulls from the internet?

Engineers should begin with understanding where they are in their DEI journey and their own biases. Once you understand your own biases, you can start to address them in yourself and your work. Engineers should be trained to understand racist language and systematic racism in data. This will give them the ability to decipher and sift through coded racist data and create programs around it.

In a recent quote, you mentioned that, left unchecked, AI will regurgitate racist and sexist data and facts about POC, women and the LGBTQ+ community that historically were thought to be true in a culture that perpetuated systematic racism, sexism, and homophobia. Can you expound more upon this statement and what the resolution will be to fix this?

If AI only accounts for data and not historical context, AI could assume that BIPOC don't own homes because they don't want to or lack the ability to. Historical context tells us that redlining and continuous, systematic oppression have actually hindered BIPOC from purchasing homes. Engineers must bring that historical context to AI. That's why having a diverse and well-trained engineering staff is important.

What do you believe is the real reason behind biased forms of technology like ChatGPT? Does it stem from human interference, or is science alone left to blame?

Human interference and science are to blame. Science is programmed to decipher information the best possible way it knows how. Science's major flaw is agility. AI's capacity to evolve and change is stunted unless the engineers create checks and balances. However, AI can only be as good as the engineers programming it. Engineers must understand their biases to stop them from being programmed into AI.

What are some current efforts you are working on to create spaces where people feel included in their personal and professional environments?

We recently launched a new learning experience, Unravel the Knot. My approach to DEI is that of an integrationist, positing that the work of DEI is for all and can be organically incorporated into an individual's personal and professional spaces. I was moved by people I'd interacted with who expressed wanting to be a part of cultivating cultures of belonging. Still, they found such an endeavor complicated, polarizing, and defeating. These sentiments are a barrier for many who desire to engage deeper.

Every day, we hear that the work of Diversity, Equity, and Inclusion (DEI) is complicated, whether from businesses, employees, society in general, or the practitioners themselves. And the truth is, yes, DEI can be complicated because the issues of DEI are complex. But they don't have to be.

Without a collective shift in how we relate to one another as humans, without the willingness to recognize our personal biases or withhold assumptions and sit with the discomfort, systems of oppression will remain locked in place. But, if we center on ways to uncomplicate DEI, the entry point for more people to engage effectively increases.

This program helps to change how complex many people perceive DEI to be so that the entry point for more people to engage in the work of belongingness increases significantly. This learning experience gives cohort members space to go deeper into foundational practical tips and tools, helping them actualize DEI personally and within their organization. Participants will craft their DEI story, learn more about their identity, assess their cultural patterns, learn about emotional intelligence and Lived Experience Intelligence, practice mindfulness, unmask themselves, interrogate their biases, and understand more about inclusive communication.

Read more from the original source:

Meet the Woman Working to Remove Bias in Artificial Intelligence - Shine My Crown


Opinion: Artificial intelligence is the future of hiring – The San Diego Union-Tribune

Posted: at 2:54 pm

Cooper is a professor of law at California Western School of Law and a research fellow at Singapore University of Social Sciences. He lives in San Diego. Kompella is CEO of industry analyst firm RPA2AI Research and visiting professor for artificial intelligence at the BITS School of Management, Mumbai, and lives in Bangalore, India.

Hiring is the lifeblood of the economy. In 2022, there were 77 million hires in the United States, according to the U.S. Department of Labor. Artificial intelligence is expected to make this hiring process more efficient and more equitable. Despite such lofty goals, there are valid concerns that using AI can lead to discrimination. Meanwhile, the use of AI in the hiring process is widespread and growing by leaps and bounds.

A Society of Human Resources Management survey last year showed that about 80 percent of employers use AI for hiring. And there is good reason for the assist: Hiring is a high-stakes decision for the individual involved and the businesses looking to employ talent. It is no secret, though, that the hiring process can be inefficient and subject to human biases.

AI offers many potential benefits. Consider that human resources teams spend only 7 seconds skimming a resume, a document which is itself a one-dimensional portrait of a candidate. Recruiters instead end up spending more of their time on routine tasks like scheduling interviews. By using AI to automate such routine tasks, human resources teams can spend more quality time on assessing candidates. AI tools can also use a wider range of data points about candidates that can result in a more holistic assessment and lead to a better match. Research shows that the overly masculine language used in job descriptions deters women from applying. AI can be used to create job descriptions and ads that are more inclusive.

But using AI for hiring decisions can also lead to discrimination. A majority of recruiters in the 2022 Society of Human Resources Management survey identified flaws in their AI systems. For example, they excluded qualified applicants or had a lack of transparency around the way in which the algorithms work. There is also disparate impact (also known as unintentional discrimination) to consider. According to University of Southern California research in 2021, job advertisements are not shown to women despite them being qualified for the roles being advertised. Also, advertisements for high-paying jobs are often hidden from women. Many states suffer a gender pay gap. When the advertisements themselves are invisible, the pay equity gap is likely not going to solve itself, even with the use of artificial intelligence.

Discrimination, even in light of new technologies, is still discrimination. New York City has fashioned a response by enacting Local Law 144, scheduled to come into effect on July 15. This law requires employers to provide notice to applicants when AI is being used to assess their candidacy. AI systems are subject to annual independent third-party audits and audit results must be displayed publicly. Independent audits of such high-stakes AI usage is a welcome move by New York City.

California, long considered a technology bellwether, has been off to a slow start. The California Workplace Technology Accountability Act, a bill that focused on employee data privacy, is now dead. On the anvil are updates to Chapter 5 (Discrimination in Employment) of the California Fair Employment and Housing Act. Initiated a year ago by the Fair Employment and Housing Council (now called the Civil Rights Department), these remain a work in progress. These are not new regulations per se but an update of existing anti-discrimination provisions. The proposed draft is open for public comments but there is no implementation timeline yet. The guidance for compliance, the veritable dos and don'ts, including penalties for violations, are all awaited. There is also a recently introduced bill in the California Legislature that seeks to regulate the use of AI in business, including education, health care, housing and utilities, in addition to employment.

The issue is gaining attention globally. Among state laws on AI in hiring is one in Illinois that regulates AI tools used for video interviews. At the federal level, the Equal Employment Opportunity Commission has updated guidance on employer responsibilities. And internationally, the European Union's upcoming Artificial Intelligence Act classifies such AI as high-risk and prescribes stringent usage rules.

Adoption of AI can help counterbalance human biases and reduce discrimination in hiring. But the AI tools used must be transparent, explainable and fair. It is not easy to devise regulations for emerging technologies, particularly for a fast-moving one like artificial intelligence. Regulations need to prevent harm but not stifle innovation. Clear regulation coupled with education, guidance and practical pathways to compliance strikes that balance.

Link:

Opinion: Artificial intelligence is the future of hiring - The San Diego Union-Tribune


Director Chopra's Prepared Remarks on the Interagency … – Consumer Financial Protection Bureau

Posted: at 2:53 pm

In recent years, we have seen a rapid acceleration of automated decision-making across our daily lives. Throughout the digital world and throughout sectors of the economy, so-called artificial intelligence is automating activities in ways previously thought to be unimaginable.

Generative AI, which can produce voices, images, and videos designed to simulate real-life human interactions, is raising the question of whether we are ready to deal with the wide range of potential harms, from consumer fraud to privacy to fair competition.

Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation's civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.

The Interagency Statement we are releasing today seeks to take an important step forward to affirm existing law and rein in unlawful discriminatory practices perpetrated by those who deploy these technologies.1

The statement highlights the all-of-government approach to enforce existing laws and work collaboratively on AI risks.

Unchecked AI poses threats to fairness and to our civil rights in ways that are already being felt.

Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including whether we get a loan or what advertisements we see.

While machines crunching numbers might seem capable of taking human bias out of the equation, that's not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80 percent more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds. The response of mortgage companies has been that researchers do not have all the data that feeds into their algorithms or full knowledge of the algorithms. But their defense illuminates the problem: artificial intelligence often feels like black boxes behind brick walls.2

When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias.

That's why the CFPB and other agencies are prioritizing and confronting digital redlining, which is redlining caused through bias present in lending or home valuation algorithms and other technology marketed as artificial intelligence. They are disguised through so-called neutral algorithms, but they are built like any other AI system by scraping data that may reinforce the biases that have long existed.

We are working hard to reduce bias and discrimination when it comes to home valuations, including algorithmic appraisals. We will be proposing rules to make sure artificial intelligence and automated valuation models have basic safeguards when it comes to discrimination.

We are also scrutinizing algorithmic advertising, which, once again, is often marketed as AI advertising. We published guidance to affirm how lenders and other financial providers need to take responsibility for certain advertising practices. Specifically, advertising and marketing that uses sophisticated analytic techniques, depending on how these practices are designed and implemented, could subject firms to legal liability.

We've also taken action to protect the public from black box credit models, in some cases so complex that the financial firms that rely on them can't even explain the results. Companies are required to tell you why you were denied credit, and using a complex algorithm is not a defense against providing specific and accurate explanations.

Developing methods to improve home valuation, lending, and marketing is not inherently bad. But when done in irresponsible ways, such as creating black box models or not carefully studying the data inputs for bias, these products and services pose real threats to consumers' civil rights. It also threatens law-abiding nascent firms and entrepreneurs trying to compete with those who violate the law.

I am pleased that the CFPB will continue to contribute to the all-of-government mission to ensure that the collective laws we enforce are followed, regardless of the technology used.

Thank you.

Read the original:

Director Chopra's Prepared Remarks on the Interagency ... - Consumer Financial Protection Bureau


2023 – Artificial Intelligence and Higher Ed – The Seattle U Newsroom – News, stories and more

Posted: at 2:53 pm

Seattle University President Eduardo Peñalver and College of Science and Engineering Dean Amit Shukla, PhD, penned an opinion piece for the Puget Sound Business Journal weighing the impacts and implications of generative AI in higher education.

Here is the article as it appears in the publication:

Opinion: Generative AI is a powerful tool that requires a human touch

Generative artificial intelligence (AI) is at once intriguing, exciting and, yes, a little disturbing.

For those of us in higher education, these technologies have apparent potential to disrupt traditional teaching and learning models. There is well-founded concern about generative AI's implications for academic integrity, along with a recognition that these new technologies can enhance student learning and experience.

We are always looking at ways to help students develop their skills in critical thinking, problem solving, communication, leadership and teamwork so they can continue to shape the world. Far from rendering these sorts of capabilities superfluous, emergent AI technologies only underscore their importance.

The world faces numerous grand challenges around sustainability, public health, access to clean water, energy, food, security and many others. Successfully confronting these challenges requires an education system deeply rooted in the recognition that we all have a responsibility to make the world a better place. We need to educate future leaders who approach these challenges with morality and ethics at the heart of any solutions.

As a university in the Jesuit tradition, we believe that effective learning is always situated in a specific context rooted in previous experience and dependent upon reflection about those experiences. Education becomes most meaningful when it is put into action and reinforced by further reflection. Repeating this cycle over and over again is how transformative learning happens. It is remarkable that some of these same traits of the Jesuit educational model are shared by the reinforcement learning methods used for artificial intelligence.

Early reviews of ChatGPT, an artificial intelligence chatbot, were giddy about its astonishing capabilities. Users regaled us with computer-generated stories about lost socks written in the style of the Declaration of Independence or about removing a peanut butter and jelly sandwich from a VCR in the form of a Biblical verse.

The power of this technology is genuinely impressive and its ability to mimic human language across a broad range of domains is unprecedented. But the technology's basic architecture is untethered to actual meaning. Additionally, these models can be biased by their training data, and they can be sensitive to paraphrasing as well as to the need to guess user intent. The power of reinforcement learning is therefore also the source of its greatest weakness.

Although AI models are constantly taking in new information, that information takes the form of new symbolic data without any context. They have no experience (or even conception) of reality. Their sole reality is a world of perceived regularities among symbolic representations and, as a result, they have no way to conceive of concepts like truth and accuracy.

Recent reports have unearthed troubling tendencies. In an essay for Scientific American, NYU psychologist Gary Marcus observed that ChatGPT was prone to hallucinations, made-up facts that ChatGPT would nonetheless assert with great confidence.

One law professor asked ChatGPT to summarize some leading decisions of the National Labor Relations Board and it conjured fictitious cases out of thin air.

In another case, ChatGPT asserted that former Vice President Walter Mondale challenged his own president, Jimmy Carter, for the Democratic nomination in the 1980 election. (For those not alive in 1980, this did not happen, and such assertions will not help students learn history or U.S. electoral politics).

Closer to home, in an essay submitted for one of our classes at Seattle University, ChatGPT described a 2005 Supreme Court case as the cause of another case that had occurred several decades earlier.

On the other hand, many educators are effectively using these tools to supplement and enhance student learning and mastery of concepts from coding to rhetoric.

Generative AI is no replacement for human intelligence. The recent technology is based on a system of machine learning known as Reinforcement Learning from Human Feedback (RLHF). Machine learning does not yet generate what we might call understanding or comprehension.

These RLHF models are based on massive quantities of training data, a reward model for reinforcement, and an optimization algorithm for fine-tuning the language model. Their use of language emerges from deep statistical analysis of the massive data sets they use to predict the most probable sequence of words in response to the prompt they receive.
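To make the phrase "predict the most probable sequence of words" concrete, here is a minimal, illustrative sketch: a toy bigram model that always picks the statistically most frequent next word. It is our own simplification, not the authors' code, and it omits the reward model and fine-tuning that RLHF layers on top of the base language model.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the massive data sets the essay describes.
corpus = (
    "the model predicts the next word "
    "the model learns the most probable word"
).split()

# Count word-pair (bigram) frequencies across the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely continuation of `word`."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_probable_next("the"))  # 'model' -- the most frequent follower of 'the'
```

At this scale the limitation the authors describe is obvious: the model knows which words tend to follow which, but nothing about what any of them mean.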

Clearly, there are limits to this generative technology and we must be mindful of that.

What ChatGPT and other AI engines based on this technology require is the guidance of educated human beings rooted in the reality and constraints of the world. In an increasingly complex and technologically driven world, the challenges we face are inherently multidisciplinary. They require us to incorporate context and perspectives, learn from our experiences, take ethical actions and evaluate and reflect with empathy to create a more just and humane world. They require leaders to be innovative, inclusive and committed to the truth.

As we continue to build and improve these tools, we must recognize that they will continue to reflect the limitations of the human beings who have created them, as well as the limitations intrinsic to their architecture. Even while they reduce the challenges of certain kinds of work, they generate the need for new kinds of work and reflection.

As these models proliferate and continue to grow in capability, it will become the task of institutions like ours to train future leaders who can understand and manage them by developing, implementing and managing policy for responsible use that is grounded in ethics and morality and in service of humanity.

Artificial intelligence tools are designed by human beings and use learning models trained by the data we provide. It is therefore our responsibility to ensure that AI's use of those inputs contributes to the betterment of the world. It is our responsibility to question the results AI generates and, applying our ethically informed judgment, to correct its biases and inaccuracies. Doing this will continue to require substantial human input, attention and care.

The future demands leaders who are innovative and creative, who can understand and effectively wield the new tools that generative AI is making available. Rather than seeking to suppress or hide from these technologies, higher education needs to respond in a collaborative way to these emerging technologies so we can help our students to use them to augment their own capabilities and enhance their learning.

Finally, we feel it necessary to make clear that this commentary was not written by artificial intelligence. Instead, it was composed by two higher education leaders who are thinking about this subject a lot these days.

We are confident that, no matter what the future of these technologies entails, there will always be a need for thoughtful reflections produced by real people. If higher education responds to emergent technologies in a wise and thoughtful way, it can and will continue to be at the forefront of forming such human beings.

Save the Date: Seattle University will host an Ethics and Technology conference in late June, bringing together great minds in science, tech, ethics and religion, including academic, business and nonprofit leaders.

Read the original here:

2023 - Artificial Intelligence and Higher Ed - The Seattle U Newsroom - News, stories and more


Current Applications of Artificial Intelligence in Oncology – Targeted Oncology

Posted: at 2:53 pm


The evolution of artificial intelligence (AI) is reshaping the field of oncology by providing new devices to detect cancer, individualize treatments, manage patients, and more.

Given the large number of patients diagnosed with cancer and amount of data produced during cancer treatment, interest in the application of AI to improve oncologic care is expanding and holds potential.

"An aspect of care delivery where AI is exciting and holds so much promise is democratizing knowledge and access to knowledge. Generating more data, bringing together the patient data with our knowledge and research, and developing these advanced clinical decision support systems that use AI are going to be ways in which we can make sure clinicians can provide the best care for each individual patient," Tufia C. Haddad, MD, told Targeted Oncology™.

While cancer treatment options have only improved over past decades, there is an unmet medical need to make these cancer treatments more affordable and personalized for each patient with cancer.1

As we continue to learn about and better understand the use of AI in oncology, experts can improve outcomes, develop approaches to solve problems in the space, and advance the development of treatments that are made available to patients.

AI is a branch of computer science that works with the simulation of intelligent behavior in computers. These computers follow algorithms, which are established by humans or learned by the computer, to support decisions and complete certain tasks. Under the AI umbrella lie important subfields.

Machine learning is the process in which a computer can improve its own performance by consistently incorporating newly generated data into an existing iterative model. According to the FDA, one of the potential benefits of machine learning is its ability to create new insights from the vast amount of data generated during the delivery of health care every day.2

"Sometimes, we can use machine learning techniques in a way where we are training the computer to, for example, discern benign pathology from malignant pathology, and so we train the computer with annotated datasets, where we are showing the different images of benign vs malignancy. Ultimately, the computer will bring forward an algorithm, and we then take separate data sets that are no longer labeled as benign or malignant. Then we continue to train that algorithm and fine-tune the algorithm," said Haddad, a medical oncologist and associate professor of oncology at the Rochester, Minnesota, campus of the Mayo Clinic.

Deep learning is a subset of machine learning in which mathematical algorithms are arranged in multi-layered computational units that resemble human cognition. These include neural networks of different architecture types, including recurrent neural networks, convolutional neural networks, and long short-term memory networks.
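As a deliberately toy illustration of the supervised workflow Haddad describes (train on annotated examples, then classify unlabeled ones), here is a hedged sketch in PyTorch. The random tensors stand in for expert-annotated benign/malignant images; nothing here reflects any actual clinical system.

```python
import torch
import torch.nn as nn

# Stand-in data: 64 grayscale 32x32 "images" labeled benign (0) or malignant (1).
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,))

# A tiny convolutional network: conv features -> pooled -> linear classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop: show labeled examples and nudge the weights toward correct labels.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# "Deployment": classify a new, unlabeled image.
prediction = model(torch.randn(1, 1, 32, 32)).argmax(dim=1)
print("malignant" if prediction.item() == 1 else "benign")
```

Real diagnostic models differ enormously in scale and rigor, but the loop is the same one Haddad outlines: labeled data in, an algorithm out, then continued tuning on fresh data.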


"Many of the applications integrated into commercial systems are proprietary, so it is hard to know what specific AI methods underlie their system. For some applications, even simple rules-based systems still hold value. However, the recent surge in AI advances is primarily driven by more advanced machine learning methods, especially neural network-based deep learning, in which the AI teaches itself to learn patterns from complex data," Danielle S. Bitterman, MD, told Targeted Oncology™. "For many applications, deep learning methods have better performance, but come at a trade-off of being black boxes, meaning it is difficult for humans to understand how they arrive at their decision. This creates new challenges for safety, trust, and reliability."

Utilizing AI is important because the capacity of the human brain to process information is limited, creating an urgent need for alternative strategies to process big data. With machine learning and AI, clinicians can take advantage of the increased availability of data and of expanded storage and computing power.

As of October 5, 2022, the FDA had approved 521 medical devices which utilize AI and/or machine learning, with the majority of devices in the radiology space.2

"Primarily, where it is being more robustly developed and, in some cases, now, at the point of receiving FDA approval and starting to be applied and utilized in the hospitals and clinics, is in the cancer diagnostic space. This includes algorithms to help improve the efficiency and accuracy of, for example, interpreting mammograms. Radiology services, and to some extent, pathology, are where some of these machine learning and deep learning algorithms and AI models are being used," said Haddad.

In radiology, there are many applications of AI, including deep learning algorithms to analyze imaging data that is obtained during routine cancer care. According to Haddad, some of this can include evaluating disease classification, detection, segmentation, characterization, and monitoring a patient with cancer.

According to radiation oncologist Matthew A. Manning, MD, AI is already a backbone of some clinical decision support tools.

"The use of AI in oncology is rapidly increasing, and it has the potential to revolutionize cancer diagnosis, treatment, and research. It helps with driving automation. In radiation oncology, there are different medical record platforms necessary for the practice that are often separate from the hospital medical record. Creating these interfaces that allow reductions in the redundancy of work for both clinicians and administrative staff is important. Tools using AI and business intelligence are accelerating our efforts in radiation oncology," Manning, former chief of Oncology at Cone Health, told Targeted Oncology™ in an interview.

By combining AI and human power, mammography screening has been improved for patients with breast cancer. Additionally, deep learning models were trained to classify and detect disease subtypes based on images and genetic data.

To find lung nodules or brain metastases on MRI readouts, AI uses bounding boxes to locate a lesion or object of interest and classify them. Detection using AI supports physicians when they read medical images.

Segmentation involves recognizing these lesions and assessing their volume and size to classify individual pixels by organ or lesion. An example is brain gliomas, which require quantitative metrics for their management, risk stratification and prognostication.

Deep learning methods have been applied to medical images to determine a large number of features that are undetectable by humans.3 An example of using AI to characterize tumors comes from the study of radiomics, which combines disease features with clinicogenomic information. This method can inform models that successfully predict treatment response and/or adverse effects from cancer treatments.

Radiomics can be applied to a variety of cancer types, including liver, brain, and lung tumors. According to research in Future Science OA1, deep learning using radiomic features from brain MRI also can help differentiate brain gliomas from brain metastasis with similar performance to trained neuroradiologists.

Utilizing AI can dramatically change the ways patients with cancer are monitored. It can detect a multitude of discriminative features in imaging that are unreadable by humans. One process that is normally performed by radiologists and that plays a major role in determining patient outcomes is measuring how tumors react to cancer treatment.4 However, the process is known to be labor-intensive, subjective, and prone to inconsistency.

To try to alleviate this frequent problem, researchers developed a deep learning-based method that is able to automatically annotate tumors in patients with cancer. In a small study, researchers from Johns Hopkins Kimmel Comprehensive Cancer Center and its Bloomberg~Kimmel Institute for Cancer Immunotherapy successfully trained a machine learning algorithm to predict which patients with melanoma would respond to treatment and which would not. This open-source program, DeepTCR, was valuable as a predictive tool and helped researchers understand the biological mechanisms and responses to immunotherapy.

This program can also help clinicians monitor patients by stratifying patient outcomes, identifying predictive features, and helping them manage patients with the best treatments.

Proper screening for early diagnosis and treatment is a big factor when combating cancer. In the current space, AI makes obtaining results easier and more convenient.

"One of the important things to think about with AI, or the capabilities of AI in oncology, is the ability to see what the human eye and the human mind cannot see or interpret today. It is gathering all these different data points and developing or recognizing patterns in the data to help with interpretation. This can augment some of the accuracy for cancer diagnostics," added Haddad.

AI may also provide faster, more accurate results, especially in breast cancer screening. While the incorporation of AI into screening methods is a relatively new and emerging field, it is promising in the early detection of breast cancer, thus resulting in a better prognosis of the condition. For patients with breast cancer, mammography is the most popular method of screening.

Another example of AI in the current treatment landscape for patients with colon cancer is the colonoscopy. Colon cancer screening utilizes a camera to give the gastroenterologist the ability to see inside the colon and bowel. By taking those images and applying machine learning and deep learning neural network techniques, algorithms can be developed not only to better detect polyps and precancerous lesions, but also to distinguish early-stage from advanced cancers.

In addition, deep learning models can also help clinicians predict the future development of cancer and some AI applications are already being implemented in clinical practice. With further development, as well as refinement of the already created devices, AI will be further applied.

"In terms of improving cancer screening, AI has been applied in radiology to analyze and identify tumors on scans. In the current state, AI is making its way into computer-assisted detection on diagnostic films. Looking at a chest CT, trying to find a small nodule, we see that AI is very powerful at finding spots that maybe the human eye may miss. In terms of radiation oncology, we anticipate AI will be very useful ultimately in the setting of clinical decision support," said Manning.

For oncologists, the emergence of the COVID-19 pandemic and time spent working on clinical documentation has only heightened the feeling of burnout. However, Haddad notes that a potential solution to help mitigate feelings of burnout is the development and integration of precision technologies, including AI, as they can help reduce the large amount of workload and increase productivity.

"There are challenges with workforce shortages as a consequence of the COVID-19 pandemic, with a lot of burnout at unprecedented rates. Thinking about how artificial intelligence can help make [clinicians'] jobs easier and make them more efficient. There are smart hospitals, smart clinic rooms, where just from the ingestion of voice, conversations can be translated to the physician and patient into clinical documentation to help reduce the time that clinicians need to be spending doing the tedious work that we know contributes to burnout, including doing the clinical documentation, prior authorizations, order sets, etc.," said Haddad.

Numerous studies have been published regarding the potential of machine learning and AI for the prognostication of cancer. Results from these trials have suggested that the performance and productivity of oncologists can be improved with the use of AI.5

An example is with the prediction of recurrences and overall survival. Deep learning can enhance precision medicine and improve clinical decisions, and with this, oncologists may feel emotional satisfaction, reduced depersonalization, and increased professional efficacy. This leaves clinicians with the potential of increased job satisfaction and a reduced feeling of burnout.

Research also has highlighted that the intense workload contributes to occupational stress. This in turn has a negative effect on the quality of care that is offered to patients.

Additionally, it has been reported that administrative tasks, such as collecting clinical, billing, or insurance information, contribute to the workload faced by clinicians, and this leads to a significantly limited time for direct face-to-face interaction between patients and their physicians. Thus, AI has helped significantly reduce this administrative burden.

Overall, if clinicians can do less of the tedious clerical work and spend more time doing the things they were trained to do, like having time with the patient, their overall outlook on their job will be more positive.

"AI will help to see that joy restored and to have a better experience for our patient. I believe that AI is going to transform most aspects of medicine over the coming years. Cancer care is extremely complex and generates huge amounts of varied digital data which can be tapped into by computational methods. Lower-level tasks, such as scheduling and triaging patient messages, will become increasingly automated. I think we will increasingly see clinical decision-support applications providing diagnostic and treatment recommendations to physicians. AI may also be able to generate novel insights that change our overall approach to managing cancers," said Haddad.

While there have been increasing amounts of updates and developments for AI in the oncology space, according to Bitterman, a large gap remains between AI research and what is already being used.

To bridge this gap, Bitterman notes that there must be further understanding by both clinicians and patients regarding how to properly interact with AI applications, and best optimize interactions for safety, reliability, and trust.

Digital data is still very siloed within institutions, and so regulatory changes are going to be needed before we can realize the full value of AI. We also need better standards and methods to assess bias and generalizability of AI systems to make sure that advances in AI don't leave minority populations behind and worsen health inequities.

Additionally, there is a concern that patients' voices are being left out of the AI conversation. According to Bitterman, AI applications are developed using patients' data and, as a result, will likely transform their care journey. To further improve the use of AI for patients with cancer, it is key to get input from patients.

With further research, it should be possible to overcome the current challenges being faced with AI to continue to improve its use, make AI more popular, and improve the overall quality-of-life for patients with cancer.

"We need to engage patients at every step of the AI development/implementation lifecycle, and make sure that we are developing applications that are patient-centered and prioritize trust, safety, and patients' lived experiences," concluded Bitterman.

View original post here:

Current Applications of Artificial Intelligence in Oncology - Targeted Oncology


The Case for Realistic Action to Regulate Artificial Intelligence – The Information

Posted: at 2:53 pm

The overnight success of ChatGPT and GPT-4 marks a clear turning point for artificial intelligence. It also marks an inflection point for public discourse about the risks and benefits of AI for our society. Practitioners, policymakers and pundits alike have voiced loud concerns, ranging from fear of a potential flood of AI-generated disinformation to the existential risks of superhuman intelligence whose goals may not align with humanity's best interests.

The speed of AI advances is now measured in days and weeks, while government regulation generally takes years or even decades; to wit, we still don't have a federal privacy law after more than 20 years of public discussion. Record levels of lobbying by the tech industry have lined the pockets of Washington influence peddlers and ground the gears of technology regulation to a halt, even though distrust of big tech is as bipartisan an issue as they come.

Read the original post:

The Case for Realistic Action to Regulate Artificial Intelligence - The Information


WEIRD AI: Understanding what nations include in their artificial intelligence plans – Brookings Institution

Posted: at 2:53 pm

In 2021 and 2022, the authors published a series of articles on how different countries are implementing their national artificial intelligence (AI) strategies. In these articles, we examined how different countries view AI and looked at their plans for evidence to support their goals. In the later series of papers, we examined who was winning and who was losing in the race to national AI governance, as well as the importance of people skills versus technology skills, and concluded with what the U.S. needs to do to become competitive in this domain.

Since these publications, several key developments have occurred in national AI governance and international collaborations. First, one of our key recommendations was that the U.S. and India create a partnership to work together on a joint national AI initiative. Our argument was as follows: India produces far more STEM graduates than the U.S., and the U.S. invests far more in technology infrastructure than India does. A U.S.-India partnership eclipses China in both dimensions, and a successful partnership could allow the U.S. to quickly leapfrog China in all meaningful aspects of AI. In early 2023, U.S. President Biden announced a formal partnership with India to do exactly what we recommended to counter the growing threat of China and its AI supremacy.

Second, as we observed in our prior paper, the U.S. federal government has invested in AI, but largely in a decentralized approach. We warned that this approach, while it may ultimately develop the best AI solution, requires a long ramp up and hence may not achieve all its priorities.

Finally, we warned that China is already in the lead on the achievement of its national AI goals and predicted that it would continue to surpass the U.S. and other countries. News has now come that China is planning on doubling its investment in AI by 2026, and that the majority of the investment will be in new hardware solutions. The U.S. State Department also is now reporting that China leads the U.S. in 37 out of 44 key areas of AI. In short, China has expanded its lead in most AI areas, while the U.S. is falling further and further behind.

Considering these developments, our current blog shifts focus away from national AI plan achievement to a more micro view of understanding the elements of the particular plans of the countries included in our research, and what drove their strategies. At a macro level, we also seek to understand if groups of like-minded countries, which we have grouped by cultural orientation, are taking the same or different approaches to AI policies. This builds upon our previous posts by seeking and identifying consistent themes across national AI plans from the perspective of underlying national characteristics.

In this blog, the countries that are part of our study include 34 nations that have produced public AI policies, as identified in our previous blog posts: Australia, Austria, Belgium, Canada, China, Czechia, Denmark, Estonia, Finland, France, Germany, India, Italy, Japan, South Korea, Lithuania, Luxembourg, Malta, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Qatar, Russia, Serbia, Singapore, Spain, Sweden, UAE, UK, Uruguay, and USA.

For each, we examine six key elements in these national AI plans: data management, algorithmic management, AI governance, research and development (R&D) capacity development, education capacity development, and public service reform capacity development. These elements provide insight into how individual countries approach AI deployment. In doing so, we examine commonalities between culturally similar nations which can lead to both higher and lower levels of investment in each area.

We do this by exploring similarities and differences through what is commonly referred to as the WEIRD framework, a typology of countries based on how Western, Educated, Industrialized, Rich, and Democratic they are. In 2010, the concept of WEIRD-ness originated with Joseph Henrich, a professor of human evolutionary biology at Harvard University. The framework describes a set of countries with a particular psychology, motivation, and behavior that can be differentiated from other countries. WEIRD is, therefore, one framework by which countries can be grouped and differentiated to determine if there are commonalities in their approaches to various issues based on similar decision-making processes developed through common national assumptions and biases.

Below are our definitions of each element of national AI plans, followed by where they fall along the WEIRD continuum.

Data management refers to how the country envisages capturing and using the data derived from AI. For example, the Singapore plan defines data management as follows: "[a]s the nation's custodian of personal and administrative data, the Government holds a data resource that many companies find valuable. The Government can help drive cross-sectoral data sharing and innovation by curating, cleaning, and providing the private sector with access to Government datasets."

Algorithmic management addresses the country's awareness of algorithmic issues. For example, the German plan states: "[t]he Federal Government will assess how AI systems can be made transparent, predictable and verifiable so as to effectively prevent distortion, discrimination, manipulation and other forms of improper use, particularly when it comes to using algorithm-based prognosis and decision-making applications."

AI governance refers to the inclusivity, transparency and public trust in AI and the need for appropriate oversight. The language in the French plan asserts: "[i]n a world marked by inequality, artificial intelligence should not end up reinforcing the problems of exclusion and the concentration of wealth and resources. With regards to AI, a policy of inclusion should thus fulfill a dual objective: ensuring that the development of this technology does not contribute to an increase in social and economic inequality; and using AI to help genuinely reduce these problems."

Overall, capacity development is the process of acquiring, updating and reskilling human, organizational and policy resources to adapt to technological innovation. We examine three types of capacity development: R&D, education, and public service reform.

R&D capacity development focuses on government incentive programs for encouraging private sector investment in AI. For example, the Luxembourg plan states: "[t]he Ministry of the Economy has allocated approximately €62M in 2018 for AI-related projects through R&D grants, while granting a total of approximately €27M in 2017 for projects based on this type of technology. The Luxembourg National Research Fund (FNR), for example, has increasingly invested in research projects that cover big data and AI-related topics in fields ranging from Parkinson's disease to autonomous and intelligent systems (approximately €200M over the past five years)."

Education capacity development focuses on learning in AI at the post-secondary, vocational and secondary levels. For example, the Belgian plan states: "Overall, while growing, the AI offering in Belgium is limited and insufficiently visible. [W]hile university-college PXL is developing an AI bachelor programme, to date, no full AI Master or Bachelor programmes exist."

Public service reform capacity development focuses on applying AI to citizen-facing or supporting services. For example, the Finnish plan states: "Finland's strengths in piloting [AI projects] include a limited and harmonised market, neutrality, abundant technology resources and support for legislation. Promoting an experimentation culture in public administration has brought added agility to the sector's development activities."

In the next step of our analysis, we identify the level of each country in each element and then group countries by their WEIRD-ness. Western uses the World Population Review's definition of the Latin West, a group of countries sharing a common linguistic and cultural background centered on Western Europe and its post-colonial footprint; a country is classified simply as in or out of this group. Educated is based on mean years of schooling in the UN Human Development Index, where 12 years (high school graduate) is the dividing point between high and low education. Industrialized adopts the World Bank's measure of industry value added of GDP, where the median value of $3,500 USD per capita of value added separates high from low industrialization. Rich uses the Credit Suisse Global Wealth Databook's mean wealth per adult measure, where $125,000 USD is the median amongst countries. Democratic applies the Economist Intelligence Unit's Democracy Index, which differentiates between shades of democratic and authoritarian regimes; the midpoint of hybrid regimes (5.0 out of 10) is the dividing point between democratic and non-democratic.

For example, Australia, Austria, and Canada are considered Western, while China, India and Korea are not. Germany, the U.S., and Estonia are seen as Educated, while Mexico, Uruguay and Spain are not. Canada, Denmark, and Luxembourg are considered Industrialized, while Uruguay, India and Serbia are not. Australia, France, and Luxembourg are determined to be Rich, while China, Czechia and India are not. Finally, Sweden, the UK and Finland are found to be Democratic, while China, Qatar and Russia are not.
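To make these thresholds concrete, the classification logic can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions: the function name and the sample figures passed in at the bottom are placeholders, not the study's actual source data, and the output uses the notation convention described in the footnote (upper case for high, lower case for low).

# Classify a country along the five WEIRD dimensions using the
# thresholds described above. All sample figures are hypothetical.
THRESHOLDS = {
    "educated": 12.0,        # mean years of schooling (UN Human Development Index)
    "industrialized": 3500,  # industry value added, USD per capita (World Bank)
    "rich": 125_000,         # mean wealth per adult, USD (Credit Suisse)
    "democratic": 5.0,       # EIU Democracy Index midpoint of hybrid regimes
}

def weird_code(is_western, schooling, industry_va, wealth, democracy):
    # Build the notation string: upper case = high, lower case = low.
    flags = [
        ("W", is_western),
        ("E", schooling >= THRESHOLDS["educated"]),
        ("I", industry_va >= THRESHOLDS["industrialized"]),
        ("R", wealth >= THRESHOLDS["rich"]),
        ("D", democracy >= THRESHOLDS["democratic"]),
    ]
    return "".join(letter if high else letter.lower() for letter, high in flags)

print(weird_code(True, 13.4, 9_000, 280_000, 8.9))    # -> WEIRD
print(weird_code(False, 12.9, 12_000, 230_000, 8.0))  # -> wEIRD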

Figure 1 maps the 34 countries in our sample. Results ranged from the pure WEIRD countries, including many Western European nations and some close trading partners and allies such as the United States, Canada, Australia, and New Zealand, to countries rated low on most of the five dimensions.

Figure 1: Countries classified by WEIRD framework[1]

By comparing each grouping of countries with the presence or absence of our six plan elements (data management, algorithmic management, AI governance, and R&D, education, and public service reform capacity development), we can understand how each country views AI, both alone and within its particular grouping. For example, wEIRD Japan and Korea are high in all areas except Western; both invest heavily in R&D capacity development but not in education capacity development.

The methodology used for this blog was Qualitative Comparative Analysis (QCA), which seeks to identify causal "recipes" of conditions related to the occurrence of an outcome in a set of cases. In QCA, each case is viewed as a configuration of conditions (such as the five elements of WEIRD-ness) where each condition does not have a unique impact on the outcome (an element of AI strategy), but rather acts in combination with all other conditions. Application of QCA can yield several configurations for each outcome, identifying core conditions that are vital for the outcome and peripheral conditions that are less important. The analysis for each plan element is described below.
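The full QCA procedure involves Boolean minimization and consistency scoring, but the underlying configurational logic can be illustrated with a toy tally in Python: group cases by their configuration of conditions and check how consistently an outcome appears within each group. The countries and outcomes below are hypothetical placeholders, not our dataset.

from collections import defaultdict

# Each case is (country, configuration, outcome), where the configuration
# string encodes the five WEIRD conditions and the outcome records whether
# a plan element (e.g., data management) is strongly developed.
cases = [
    ("CountryA", "WeIRD", True),
    ("CountryB", "WeIRD", True),
    ("CountryC", "WEIrD", True),
    ("CountryD", "wEIrd", False),
    ("CountryE", "weIrd", False),
]

by_config = defaultdict(list)
for country, config, outcome in cases:
    by_config[config].append(outcome)

# A configuration whose cases all share the outcome is a candidate "recipe".
for config, outcomes in sorted(by_config.items()):
    consistency = sum(outcomes) / len(outcomes)
    print(f"{config}: n={len(outcomes)}, outcome consistency = {consistency:.2f}")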

Data management has three different configurations of countries with highly developed plans. In the first configuration, for WeIRD countries (those that are Western, Industrialized, Rich, and Democratic, but not Educated; e.g., France, Italy, Portugal, and Spain), being Western was the best predictor of having data management as part of their AI plan, and the other components were of much less importance. Of interest, not being Educated was also core, making it more likely that these countries would have data management as part of their plan. This would suggest that these countries recognize that they need to catch up on data management and have put plans in place that exploit their Western ties to do so.

In the second configuration, which features WEIrD Czechia, Estonia, Lithuania, and Poland, being Democratic was the core and hence most important predictor, while Western, Educated, and Industrialized were peripheral and hence less important. Interestingly, not being Rich made it more likely that data management would be included. This would suggest that these countries have developed data management plans efficiently, again leveraging their democratic allies to do so.

In the third and final configuration, which includes the WeirD countries of Mexico, Serbia, and Uruguay, along with weirD India, the only element whose presence mattered was Democracy. That these countries were able to develop data management plans in low-wealth, low-education, and low-industrialization contexts demonstrates the importance of investment in AI data management as a low-cost intervention in building AI policy.

Taken together, there are many commonalities, but being Western and/or Democratic was the best predictor of a country having a data governance strategy in its plan. In countries that are Western or Democratic, there is often a great deal of public pressure (and worry) about data governance, and we suspect these countries included data governance to satisfy the demands of their populace.

We also examined what conditions led to the absence of a highly developed data management plan. There were two configurations with consistently low development of data management. In the first configuration, which features wEIrd Russia and the UAE and weIrd China, not being Rich and not being Democratic were core conditions. In the second configuration, which includes wEIRD Japan and Korea, the core conditions were being non-Western but highly Educated. Common across both configurations was that all countries were Industrialized but not Western. This would suggest that data management is more a concern of Western countries than non-Western ones, whether they are democratic or not.

However, we also found that the largest grouping of countries, the 15 WEIRD countries in the sample, was represented in neither the high nor the low configurations. We believe this is because there are multiple paths for AI policy development, and these countries do not all stress data governance and management. For example, Australia, the UK, and the US have strong data governance, while Canada, Germany, and Sweden do not. Future investigation is needed to differentiate between the WEIRDest countries.

For algorithmic management, except for WeirD Mexico, Serbia, and Uruguay, there was no discernible pattern in terms of which countries included an acknowledgment of the need for and value of algorithmic management. We had suspected that the more WEIRD countries would be sensitive to this, but our data did not support this belief.

We examined the low outcomes for algorithmic management and found two configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being non-Western but Rich and Democratic. The second was wEIrd Russia and the UAE and weIrd China, where the core elements were not being Rich and not being Democratic. Common across the two configurations, covering six countries, was being non-Western but Industrialized. Again, this suggests that algorithmic management is more a concern of Western nations than non-Western ones.

For AI governance, we again found that, except for WeirD Mexico, Serbia, and Uruguay, there was no discernible pattern in which countries included this in their plans and which did not. We had believed AI governance and algorithmic management to be more advanced in WEIRD nations, so this was an unexpected result.

We examined the low outcomes for AI governance and found three different configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being non-Western but Rich and Democratic. The second was wEIrd Russia and the UAE, where the core elements were being non-Western but Educated. The third was weirD India, where the core elements were being non-Western but Democratic. Common across the three configurations, covering six countries, was not being Western. Again, this suggests that AI governance is more a concern of Western nations than non-Western ones.

There was a much clearer picture of high R&D development, where we found four configurations. The first configuration was the 15 WEIRD countries plus the WEIrD ones: Czechia, Estonia, Lithuania, and Poland. While the latter are not among the richer countries, they still manage to invest heavily in developing their R&D.

The second configuration included WeirD Mexico, Serbia, and Uruguay, and weirD India. As with data governance, these countries were joined by their generally democratic nature but lower levels of education, industrialization, and wealth.

Conversely, the third configuration included the non-Western, non-democratic nations such as weIRd Qatar and weIrd China. This would indicate that capability development is of primary importance for such nations, at the expense of other policy elements. The implication is that investment in the application of AI is much more important to these nations than its governance.

Finally, the fourth configuration included the non-Western but democratic nations: wEIRD Japan and Korea, and weIRD Singapore. This would indicate that the East, whether democratic or not, is just as focused on capability development and R&D investment as the West.

We did not find any consistent configurations for low R&D development across the 34 nations.

For high education capacity development, we found two configurations, both with Western but not Rich as core conditions. The first includes WEIrD Czechia, Estonia, Lithuania, and Poland, while the second includes WeirD Mexico, Serbia, and Uruguay. Common conditions for these seven nations were being Western and Democratic but not Rich; the former group was also Educated and Industrialized, while the latter was not. These former Eastern-bloc and post-colonial nations appear to be focusing on creating educational opportunities to catch up with other nations in the AI sphere.

Conversely, we found four configurations of low education capacity development. The first includes wEIRD Japan and Korea and weIRD Singapore, representing the non-Western but Industrialized, Rich, and Democratic nations. The second was weIRd Qatar, not Western or Democratic but Rich and Industrialized, while the third was wEIrd Russia and the UAE. The last was weirD India, being Democratic but low in all other areas. The common factor across these countries was being non-Western, demonstrating that educational investment to improve AI outcomes is a primarily Western phenomenon, irrespective of other plan elements.

We did not find any consistent configurations for high public service reform capacity development, but we did find three configurations for low investment in such plans. The first includes wEIRD Japan and Korea, the second weIRd Qatar, and the last weirD India. The common core factor across these three configurations was that they were not Western countries, further highlighting the different approaches taken by Western and non-Western countries.

Overall, we expected more commonality in which countries included certain elements; the fragmented nature of our results likely reflects a very early stage of AI adoption, with countries simply trying to figure out what to do. We believe that, over time, WEIRD countries will start to converge on what is important and those insights will be reflected in their national plans.

There is one other message in our results: the West and the East are taking very different approaches to AI development in their plans. The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional guardrails of technology management (e.g., data governance, data management, education, public service reform). By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place and is spending relatively less effort on building the R&D capacity that is essential to AI development. This is perhaps why many Western technology leaders are calling for a six-month pause on AI development: the pause could allow suitable guardrails to be put in place.

However, we are extremely doubtful that countries like China will see the wisdom in a six-month pause; they would more likely use it to create even more space between their R&D capacity and the rest of the world's. This "all gas, no brakes" Eastern philosophy has the potential to cause great global harm but will undeniably increase the East's dominance in this area. We have little doubt about the need for suitable guardrails in AI development but are equally convinced that a six-month pause is unlikely to be honored by China. Because of China's lead, the only prudent strategy is to build the guardrails while continuing to engage in AI development. Otherwise, the West will continue to fall further behind, resulting in a great set of guardrails with nothing of value to guard.

[1] A capital letter denotes being high in an element of WEIRD-ness, while a lowercase letter denotes being low in that element. For example, W means Western while w means not Western.

Continue reading here:

WEIRD AI: Understanding what nations include in their artificial intelligence plans - Brookings Institution


The AI Arms Race: Investing in the Future of Artificial Intelligence … – The Motley Fool

Posted: at 2:53 pm

The release of ChatGPT, a generative chatbot developed by the company OpenAI, caused quite a stir. It moved the artificial intelligence (AI) conversation from the tech world to the mainstream seemingly overnight. AI is making headlines, and investors wonder which companies have the upper hand in the arms race.

ChatGPT is innovative because of its ability to communicate using natural language processing and because it is generative, capable of producing various types of content. You've probably experienced basic customer service bots that can give canned responses to limited queries. But generative chatbots can develop original responses. Its capabilities include answering questions, assisting with composition, summarizing content, and more. This is why Microsoft (MSFT) has made a multiyear and multibillion-dollar investment in OpenAI.

The reason is simple. Microsoft is eyeing the vast search advertising market currently dominated by Alphabet's (GOOG, GOOGL) Google Search, as shown below. It is using ChatGPT technology to get there.

The chasm between Google Search and Microsoft Bing is vast, so Microsoft has everything to gain. After all, Google Search brought in $160 billion in revenue for Alphabet in 2022, equal to 80% of Microsoft's total fiscal 2022 sales.

Bing isn't Microsoft's only AI initiative. The company's comprehensive cybersecurity offerings leverage AI to fight bad actors, and Microsoft Copilot embeds into Microsoft Office apps to generate presentations, draft emails, and summarize texts. CEO Satya Nadella appears to be all-in on AI.

Microsoft's results for the fiscal third quarter of 2023 are simply outstanding: $52.9 billion in sales on 7% growth. Operating income for the quarter was $22.4 billion (up 10%), with a fantastic 42% margin.

The stock does not come cheap, as you'd probably expect. It trades near its 52-week high, and its price-to-earnings (P/E) ratio of more than 32 is higher than its one-year and three-year averages. Because of this, it might behoove new Microsoft investors to keep an eye out for a pullback in the stock price.

Microsoft is making dynamic moves in AI, but don't write off Alphabet just yet.

Some were quick to declare Microsoft the AI leader with its investment in ChatGPT, but this is like declaring a winner after the first inning of a baseball game. For years Alphabet has developed its own AI tools, including its answer to ChatGPT, named Bard. I tested Bard to inquire about Alphabet's other AI initiatives, such as better translation services, search by photo, and speech recognition.

Google Lens is an excellent example of a practical application of AI. This allows the user to search from a cellphone camera. For example, users can translate a menu written in another language just by pointing their camera at it. Other applications include copying text or identifying unknown objects.

Alphabet just announced it is combining its Google Brain and DeepMind research programs into one entity called Google DeepMind. Both have been studying AI for years with some of the most brilliant minds in the business. The push from Microsoft might create urgency for Alphabet to kick these initiatives into high gear.

The slowing economy has investors concerned that Alphabet's advertising revenue will suffer. But first-quarter earnings, announced on April 25, had many breathing a sigh of relief. Revenue rose to $69.8 billion on 3% growth (6% growth in constant currency). Operating income fell from $20.1 billion to $17.4 billion; however, $2.6 billion of the dip was due to one-time charges related to layoffs and office space reductions. CEO Sundar Pichai expressed on the earnings conference call a commitment to reining in costs moving forward.

Alphabet's stock is more than 10% off its 52-week high and more than 25% below where it stood at the beginning of 2022.

GOOG data by YCharts

The company is using the share price reduction to benefit stockholders by aggressively repurchasing shares. A total of $73.8 billion of shares (5.5% of the current market cap) was retired in 2022 and Q1 2023, and another $70 billion in buybacks was authorized with this earnings release.

The encouraging results do not mean the company is out of the woods. The economy is an ongoing headwind, YouTube sales were down year over year in Q1, and Microsoft's search competition will be a test. But investors don't beat the market by buying only when everything is rosy. They need to look beyond current challenges to identify long-term potential. This potential is why Alphabet's beaten-down stock could deliver higher long-term profits for investors.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Bradley Guichard has positions in Alphabet and Microsoft. The Motley Fool has positions in and recommends Alphabet and Microsoft. The Motley Fool has a disclosure policy.

Original post:

The AI Arms Race: Investing in the Future of Artificial Intelligence ... - The Motley Fool


Can Compute-In-Memory Bring New Benefits To Artificial … – SemiEngineering

Posted: at 2:53 pm

While CIM can speed up multiplication operations, it comes with added risk and complexity.

Compute-in-memory (CIM) is not necessarily an Artificial Intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. However, for that to be successful, an AI processing system would need to be explicitly architected to use CIM. The change would entail a shift from all-digital design workflows to a mixed-signal approach, which would require deep design expertise and specialized semiconductor fabrication processes.

Compute-in-memory eliminates weight coefficient buffers and streamlines the primitive multiply operations, striving for increased AI inference throughput. However, it does not perform neural network processing by itself. Other functions, such as input data streaming, sequencing, accumulation buffering, activation buffering, and layer organization, may become more vital factors in overall performance as model hardware mapping unfolds and complexity increases; more robust NPUs (Neural Processing Units) incorporate all of those functions.

Fundamentally, compute-in-memory embeds a multiplier unit in a memory unit. A conventional digital multiplier takes two operands as digital words and produces a digital result, handling signing and scaling. Compute-in-memory takes a different approach, storing weight coefficients as analog values in a specially designed transistor cell sub-array with rows and columns. Incoming digital data words enter the rows of the array, triggering analog voltage multiplications, and analog current summations then occur along the columns. An analog-to-digital converter creates the final digital word outputs from the summed analog values.
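A rough behavioral model of that flow can be written in a few lines of Python with NumPy. This is an illustrative sketch, not a circuit simulation: the noise level, array size, and ADC resolution below are assumed values chosen for demonstration.

import numpy as np

rng = np.random.default_rng(0)

def cim_matvec(weights, x_digital, adc_bits=8, analog_noise=0.01):
    # Analog storage is imperfect: model per-cell deviation in the weights.
    stored = weights + rng.normal(0.0, analog_noise, weights.shape)
    # Row inputs multiply cell values; currents sum along the columns.
    column_currents = x_digital @ stored
    # Idealized ADC: uniform quantization over the observed full scale.
    full_scale = float(np.max(np.abs(column_currents))) or 1.0
    levels = 2 ** (adc_bits - 1)
    return np.round(column_currents / full_scale * levels) / levels * full_scale

W = rng.normal(size=(64, 16))                   # weight sub-array
x = rng.integers(0, 16, size=64).astype(float)  # 4-bit digital input words
exact = x @ W
approx = cim_matvec(W, x)
print("max abs error:", np.max(np.abs(exact - approx)))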

An individual memory cell can be straightforward in theory, and several candidate cell designs exist.

Still, operating these cells presents mixed-signal challenges and a technology gap that is not closing anytime soon. So, why the intense interest in compute-in-memory for AI inference chips?

First, it can be fast, because the analog multiplication happens as part of the memory read cycle, transparent to the rest of the surrounding digital logic. It can also be lower power, since fewer transistors switch at high frequencies. But there are some limitations from a system viewpoint. The additional steps needed to program the analog values into the memory cells are a concern, and inaccuracy in the analog voltages, which may drift over time, can inject bit errors into results, showing up as detection errors or false alarms.

Aside from its analog nature, the biggest concern for compute-in-memory may be bit precision and AI training requirements. Researchers seem confident in 4-bit implementations; however, more training cycles must be run for reliable inference at low precision. Raising the precision to 8 bits lowers training demands, but it also increases the complexity of the arrays and of the analog-to-digital converter for each array, offsetting area and power savings and worsening the chance of bit errors in the presence of system noise.
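The precision tradeoff can be illustrated with a simple uniform-quantization experiment. The sketch below assumes Gaussian-distributed weights and inputs and symmetric quantization; it is not a model of any particular CIM array, but it shows why 8-bit weights typically yield a smaller dot-product error than 4-bit weights.

import numpy as np

rng = np.random.default_rng(1)

def quantize(w, bits):
    # Uniform symmetric quantization to the given bit width.
    scale = np.max(np.abs(w))
    levels = 2 ** (bits - 1) - 1
    return np.round(w / scale * levels) / levels * scale

w = rng.normal(size=4096)
x = rng.normal(size=4096)
exact = np.dot(x, w)
for bits in (4, 8):
    err = abs(np.dot(x, quantize(w, bits)) - exact)
    print(f"{bits}-bit weights: |dot-product error| = {err:.4f}")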

So is compute-in-memory worthy of consideration? There likely are niche applications where it could speed up AI inference. A more critical question: is the added risk and complexity of compute-in-memory worth the effort? A well-conceived NPU strategy and implementation may nullify any advantage of moving to compute-in-memory. We can contrast the tradeoffs for AI inference in four areas: power/performance/area (PPA), flexibility, quantization, and memory technology.


The answer to the original question might be that designers should consider CIM only if other, more established AI inference platforms (NPUs) cannot meet their requirements. Since CIM is riskier, costlier, and harder to implement, many should consider it only as a last-resort solution.

Expedera explores this topic in much more depth in a recent white paper, which can be found at: https://www.expedera.com/architectural-considerations-for-compute-in-memory-in-ai-inference/

Read this article:

Can Compute-In-Memory Bring New Benefits To Artificial ... - SemiEngineering

