Digitalized Discrimination: COVID-19 and the Impact of Bias in Artificial Intelligence – JD Supra

[co-author: Jordan Rhodes]

As the world grapples with the impacts of the COVID-19 pandemic, we have become increasingly reliant on artificial intelligence (AI) technology. Experts have used AI to test potential treatments, diagnose individuals, and analyze other public health impacts. Even before the pandemic, businesses were increasingly turning to AI to improve efficiency and overall profit. Between 2015 and 2019, the adoption of AI technology by businesses grew more than 270 percent.

The growing reliance on AI and other machine learning systems is to be expected considering the technology's ability to help streamline business processes and tackle difficult computational problems. But as we've discussed previously, the technology is hardly the neutral and infallible resource that so many view it to be, often sharing the same biases and flaws as the humans who create it.

Recent research continues to point out these potential flaws. One particularly important flaw is algorithm bias, which is the discriminatory treatment of individuals by a machine learning system. This treatment can come in various forms but often leads to the discrimination of one group of people based on specific categorical distinctions. The reason for this bias is simpler than you may think. Computer scientists have to teach an AI system how to respond to data. To do this, the technology is trained on datasets: datasets that are both created and influenced by humans. As such, it is necessary to understand and account for potential sources of bias, both explicit and inherent, in the collection and creation of a dataset. Failure to do so can result in bias seeping into a dataset and ultimately into the results and determinations made by an AI system or product that utilizes that dataset. In other words, bias in, bias out.

Examining AI-driven hiring systems exposes this flaw in action. An AI system can sift through hundreds, if not thousands, of résumés in short periods of time, evaluate candidates' answers to written questions, and even conduct video interviews. However, when these AI hiring systems are trained on biased datasets, the output reflects that exact bias. For example, imagine a résumé-screening machine learning tool that is trained on a company's historical employee data (such as résumés collected from a company's previously hired candidates). This tool will inherit both the conscious and unconscious preferences of the hiring managers who previously made all of those selections. In other words, if a company historically hired predominantly white men to fill key leadership positions, the AI system will reflect that preferential bias when selecting candidates for other similar leadership positions. As a result, such a system discriminates against women and people of color who may otherwise be qualified for these roles. Furthermore, it can embed a tendency to discriminate within the company's systems in a manner that makes it more difficult to identify and address. And as the country's unemployment rate skyrockets in response to the pandemic, some have taken issue with companies relying on AI to make pivotal employment decisions, like reviewing employee surveys and evaluations to determine whom to fire.
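
To make the "bias in, bias out" point concrete, here is a minimal, hypothetical sketch in Python. The data, features, and favored-group effect are all invented for illustration and are not drawn from any real hiring system; the point is simply that a model trained on historically skewed decisions reproduces that skew when scoring two equally qualified candidates.

```python
# Hypothetical illustration: a screening model trained on biased historical
# hiring decisions learns to prefer the historically favored group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
qualification = rng.normal(size=n)          # feature 0: qualification score
group = rng.integers(0, 2, size=n)          # feature 1: 1 = historically favored group
# Historical labels: managers favored the majority group regardless of qualification.
hired = ((qualification > 0) | (group == 1)).astype(int)

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates who differ only in group membership:
candidates = np.array([[1.0, 1], [1.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # the favored-group candidate scores higher
```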

Congress has expressed specific concerns regarding the increase in AI dependency during the pandemic. In May, some members of Congress addressed a letter to House and Senate Leadership, urging that the next stimulus package include protections against federal funding of biased AI technology. If the letter's recommendations are adopted, certain businesses that receive federal funding from the upcoming stimulus package will have to provide a statement certifying that bias tests were performed on any algorithms the business uses to automate or partially automate activities. Specifically, this testing requirement would apply to companies using AI to make employment and lending determinations. Although the proposal's future is uncertain, companies invested in promoting equality do not have to wait for Congress to act.

In recent months, many companies have publicly announced initiatives to address how they can strive to reduce racial inequalities and disparities. For companies considering such initiatives, one actionable step could be a strategic review of the AI technology the company utilizes. Such a review could include verifying whether that AI technology has been bias-tested and considering its overall potential for automated discriminatory effects given the context of its specific use.

Only time will reveal the large-scale impacts of AI on our society and whether we've used AI in a responsible manner. However, in many ways, the pandemic demonstrates that these concerns are only just beginning.


Ushering in a new era of work with RPA and AI – GCN.com

INDUSTRY INSIGHT

Government is ushering in a new era of work, using automation and artificial intelligence to help the federal workforce achieve higher levels of productivity and decision-making.

Over the past two years, agencies have focused on shifting the workforce to high-value work -- a key goal of the President's Management Agenda -- by taking advantage of robotic process automation and other technologies to reduce error, improve compliance and eliminate repetitive administrative tasks.

Although RPA is a useful IT capability that allows agencies to eliminate low-value, mundane, transactional work, it can only make simple decisions. By adding AI to the equation, agencies can accelerate the ability of RPA to complete a multitude of tasks at once. This can be particularly helpful when analyzing large swaths of data, enabling decision-makers to meet goals more efficiently and effectively.

The combination of these two technologies has delivered more real, tangible results that can be actively applied to digital solutions for civilian and defense agencies than either technology could do individually.

RPA, which provides software bots to automate high-volume, repeatable tasks within legacy processes and applications, has opened opportunities to massively transform government operations. Current RPA programs operating within agencies are achieving roughly five hours of workload elimination per employee, according to the RPA Program Playbook, published earlier this year by the Federal RPA Community of Practice.

The Playbook continues: "If the government deployed RPA at scale and achieved only 20 hours of workload elimination per employee, the net capacity gained would be worth $3 billion -- and that is only scratching the surface."

RPA, a building block for AI

Many agencies across the federal government have initiated RPA programs to automate tasks of varying complexity across multiple functional areas including finance, acquisition, IT, human resources, security and mission assurance. Popular uses of RPA include data entry, data reconciliation, spreadsheet manipulation, systems integration, automated data reporting, analytics, customer outreach and communications.

In 2019, the Food and Drug Administration's Center for Drug Evaluation and Research reported it had seven RPA projects in development, including one that automated drug intake forms and freed up the pharmaceutical and medical staff for the agency's core science mission. Last year, the Defense Logistics Agency completed a first-of-its-kind proof of concept in government that allowed unattended bots to work around the clock. DLA recently reported it has saved more than 200,000 labor-hours with the 82 RPA bots it launched in the past year, CIO George Duchak said during an AFCEA DC virtual event in May. In fact, using basic bots is the first step in the agency's AI journey, he said.

"RPA is transformative because it establishes the building blocks for AI in terms of IT infrastructure and task standardization," the Playbook notes. "If RPA is effectively deployed, machine learning (ML) and intelligent automation are only a few, manageable steps away."

RPA/AI use case: Transaction matching, fraud prevention

Applying AI/ML along with RPA provides opportunities for financial management offices to address areas such as transaction matching, fraud prevention and anomaly detection.

For example, large financial management offices struggle to resolve and match hundreds of thousands of transactions, many of which require significant manual effort. An RPA solution can automatically access data from various financial management systems and process transactions without human intervention, but it will fall short when data variances exceed tolerances for matching data and documents and will result in unmatched transactions. The addition of an AI/ML capability would accelerate the handling and processing of data and associated actions, including matching financial transactions or identifying fraud.

If there is an error in the data on a particular transaction, for example, an automated system might not be able to match the transactions with confidence. However, an ML platform could train models to rapidly examine the correlation between historical and current transactions. It could help identify potential matches or irregular behavior based on transactions with erroneously mismatched fields, such as different dates or name variants. This capability would accelerate the review process and reserve human effort for the most important activities.
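
As an illustration only, and not any agency's actual system, the sketch below pairs the kind of deterministic matching rules an RPA bot can apply with a similarity score that stands in for the ML model described above. The vendor names, tolerance, and score weights are invented.

```python
# Illustrative sketch: rule-based matching first (the RPA part), with a
# similarity-scoring fallback standing in for an ML matching model.
from difflib import SequenceMatcher

ledger = [{"id": "A1", "vendor": "Acme Corp", "amount": 1204.50, "date": "2020-06-01"}]
payments = [{"vendor": "ACME Corporation", "amount": 1204.50, "date": "2020-06-02"}]

def rule_match(txn, entry, amount_tolerance=0.01):
    # Deterministic rules a bot can apply without human intervention.
    return (txn["vendor"] == entry["vendor"]
            and abs(txn["amount"] - entry["amount"]) <= amount_tolerance)

def similarity_score(txn, entry):
    # Stand-in for an ML model: tolerate name variants and small discrepancies.
    name_sim = SequenceMatcher(None, txn["vendor"].lower(), entry["vendor"].lower()).ratio()
    amount_sim = 1.0 if abs(txn["amount"] - entry["amount"]) <= 0.01 else 0.0
    return 0.7 * name_sim + 0.3 * amount_sim

for txn in payments:
    for entry in ledger:
        if rule_match(txn, entry):
            print("matched by rules:", entry["id"])
        elif similarity_score(txn, entry) > 0.8:
            print("flagged as probable match for review:", entry["id"])
```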

To be effective, an ML platform must adhere to open standards and offer an extensible set of tools that enable end-to-end data science and RPA development in a rapid, scalable and sustainable manner. This will allow agencies to innovate further as their data maturity and AI efforts improve.

As the power of AI grows with each new use case, so too do the misconceptions surrounding the technology, particularly the erroneous idea that AI will replace human workers, threatening their livelihoods as the technology takes over their jobs.

RPA has proved it can automate the manual, repetitive, low-value tasks that often drive worker dissatisfaction. The use of AI should enhance workforce efficiency by deferring boring, time-consuming tasks to computers, allowing humans to then make better, more informed decisions based on proven, trusted data that they did not have to take the time to analyze. By implementing the proper change management and communication strategy, agencies can help their employees see RPA and AI as a path to more meaningful, mission-aligned work.

About the Author

Vimesh Patel is the chief technology advisor at World Wide Technology.


USPTO Releases Benchmark Study on the Artificial Intelligence Patent Landscape – IPWatchdog.com

The diffusion trend for artificial intelligence inventor-patentees started at 1% in 1976 and increased to 25% in 2018, which means that 25% of all unique inventor-patentees in 2018 used AI technologies in their granted patents.

On October 27, the United States Patent and Trademark Office (USPTO) released a report titled "Inventing AI: Tracing the diffusion of artificial intelligence with U.S. patents." The study showed that artificial intelligence (AI) patent applications increased by more than 100% between 2002 and 2018, from 30,000 to over 60,000, and the overall share of patent applications containing AI subject matter rose from 9% to nearly 16%.

According to the U.S. National Institute of Standards and Technology (NIST), AI technologies and systems "comprise software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action." However, for purposes of patent applications and grants, the USPTO defines AI as including one or more of eight component technologies: vision, planning/control, knowledge processing, speech, AI hardware, evolutionary computation, natural language processing, and machine learning. Between 1990 and 2018, the largest AI technological areas were planning/control and knowledge processing, which include inventions directed to controlling systems, developing plans, and processing information. In addition, the study showed that patent applications in the areas of machine learning and computer vision have shown a pronounced increase since 2012.

The study explained that, since 1976, AI technologies have been diffusing across a large percentage of technology subclasses, spreading from 10% of all patent technology subclasses in 1976 to more than 42% in 2018. The study identified three distinct clusters with different diffusion rates, in order from the fastest to the slowest growing: 1.) knowledge processing and planning/control, 2.) vision, machine learning, and AI hardware, 3.) evolutionary computation, speech, and natural language processing. The study noted that the clusters suggest a form of technological interdependence among the AI component technologies, but also noted that additional research is required to understand the factors behind the patterns.

The study also identified the growth in the number of AI inventors as an indicator of diffusion. In particular, the diffusion trend for inventor-patentees started at 1% in 1976 and increased to 25% in 2018, which means that 25% of all unique inventor-patentees in 2018 used AI technologies in their granted patents.

Noting that AI requires specialized knowledge, the study pointed out that diffusion is generally slower and can be restricted to a narrow set of organizations in areas where skilled labor and technical information are harder to obtain, such as in AI. The study identified the top 30 U.S. companies that held 29% of all AI patents granted from 1976 to 2018. The leading company was IBM Corp. with 46,752 patents, followed by Microsoft Corp. with 22,067 patents and Google Inc. with 10,928 patents.

With respect to geographic diffusion of AI, the study indicated that, between 1976 and 2000, AI inventor-patentees tended to be concentrated in larger cities or established technology hubs, such as Silicon Valley, California, because those regions were home to companies with employees having the specialized knowledge required to understand AI technologies. Since 2001, AI inventor-patentees have diffused widely across the U.S. For example, Maine and South Carolina are active in digital data processing and data processing adapted for businesses, Oregon is active in fitness training and equipment, and Montana is active in inventions analyzing the chemical and physical properties of materials. The study also showed that the American Midwest is adopting AI technology, but at a slower rate. For example, Wisconsin leads in medical instruments and processes for diagnosis, surgery, and identification and Iowa, Kansas, Missouri, Nebraska, and Ohio are contributing to AI technologies relating to telephonic communications. Further, inventor-patentees in North Dakota are actively contributing to AI technologies as applied to agriculture.

The USPTO noted that the study suggests AI has the potential to be as revolutionary as electricity or the semiconductor, and that realizing this potential depends, at least in part, on the ability of innovators and firms to successfully incorporate AI inventions into existing and new products, processes, and services.

The report results were obtained from a machine learning AI algorithm that determined the volume, nature, and evolution of AI and its component technologies as contained in U.S. patents from 1976 through 2018. This methodology improved the accuracy of identifying AI patents by better capturing the diffusion of AI across technology, companies, inventor-patentees, and geography.

Rebecca Tapscott is an intellectual property attorney who has joined IPWatchdog as our Staff Writer. She received her Bachelor of Science degree in chemistry from the University of Central Florida and received her Juris Doctorate in 2002 from the George Mason School of Law in Arlington, VA.

Prior to joining IPWatchdog, Rebecca worked as a senior associate attorney for the Bilicki Law Firm and Diederiks & Whitelaw, PLC. Her practice has involved intellectual property litigation, the preparation and prosecution of patent applications in the chemical, mechanical, and electrical arts, strategic alliance and development agreements, and trademark prosecution and opposition matters. In addition, she is admitted to the Virginia State Bar and is a registered patent attorney with the United States Patent and Trademark Office. She is also a member of the American Bar Association and the American Intellectual Property Law Association.


How AI is helping reopen factory floors safely in a pandemic – ThePrint


One of the biggest challenges after the coronavirus lockdown has been to balance lives and livelihoods. How factories and workplaces can reopen while ensuring the safety of their employees remains the pertinent question. As employers around the globe grapple with this, it has become abundantly clear that the solution cannot be one size fits all. The way out needs a technology that can be adapted and fine-tuned to every factory floor, airport lounge and classroom. At the same time, it needs to be broad-based enough to meet international health and safety parameters.

In other words, the answer lies in adapting Artificial Intelligence and Internet of Things (IoT) technologies.


For my team at BLP Industry.AI, the first step was to understand the practical difficulties that floor managers and supervisors in factories were facing, such as the inability to constantly monitor whether employees were wearing the required safety gear. Another difficulty was ensuring social distancing, not just among employees but among visitors as well. Going through the inquiries we received from about 40 companies, both domestic and multinational, we learnt that some of them wanted their employees to submit a self-declaration document every day, which included questions on their health and whether they had visited a containment zone recently. Monitoring these daily self-declarations was proving to be cumbersome.

To ensure the safety of employees, an early warning system is necessary so that anyone running a high fever can be taken off the factory or office floor immediately. But there is no way companies can regularly monitor the temperature of every employee. Also, to prevent the spread of Covid-19, contact tracing is necessary, which again is a difficult task for employers. In addition, companies want to protect their supply chains, in particular their micro, small and medium-scale (MSME) suppliers. Employers wanted to achieve all of this in a cost-effective manner.

We focussed on developing AI and IoT-based technology solutions for industry, educational institutions, hospitals, hotels, airports, etc., and came up with three broad ones that could be adapted based on the specific needs of different industries.



The first product, Trust AI, is a cloud-based solution that uses a combination of visual analytics, mathematical, and neural network models to analyse video feed. Any existing camera is connected to the cloud or the company's server, which scans the feed in real time and immediately sends out an alert when a breach occurs. No new investment in CCTV cameras is required.

Alerts are sent to the safety officer or supervisors if safety gear usage (masks, helmets, safety jackets, etc.) or social distancing guidelines are not followed. In addition, the tool monitors hotspots in the factory and the frequency of breaches so that managers can change the workforce on the floor.

Besides Covid-19, the technology can also be used in detecting fires, increasing workforce productivity, and reducing manufacturing defects. Institutions are also reducing security costs by replacing guards with computer vision models.
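
The article does not describe Trust AI's internals, but a monitoring loop of this kind can be sketched schematically as follows; the detection function and alerting backend are hypothetical placeholders, not the product's actual code.

```python
# Schematic sketch of a camera-feed monitoring loop that raises alerts on
# safety breaches; detect_breaches() is a placeholder for a vision model.
import time

def detect_breaches(frame):
    """Placeholder for a computer-vision model that returns breach events,
    e.g. {'type': 'no_mask', 'camera': 'line-3'}."""
    return []

def send_alert(event):
    print(f"ALERT to safety officer: {event['type']} at camera {event['camera']}")

def monitor(video_stream):
    breach_log = []  # retained so hotspots and breach frequency can be reported
    for frame in video_stream:
        for event in detect_breaches(frame):
            send_alert(event)
            breach_log.append((time.time(), event))
    return breach_log
```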


The second product, Us Pro, is a cellphone technology meant only for enterprises and industries. It provides real-time alerts on the employee's cell phone when social distancing is breached. The phone sends out an instant alert to its owner, thereby providing an active defence system. Once an alert is triggered, it is recorded on the back-end AI application. This application, with relevant reports and a dashboard, is only accessible to the health or safety officer of that particular factory or office. Privacy was a major factor for all the companies, and therefore a number of steps were taken, such as limiting use of the technology to while the employee is at the factory or office, and keeping all communications between the phones encrypted. As a result, everyone remains anonymous.

The technology also tracks the employee's temperature every few hours, and alerts the safety officer if there are signs of high fever. In case the person tests Covid positive, the AI application, using contact tracing, determines who among the others has a higher probability of falling ill or contracting the virus.
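
The contact-tracing step might look something like the following sketch, which simply counts recorded proximity breaches involving the affected device; the event format, device IDs, and look-back window are assumptions for illustration, not details from Us Pro.

```python
# Hypothetical sketch: rank anonymised device IDs by how often they breached
# distancing with a device whose owner later tested positive.
from collections import Counter
from datetime import datetime, timedelta

proximity_events = [
    # (timestamp, device_a, device_b) recorded when distancing was breached
    (datetime(2020, 7, 1, 9, 30), "dev-17", "dev-42"),
    (datetime(2020, 7, 1, 9, 45), "dev-17", "dev-42"),
    (datetime(2020, 7, 1, 14, 0), "dev-17", "dev-08"),
]

def likely_contacts(positive_device, events, now, window=timedelta(days=14)):
    cutoff = now - window
    exposures = Counter()
    for ts, a, b in events:
        if ts < cutoff or positive_device not in (a, b):
            continue
        other = b if a == positive_device else a
        exposures[other] += 1  # more repeated breaches -> higher assumed exposure
    return exposures.most_common()

print(likely_contacts("dev-17", proximity_events, now=datetime(2020, 7, 10)))
```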


The third solution, Spot AI, involves wearable devices such as a wrist band or an ID card, which vibrates when a worker breaches social distancing and geo-fencing norms.

The technology platform is normally used to drive operational and workforce productivity by locating and coordinating human, machine, and material flow on a factory floor. Increasingly, a number of US universities and Indian schools are evaluating the use of this technology as a non-intrusive way to create social distancing awareness.

These AI-based tools have helped companies be in a position to proactively implement safety measures in the workplace. Now, the retail sector too can use these applications to ensure that customers are adhering to safety norms such as wearing masks when in the store, or to keep a check on travellers at airports.

In crowded spaces such as offices and commercial buildings, these tools will help in protecting a large number of people if someone shows the symptoms of Covid-19. In most schools, cellphones are not allowed, so students can use wrist bands. The large Indian hotel chains that have properties across the country are evaluating a combination of these technologies. Hospitals, too, are evaluating the camera and wearable devices to keep their medical staff and patients safe.


There were several issues that the partner companies and Industry.AI grappled with while developing these solutions. But the three that stood out were technology, privacy, and implementational challenges.

Based on regular feedback from partner companies on how to adapt the application to real-world requirements, a number of technology challenges were overcome.

Privacy was a critical issue that was extensively debated, and mitigating steps were taken. One, the technologies are only being applied within the confines of a factory, university, hospital, or an office. All the data remains anonymous, and only the safety and health officer or the appointed administrators of that particular company have access to them. Two, the camera feed is not stored on any server. All alerts are deleted after a certain period of time as per the company's privacy policy.

As for implementation, the human resource managers and the culture of the company play a critical role. The implementation requires a good understanding of the workforce's concerns and perspectives in order to ensure that the usage, scope and benefits of the technology are communicated clearly. Companies must create an environment of trust and convince their employees that it is in everyone's best interest to adopt these preventive and safety measures.


It is heartening to see corporations develop partnerships and come together in a time of crisis. In this case, the partnership between a few large corporations, supported by an able technology company, and subsequent pilot programmes with other industries, resulted in scalable and frugal solutions being tested and implemented in a short period of time by a number of factories, from auto component major Lucas TVS to one of India's largest electrical equipment companies, Havells. Moreover, some firms are now evaluating how these technologies can be used in a post-Covid world as well.

There are fears that the coronavirus pandemic may re-occur in waves, and the vaccines may not be ready for all virus mutations. This is forcing industries not only to partner with each other, but also with governments, to accelerate the adoption of next-generation technology. Given that supply chains have been broken and disrupted, we are seeing corporations accelerate their digital transformation plans to improve organisational productivity and decision-making.

As we move towards getting back to what we once considered normal, we will see the traditional paradigm being re-evaluated. And AI and Big Data will drive not only asset and employee productivity, but also the safety of the workforce.

Tejpreet Singh Chopra is the Founder and CEO of BLP Group, and former CEO of GE in India, Sri Lanka and Bangladesh. He is on the board of SRF, IEX, Anand Group, and AP Moller Maersk's Pipavav port. He is a Young Global Leader of the World Economic Forum, and an Aspen Institute Fellow. Views are personal.


The AI of digitalization – Bits&Chips

Jan Bosch is a research center director, professor, consultant and angel investor in start-ups. You can contact him at jan@janbosch.com.


This article is the last of four where I explore different dimensions of digital transformation. Earlier, I discussed business models, product upgrades and data exploitation. The fourth dimension is concerned with artificial intelligence. Similar to the other dimensions, our research showed that there's a clear evolution path that companies go through as they transition from being traditional companies to becoming digital ones (see the figure).

In the first stage, the company is still focused on data analytics. All data is processed for the sole purpose of human consumption and interpretation. At this point, things are all about dashboards, visualization and stakeholder views.

In the second stage, the first machine learning (ML) or deep learning (DL) models are starting to be developed and deployed. The training of the models is based on static data sets that have been assembled at one point in time and that don't evolve unless there's an explicit decision taken. When that happens, a new data set is assembled and used for training.

In the third stage, DevOps and MLOps are merged in the sense that there's a continuous retraining of models based on the most recent data. This data is no longer a data set, but rather a window over a data stream that's used for training and continuous re-training. Depending on the domain and the rate of change in the underlying data, the MLOps loop is either aligned with the DevOps loop or is executed more or less frequently. For instance, when using ML/DL for house price prediction in a real-estate market, it's important to frequently retrain the model based on the most recent sales data as house prices change continuously.
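
A minimal sketch of this sliding-window retraining idea, using the house-price example, might look like the following; the feature set (square metres and rooms), window size, and synthetic data are assumptions for illustration, not the author's pipeline.

```python
# Sketch: retrain a price model on a rolling window of recent sales rather
# than on a frozen data set.
from collections import deque
import numpy as np
from sklearn.linear_model import LinearRegression

window = deque(maxlen=500)          # keep only the most recent 500 sales
model = LinearRegression()

def on_new_sale(features, price):
    """Called by the data stream; in practice the retraining frequency would
    be tied to the MLOps/DevOps cadence rather than to every single event."""
    window.append((features, price))
    if len(window) >= 20:           # wait for a minimally useful window
        X = np.array([f for f, _ in window])
        y = np.array([p for _, p in window])
        model.fit(X, y)

# Feed a synthetic stream of sales (illustrative data only).
rng = np.random.default_rng(1)
for _ in range(1000):
    sqm, rooms = rng.uniform(30, 200), rng.integers(1, 6)
    on_new_sale([sqm, rooms], 3000 * sqm + 10000 * rooms + rng.normal(0, 5000))

print(model.predict(np.array([[100.0, 3]])))  # prediction reflects recent data only
```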

Especially in the software-intensive embedded systems industry, as ML/DL models are deployed in each product instance, the next step tends to be the adoption of federated approaches. Rather than conducting all training centrally, the company adopts federated learning approaches where all product instances are involved in training and model updates are shared between product instances. This allows for localization and customization, as specific regions and users may want the system to behave differently. Depending on the approach to federated learning, it's feasible to allow for this. For example, different drivers want their adaptive cruise control system to behave in different ways. Some want the system to take a more careful approach whereas others would like to see a more aggressive way of braking and accelerating. Each product instance can, over time, adjust itself in response to driver feedback.
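
As a rough illustration of the federated idea, and not any particular product's implementation, the sketch below runs a simple federated-averaging loop in which each product instance trains on its own private data and only the model weights are shared; a personalized variant could additionally keep some locally tuned parameters per instance.

```python
# Illustrative federated averaging: local training per instance, weight
# averaging across instances, no raw data leaves the instance.
import numpy as np

def local_update(weights, local_X, local_y, lr=0.01, steps=50):
    w = weights.copy()
    for _ in range(steps):
        grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)   # simple FedAvg; real systems weight by data size

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                    # five product instances with private data
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, size=100)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)                              # approaches the shared underlying model
```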

Finally, we reach the automated experimentation stage, where the system fully autonomously experiments with its own behavior with the intent of improving certain success metrics. Whereas in earlier stages humans conduct A/B experiments or similar and are the ones coming up with the A and B alternatives, here it's the system itself that generates alternatives, deploys them, measures the effect and decides on next steps. Although the examples in this category are few and far between, we've been involved in, among others, cases where a system of this type is used to explore configuration parameter settings (most systems have thousands) in order to optimize the system's performance automatically.
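
A toy version of such automated experimentation is sketched below, with random search standing in for the system's own alternative generator and an invented scoring function in place of a real deployment metric; the parameter names and ranges are placeholders.

```python
# Sketch: the system generates candidate configurations, "deploys" and
# measures them, and keeps the best one. measure() is a stand-in for an
# observed success metric such as throughput or latency.
import random

search_space = {"cache_mb": range(64, 1025, 64), "worker_threads": range(1, 17)}

def measure(config):
    # Placeholder metric; a real system would observe this after deployment.
    return -abs(config["cache_mb"] - 512) - 20 * abs(config["worker_threads"] - 8)

best_config, best_score = None, float("-inf")
for trial in range(100):
    candidate = {name: random.choice(list(values)) for name, values in search_space.items()}
    score = measure(candidate)        # deploy, measure, decide on next step
    if score > best_score:
        best_config, best_score = candidate, score

print(best_config, best_score)
```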

Using AI is not a binary step, but a process that evolves over time

Concluding, digital transformation is a complex, multi-dimensional challenge. One of the dimensions is the adoption of AI/ML/DL. Using AI is not a binary step, but rather a process that evolves over time and proceeds through predefined steps. Deploying AI allows for automation of tasks that couldn't be automated earlier and for improving the outcomes of automated processes through smart, automated decisions. Once you have software, you can generate data. Once you have data, you can employ AI. Once you have AI, you can truly capitalize on the potential of digitalization.

In his course "Speed, data and ecosystems," Jan Bosch provides you with a holistic framework that offers strategic guidance on how you can successfully identify and address the key challenges to excel in a software-driven world.


What is Artificial Intelligence (AI) ? | IBM

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was denoted by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM), which was published in 1950. In this paper, Turing, often referred to as the "father of computer science," asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test," where a human interrogator would try to distinguish between a computer and a human text response. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI as well as an ongoing concept within philosophy as it utilizes ideas around linguistics.

Stuart Russell and Peter Norvig then proceeded to publish Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking vs. acting:

Human approach:

Systems that think like humans
Systems that act like humans

Ideal approach:

Systems that think rationally
Systems that act rationally

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems which make predictions or classifications based on input data.

Today, a lot of hype still surrounds AI development, which is to be expected of any new emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes here (01:08:05) (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To read more on where IBM stands within the conversation around AI ethics, read more here.


What is Artificial Intelligence (AI)? | Oracle

Despite AI's promise, many companies are not realizing the full potential of machine learning and other AI functions. Why? Ironically, it turns out that the issue is, in large part... people. Inefficient workflows can hold companies back from getting the full value of their AI implementations.

For example, data scientists can face challenges getting the resources and data they need to build machine learning models. They may have trouble collaborating with their teammates. And they have many different open source tools to manage, while application developers sometimes need to entirely recode models that data scientists develop before they can embed them into their applications.

With a growing list of open source AI tools, IT ends up spending more time supporting the data science teams by continuously updating their work environments. This issue is compounded by limited standardization across how data science teams like to work.

Finally, senior executives might not be able to visualize the full potential of their company's AI investments. Consequently, they don't lend enough sponsorship and resources to creating the collaborative and integrated ecosystem required for AI to be successful.


Artificial intelligence | Definition, Examples, Types, Applications …

Top Questions

What is artificial intelligence?

Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment. Although there are no AIs that can perform the wide variety of tasks an ordinary human can do, some AIs can match humans in specific tasks.

Are artificial intelligence and machine learning the same?

No, artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is a method of training a computer to learn from its inputs without explicit programming for every circumstance. Machine learning helps a computer achieve artificial intelligence.

What is the impact of artificial intelligence (AI) on society?

Artificial intelligence's impact on society is widely debated. Many argue that AI improves the quality of everyday life by doing routine and even complicated tasks better than humans can, making life simpler, safer, and more efficient. Others argue that AI poses dangerous privacy risks, exacerbates racism by standardizing people, and costs workers their jobs, leading to greater unemployment. For more on the debate over artificial intelligence, visit ProCon.org.

Summary

artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence, conspicuously absent in the case of Sphex, must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as "jump" unless it previously had been presented with "jumped," whereas a program that is able to generalize can learn the "add ed" rule and so form the past tense of "jump" based on experience with similar verbs.
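
The contrast can be shown in a few lines of Python; the memorised verb pairs and the "add ed" rule below are simply the toy versions of the examples used in the passage above.

```python
# Rote learning only repeats what it has seen; the "add ed" generalisation
# covers regular verbs it has never encountered.
rote_memory = {"walk": "walked", "look": "looked"}   # memorised pairs only

def rote_past_tense(verb):
    return rote_memory.get(verb)      # None for anything not seen before

def generalised_past_tense(verb):
    return verb + "ed"                # the learned "add ed" rule for regular verbs

print(rote_past_tense("jump"))        # None: never memorised
print(generalised_past_tense("jump")) # "jumped": generalised from similar verbs
```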


Artificial intelligence | NIST

Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a wide range of innovations including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing and benefitting nearly all aspects of our society and economy everything from commerce and healthcare to transportation and cybersecurity. But the development and use of the new technologies it brings are not without technical challenges and risks.

NIST contributes to the research, standards and data required to realize the full promise of artificial intelligence (AI) as a tool that will enable American innovation, enhance economic security and improve our quality of life. Much of our work focuses on cultivating trust in the design, development, use and governance of artificial intelligence (AI) technologies and systems. We are doing this by:

NIST's AI efforts fall in several categories:

NIST's AI portfolio includes fundamental research into and development of AI technologies, including the software, hardware, architectures and human interaction and teaming vital for AI computational trust.

AI approaches are increasingly an essential component in new research. NIST scientists and engineers use various machine learning and AI tools to gain a deeper understanding of and insight into their research. At the same time, NIST laboratory experiences with AI are leading to a better understanding of AI's capabilities and limitations.

With a long history of devising and revising metrics, measurement tools, standards and test beds, NIST increasingly is focusing on the evaluation of technical characteristics of trustworthy AI.

NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance are and increasingly will be a priority for the use and creation of trustworthy and responsible AI.

A fact sheet describes NIST's AI programs.

AI and machine learning (ML) are changing the way in which society addresses economic and national security challenges and opportunities. They are being used in genomics, image and video processing, materials, natural language processing, robotics, wireless spectrum monitoring and more. These technologies must be developed and used in a trustworthy and responsible manner.

While answers to the question of what makes an AI technology trustworthy may differ depending on whom you ask, there are certain key characteristics which support trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, and security (resilience) and mitigation of harmful bias. Principles such as transparency, fairness and accountability should be considered, especially during deployment and use. Trustworthy data, standards and evaluation, validation, and verification are critical for the successful deployment of AI technologies.

Delivering the needed measurements, standards and other tools is a primary focus for NIST's portfolio of AI efforts. It is an area in which NIST has special responsibilities and expertise. NIST relies heavily on stakeholder input, including via workshops, and issues most publications in draft for comment.


Artificial Intelligence and Machine Learning – INL

INL is working to raise awareness of its work in AI/ML and encourage more researchers to use the available resources. We are making our research outcomes available through technical reports and a series of symposia focusing on how AI/ML is impacting science and engineering. The presentations focus on the latest advances in AI/ML, current applications in the nuclear industry, research opportunities and ongoing collaborations.

INL AI/ML Symposium 9.0 held Sept. 8, 2022

The 9.0 Symposium focused on natural language processing methods and applications.

Presentation Slides

Download Here

INL AI/ML Symposium 8.0 held May 26, 2022

The 8.0 symposium focused on computation infrastructure in artificial intelligence and machine learning.

Presentation Slides

Download Here

INL AI/ML Symposium 7.0 held February 10, 2022

The 7.0 Symposium focused on addressing data issues and using data for different science and engineering applications.

Presentation Slides

Download Here

INL AI/ML Symposium 6.0 held Oct. 14, 2021

The 6.0 Symposium focused on resilience, both in resilience applications of AI/ML or in resilience in AI/ML approaches.

Presentation Slides

Download Here

INL AI/ML Symposium 5.0 held June 8, 2021

The 5.0 Symposium focused on trustworthy AI/ML.

Presentation Slides

Download Here

INL AI/ML Symposium. 4.0 held February 9, 2021

The 4.0 Symposium focused on trustworthy AI/ML.

Presentation Slides

Download Here

INL AI/ML Symposium 3.0 held October 16, 2020

The 3.0 Symposium discussed how ML and AI are currently being applied in the industry, including opportunities for engagement and collaboration.

Presentation Slides

Download Here

INL AI/ML Symposium 2.0 held July 9, 2020

The 2.0 Symposium discussed how ML and AI are currently being applied in the industry, including opportunities for engagement and collaboration.

Presentation Slides

Download Here

INL AI/ML Symposium 1.0 held April 17, 2020

INL held the first symposium on AI/ML approaches and activities related to science and engineering in April. The 1.0 Symposium focused on internal-to-INL activities and capabilities. A total of eleven speakers discussed a variety of current topics and future applications. Over 200 INL staff participated in the symposium.

Topics included:

Presentation Slides

Download Here


Artificial Intelligence (AI) – United States Department of State


A global technology revolution is now underway. The world's leading powers are racing to develop and deploy new technologies like artificial intelligence and quantum computing that could shape everything about our lives, from where we get energy, to how we do our jobs, to how wars are fought. We want America to maintain our scientific and technological edge, because it's critical to us thriving in the 21st century economy.

Investments in AI have led to transformative advances now impacting our everyday lives, including mapping technologies, voice-assisted smart phones, handwriting recognition for mail delivery, financial trading, smart logistics, spam filtering, language translation, and more. AI advances are also providing great benefits to our social wellbeing in areas such as precision medicine, environmental sustainability, education, and public welfare.

The term artificial intelligence means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.

The Department of State focuses on AI because it is at the center of the global technological revolution; advances in AI technology present both great opportunities and challenges. The United States, along with our partners and allies, can both further our scientific and technological capabilities and promote democracy and human rights by working together to identify and seize the opportunities while meeting the challenges by promoting shared norms and agreements on the responsible use of AI.

Together with our allies and partners, the Department of State promotes an international policy environment and works to build partnerships that further our capabilities in AI technologies, protect our national and economic security, and promote our values. Accordingly, the Department engages in various bilateral and multilateral discussions to support responsible development, deployment, use, and governance of trustworthy AI technologies.

The Department provides policy guidance to implement trustworthy AI through the Organization for Economic Cooperation and Development (OECD) AI Policy Observatory, a platform established in February 2020 to facilitate dialogue between stakeholders and provide evidence-based policy analysis in the areas where AI has the most impact. The State Department provides leadership and support to the OECD Network of Experts on AI (ONE AI), which informs this analysis. The United States has 47 AI initiatives associated with the Observatory that help contribute to COVID-19 response, invest in workforce training, promote safety guidance for automated transportation technologies, and more.

The OECD's Recommendation on Artificial Intelligence is the backbone of the activities at the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Policy Observatory. In May 2019, the United States joined together with likeminded democracies of the world in adopting the OECD Recommendation on Artificial Intelligence, the first set of intergovernmental principles for trustworthy AI. The principles promote inclusive growth, human-centered values, transparency, safety and security, and accountability. The Recommendation also encourages national policies and international cooperation to invest in research and development and support the broader digital ecosystem for AI. The Department of State champions the principles as the benchmark for trustworthy AI, which helps governments design national legislation.

GPAI is a voluntary, multi-stakeholder initiative launched in June 2020 for the advancement of AI in a manner consistent with democratic values and human rights. GPAI's mandate is focused on project-oriented collaboration, which it supports through working groups looking at responsible AI, data governance, the future of work, and commercialization and innovation. As a founding member, the United States has played a critical role in guiding GPAI and ensuring it complements the work of the OECD.

In the context of military operations in armed conflict, the United States believes that international humanitarian law (IHL) provides a robust and appropriate framework for the regulation of all weapons, including those using autonomous functions provided by technologies such as AI. Building a better common understanding of the potential risks and benefits that are presented by weapons with autonomous functions, in particular their potential to strengthen compliance with IHL and mitigate risk of harm to civilians, should be the focus of international discussion. The United States supports the progress in this area made by the Convention on Certain Conventional Weapons, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems (GGE on LAWS), which adopted by consensus 11 Guiding Principles on responsible development and use of LAWS in 2019. The State Department will continue to work with our colleagues at the Department of Defense to engage the international community within the LAWS GGE.

Learn more about what specific bureaus and offices are doing to support this policy issue:

The Global Engagement Center has developed a dedicated effort for the U.S. Government to identify, assess, test and implement technologies against the problems of foreign propaganda and disinformation, in cooperation with foreign partners, private industry and academia.

The Office of the Under Secretary for Management uses AI technologies within the Department of State to advance traditional diplomatic activities, applying machine learning to internal information technology and management consultant functions.

The Office of the Under Secretary of State for Economic Growth, Energy, and the Environment engages internationally to support the U.S. science and technology (S&T) enterprise through global AI research and development (R&D) partnerships, setting fair rules of the road for economic competition, advocating for U.S. companies, and enabling foreign policy and regulatory environments that benefit U.S. capabilities in AI.

The Office of the Under Secretary of State for Arms Control and International Security focuses on the security implications of AI, including potential applications in weapon systems, its impact on U.S. military interoperability with its allies and partners, its impact on stability, and export controls related to AI.

The Office of the Under Secretary for Civilian Security, Democracy, and Human Rights and its component bureaus and offices focus on issues related to AI and governance, human rights, including religious freedom, and law enforcement and crime, among others.

The Office of the Legal Adviser leads on issues relating to AI in weapon systems (LAWS), in particular at the Group of Governmental Experts on Lethal Autonomous Weapons Systems convened under the auspices of the Convention on Certain Conventional Weapons.

For more information on federal programs and policy on artificial intelligence, visit ai.gov.


Artificial Intelligence (AI): What it is and why it matters

AI automates repetitive learning and discovery through data. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks. And it does so reliably and without fatigue. Of course, humans are still essential to set up the system and ask the right questions.

AI adds intelligence to existing products. Many products you already use will be improved with AI capabilities, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies. Upgrades at home and in the workplace range from security intelligence and smart cams to investment analysis.

AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that algorithms can acquire skills. Just as an algorithm can teach itself to play chess, it can teach itself what product to recommend next online. And the models adapt when given new data.

AI analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers used to be impossible. All that has changed with incredible computer power and big data. You need lots of data to train deep learning models because they learn directly from the data.
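
As a sketch of what a network with several hidden layers looks like in practice, here is a five-hidden-layer model defined in PyTorch. The layer sizes and the 30 input features are arbitrary placeholders, and this is not tied to any particular fraud-detection system.

```python
# Illustrative five-hidden-layer network that maps transaction features to a
# fraud probability; sizes and inputs are invented for the example.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(30, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),   # probability that a transaction is fraudulent
)

scores = model(torch.randn(8, 30))    # 8 transactions, 30 features each
print(scores.shape)                   # torch.Size([8, 1])
```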

AI achieves incredible accuracy through deep neural networks. For example, your interactions with Alexa and Google are all based on deep learning. And these products keep getting more accurate the more you use them. In the medical field, AI techniques from deep learning and object recognition can now be used to pinpoint cancer on medical images with improved accuracy.

AI gets the most out of data. When algorithms are self-learning, the data itself is an asset. The answers are in the data. You just have to apply AI to find them. Since the role of the data is now more important than ever, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win.

Read the original post:

Artificial Intelligence (AI): What it is and why it matters

Shaping the Future of Artificial Intelligence AI: The Significance of Prompt Engineering for Progress and Innovation – MarkTechPost


Go here to see the original:

Shaping the Future of Artificial Intelligence AI: The Significance of Prompt Engineering for Progress and Innovation - MarkTechPost

Artificial Intelligence What it is and why it matters | SAS

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?

Original post:

Artificial Intelligence What it is and why it matters | SAS

Pinterest uses AI and your camera to recommend pins – Engadget

But the idea of Lens doesn't stop at shopping. For example, that picture of a table could lead to a bunch of room decor ideas. Or you can take a photo of a pomegranate, for example, and it'll spit out recipes that use pomegranate as a main ingredient. A picture of a sweater could lead to different styles of it and how to wear it. Basically, think of Lens as a way to search for something when you just don't have the words to describe what you're looking at.

Of course, the technology is imperfect. Not all of us take crystal clear photos on our phones, and blurry and awkward shots will probably churn out the wrong results. That's why Pinterest says Lens is still in beta, and is considered somewhat experimental technology.

Pinterest launched a couple of other visual discovery features today as well. One is called Shop The Look, which uses object recognition to automatically detect and search for items in a photo. So a picture of a living room might prompt Pinterest to bring up a list of Buyable pins for the couch, the lamp, the table and the rug. The pins won't be for that brand of furniture specifically of course, but just items that look very similar.
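
One common way to build this kind of visual lookup is to embed every image into a vector with a pretrained network and return the catalog items whose embeddings sit closest to the query photo. The sketch below assumes a small hypothetical catalog of product photos and is only an illustration of the general technique; Pinterest's actual Lens and Shop The Look pipelines are not public and are far more sophisticated.

```python
# A rough sketch of visual similarity search: embed images with a pretrained
# network, then return the closest catalog item. File names are hypothetical.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import mobilenet_v2

embedder = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", weights="imagenet")

def embed(path: str) -> np.ndarray:
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = mobilenet_v2.preprocess_input(x)
    return embedder.predict(x, verbose=0)[0]

# Hypothetical catalog of product photos mapped to their embeddings.
catalog = {name: embed(name) for name in ["couch.jpg", "lamp.jpg", "rug.jpg"]}

def most_similar(query_path: str) -> str:
    q = embed(query_path)
    cosine = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(catalog, key=lambda name: cosine(q, catalog[name]))

# print(most_similar("living_room_photo.jpg"))
```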

Pinterest says that Shop The Look will also give you styling and decor ideas. So far, the company has partnered with folks like Curalate, Olapic, Project September, Refinery 29 and ShopStyle to curate the looks. Brands and retailers that are on board include CB2, Macy's, Target, Neiman Marcus and Wayfair.

Last but not least, Pinterest also rolled out Instant Ideas, which is represented by a tiny circle at the bottom right of a pin. Tap it and you'll see a list of related ideas. The more you tap the pins you're interested in, the more customized your recommendations will be over time.

All of these features are live on Android and iOS starting today. They're available in the US for now, with more countries to be announced at a later date.

Read more:

Pinterest uses AI and your camera to recommend pins - Engadget

New Study Attempts to Improve Hate Speech Detection Algorithms – Unite.AI

Social media companies, especially Twitter, have long faced criticism for how they flag speech and decide which accounts to ban. The underlying problem almost always has to do with the algorithms that they use to monitor online posts. Artificial intelligence systems are far from perfect when it comes to this task, but there is work constantly being done to improve them.

Included in that work is a new study coming out of the University of Southern California that attempts to reduce certain errors that could result in racial bias.

One of the issues that doesn't receive as much attention has to do with algorithms that are meant to stop the spread of hateful speech but actually amplify racial bias. This happens when the algorithms fail to recognize context and end up flagging or blocking tweets from minority groups.

The biggest problem with the algorithms in regard to context is that they are oversensitive to certain group-identifying terms like "black," "gay," and "transgender." The algorithms treat these terms as markers of hate speech, but they are often used by members of those groups themselves, and the setting in which they appear is important.

In an attempt to resolve this issue of context blindness, the researchers created a more context-sensitive hate speech classifier. The new algorithm is less likely to mislabel a post as hate speech.

The researchers developed the new algorithms with two new factors in mind: the context in regard to the group identifiers, and whether there are also other features of hate speech present in the post, like dehumanizing language.
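
In very broad strokes, the difference can be illustrated with a toy scorer like the one below, in which a group identifier on its own contributes nothing and only adds weight alongside other indicators such as dehumanizing language. The word lists and weights are invented for illustration; the USC system is a trained classifier that learns these interactions from labeled data, not a handful of hand-written rules.

```python
# A toy illustration of context-sensitive flagging: group identifiers alone do
# not trigger a flag; they only add weight when other hate-speech indicators
# (here, dehumanizing language) are present. Word lists and weights are
# invented and are not the USC researchers' actual model.
GROUP_IDENTIFIERS = {"black", "gay", "transgender"}
DEHUMANIZING_TERMS = {"vermin", "infest", "subhuman"}

def hate_score(post: str) -> float:
    tokens = set(post.lower().split())
    has_identifier = bool(tokens & GROUP_IDENTIFIERS)
    has_dehumanizing = bool(tokens & DEHUMANIZING_TERMS)
    score = 0.7 if has_dehumanizing else 0.0
    if has_identifier and has_dehumanizing:
        score += 0.3   # identifiers matter only alongside other hate signals
    return score

print(hate_score("proud to be a black woman"))   # 0.0 -> not flagged
```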

Brendan Kennedy is a computer science Ph.D. student and co-lead author of the study, which was published on July 6 at ACL 2020.

"We want to move hate speech detection closer to being ready for real-world application," said Kennedy.

"Hate speech detection models often break, or generate bad predictions, when introduced to real-world data, such as social media or other online text data, because they are biased by the data on which they are trained to associate the appearance of social identifying terms with hate speech."

The reason the algorithms are oftentimes inaccurate is that they are trained on imbalanced datasets with extremely high rates of hate speech. Because of this, the algorithms fail to learn how to handle what social media actually looks like in the real world.

Professor Xiang Ren is an expert in natural language processing.

"It is key for models to not ignore identifiers, but to match them with the right context," said Ren.

"If you teach a model from an imbalanced dataset, the model starts picking up weird patterns and blocking users inappropriately."

To test the algorithm, the researchers used a random sample of text from two social media sites that have a high rate of hate speech. The text was first hand-flagged by humans as prejudiced or dehumanizing. The state-of-the-art model was then measured against the researchers' own model for inappropriately flagging non-hate speech, through the use of 12,500 New York Times articles with no hate speech present. While the state-of-the-art models were able to achieve 77% accuracy in identifying hate vs. non-hate, the researchers' model was higher at 90%.
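
The evaluation protocol can be summarized with a small sketch: measure a classifier's accuracy on hand-labeled posts and its false-flag rate on text known to contain no hate speech. The helper names below are hypothetical; the study's own data and code are not reproduced here.

```python
# A minimal sketch of the evaluation described above: accuracy on hand-labeled
# posts, plus the false-flag rate on a hate-free reference corpus.
from typing import Callable, List, Tuple

def accuracy(model: Callable[[str], bool], labeled: List[Tuple[str, bool]]) -> float:
    correct = sum(1 for text, label in labeled if model(text) == label)
    return correct / len(labeled)

def false_flag_rate(model: Callable[[str], bool], clean_texts: List[str]) -> float:
    flagged = sum(1 for text in clean_texts if model(text))
    return flagged / len(clean_texts)

# Usage (hypothetical):
# labeled_posts = [("...", True), ("...", False)]   # hand-flagged samples
# clean_articles = ["...", "..."]                   # hate-free reference set
# print(accuracy(my_classifier, labeled_posts))
# print(false_flag_rate(my_classifier, clean_articles))
```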

"This work by itself does not make hate speech detection perfect, that is a huge project that many are working on, but it makes incremental progress," said Kennedy.

"In addition to preventing social media posts by members of protected groups from being inappropriately censored, we hope our work will help ensure that hate speech detection does not do unnecessary harm by reinforcing spurious associations of prejudice and dehumanization with social groups."

See the original post here:

New Study Attempts to Improve Hate Speech Detection Algorithms - Unite.AI

This AI tool helps healthcare workers look after their mental health – The European Sting


This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Authors: Francis Lee, Psychiatrist-in-Chief, New York-Presbyterian Hospital; Conor Liston, Director, Sackler Institute, Weill Cornell Medicine; and Laura L. Forese, Executive Vice President & Chief Operating Officer, New York-Presbyterian Hospital

As the COVID-19 pandemic continues to exert pressure on global healthcare systems, frontline healthcare workers remain vulnerable to developing significant psychiatric symptoms. These effects have the potential to further cripple the healthcare workforce at a time when a second wave of the coronavirus is considered likely in the fall, and workforce shortages already pose a serious challenge.

Studies show that healthcare workers are also less likely to proactively seek mental health services due to concerns about confidentiality, privacy and barriers to accessing care. Thus, there is an obvious and pressing need for scalable tools to act as an early warning system to alert healthcare workers when they are at risk of depression, anxiety or trauma symptoms and then rapidly connect them with the help they need. To address the mental health needs of the 47,000 employees and affiliated physicians in our hospital system, New York-Presbyterian (NYP) has developed an artificial intelligence (AI)-enabled digital tool that screens for symptoms, provides instant feedback, and connects participants with crisis counselling and treatment referrals.

Called START (Symptom Tracker And Resources for Treatment), this screening tool enables healthcare workers to confidentially and anonymously track changes in their mental health status. This tool is unique in that it not only provides immediate feedback to participants on the severity of their symptoms but also connects them to existing mental healthcare resources. Participants are asked every two weeks to complete a short battery of questions that assess symptoms of depression, anxiety, trauma and perceived stress, as well as potential risk factors for poor mental health and ability to function at work.

To maximise engagement, the psychiatric symptom questions in the START platform are drawn from widely validated psychiatric screening tools and adaptively selected using AI algorithms that capture the most relevant clinical symptom data in a time-efficient manner. This is achieved in two ways. First, the START platform automatically selects the most informative questions based on a participant's previous responses in a minimum amount of time (around five to seven minutes). Second, it focuses on questions that are reliably correlated with particular functional connectivity patterns in depression-related brain networks. Much like our national airport network, brain networks are organised into a system of hubs that facilitate efficient information flow, just as hub airports like O'Hare and JFK connect passengers with smaller regional destinations. Disrupted connections between brain hubs may contribute to specific symptoms and behaviours in depression.
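
A toy sketch of what adaptive question selection can look like in principle appears below. The questions, severity scale, and the "information" heuristic are invented for illustration and are not NYP's actual algorithm, which draws on validated screening instruments and imaging-derived symptom clusters.

```python
# A toy sketch of adaptive question selection: score each remaining question's
# expected usefulness given earlier answers, and ask the highest-value one next.
from typing import Dict

QUESTIONS = {
    "phq_interest": "Little interest or pleasure in doing things?",
    "gad_worry": "Feeling nervous, anxious, or on edge?",
    "sleep": "Trouble falling or staying asleep?",
}

def information_gain(qid: str, answers: Dict[str, int]) -> float:
    # Hypothetical heuristic: follow up on domains where earlier answers were high.
    related = {"phq_interest": "sleep", "gad_worry": "sleep"}
    base = 1.0
    if related.get(qid) in answers and answers[related[qid]] >= 2:
        base += 0.5
    return base

def next_question(answers: Dict[str, int]) -> str:
    remaining = [q for q in QUESTIONS if q not in answers]
    return max(remaining, key=lambda q: information_gain(q, answers))

answers: Dict[str, int] = {"sleep": 3}      # answers on a 0-3 severity scale
print(QUESTIONS[next_question(answers)])    # picks the most informative follow-up
```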

For example, in previous work (see figure below), our group has found that psychiatric symptoms like anhedonia (a loss of interest in pleasurable activities) are reliably correlated with functional magnetic resonance imaging (fMRI) measures of connectivity in reward-related brain regions, whereas symptoms like anxiety and insomnia are correlated with differing connectivity alterations in other brain areas.

At the end of the survey, participants receive feedback on their results and are provided with options for connecting with existing and accessible mental healthcare resources. For those who need psychiatric care for their symptoms, we integrated the START platform with a telemedicine urgent counselling service at NYP that is available seven days a week and which provides faculty and staff across NYP hospitals with quick and free access to confidential and supportive virtual counselling by trained mental health professionals, a special feature of this tool and our COVID-19 response. This is important, because if treatment resources are not made immediately available and easily accessible to our healthcare workers, they may be less likely to seek help when they need it.

Within one week of deploying the symptom tracker, the utilization of our urgent counselling services had more than doubled, resulting in numerous referrals to mental health professionals. Another key element contributing to the increase in utilization was frequent communication from NYP leadership about the Symptom Tracker and the availability of crisis support. In the near future, a mobile cognitive behavioral therapy (CBT) app developed at NYP (by a group led by Francis Lee) will be linked to START to target specific mood, anxiety, and trauma symptom profiles, and is currently being tested in a clinical trial for safety and efficacy.

Ultimately, we hope that such emerging digital tools will transform mental health services not only for our healthcare workers but also for larger populations affected by the pandemic.

Go here to see the original:

This AI tool helps healthcare workers look after their mental health - The European Sting

An AI Can Now Predict How Much Longer You’ll Live For – Futurism

In Brief: Researchers at the University of Adelaide have developed an AI that can analyze CT scans to predict if a patient will die within five years with 69 percent accuracy. This system could eventually be used to save lives by providing doctors with a way to detect illnesses sooner.

Predicting the Future

While many researchers are looking for ways to use artificial intelligence (AI) to extend human life, scientists at the University of Adelaide created an AI that could help them better understand death. The system they created predicts if a person will die within five years after analyzing CT scans of their organs, and it was able to do so with 69 percent accuracy, a rate comparable to that of trained medical professionals.

The system makes use of the technique of deep learning, and it was tested using images taken from 48 patients, all over the age of 60. It's the first study to combine medical imaging and artificial intelligence, and the results have been published in Scientific Reports.
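
For readers unfamiliar with how such a model is typically structured, the Keras sketch below shows a small convolutional network that maps a CT slice to a probability of death within five years. The input size and architecture are illustrative assumptions, not the Adelaide group's published model, and training it would require a labeled imaging dataset.

```python
# A schematic sketch of a deep learning model for CT-based outcome prediction.
# Input size and layer choices are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),       # one grayscale CT slice
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(death within 5 years)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(ct_slices, five_year_outcomes, epochs=20)  # requires labeled scans
```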

"Instead of focusing on diagnosing diseases, the automated systems can predict medical outcomes in a way that doctors are not trained to do, by incorporating large volumes of data and detecting subtle patterns," explained lead author Luke Oakden-Rayner in a university press release. This method of analysis can explore the combination of genetic and environmental risks better than genome testing alone, according to the researchers.

While the findings are only preliminary given the small sample size, the next stage will apply the AI to tens of thousands of cases.

While this study does focus on death, the most obvious and exciting consequence of it is how it could help preserve life. "Our research opens new avenues for the application of artificial intelligence technology in medical image analysis, and could offer new hope for the early detection of serious illness, requiring specific medical interventions," said Oakden-Rayner. Because it encourages more precise treatment using firmer foundational data, the system has the potential to save many lives and provide patients with less intrusive healthcare.

An added benefit of this AI is its wide array of potential uses. Because medical imaging of internal organs is a fairly routine part of modern healthcare, the data is already plentiful. The system could be used to predict medical outcomes beyond just death, such as the potential for treatment complications, and it could work with any number of images, such as MRIs or X-rays, not just CT scans. Researchers will just need to adjust the AI to their specifications, and they'll be able to obtain predictions quickly and cheaply.

AI systems are becoming more and more prevalent in the healthcare industry. DeepMind is being used to fight blindness in the United Kingdom, and IBM Watson is already as competent as human doctors at detecting cancer. It is in medicine, perhaps more than any other field, that we see AI's huge potential to help the human race.

Read more from the original source:

An AI Can Now Predict How Much Longer You'll Live For - Futurism

Best Competency With Artificial Intelligence is by Having Intelligent Experience – ReadWrite

AI is changing the way customers interact with businesses. It changes how websites and bots work, along with many other tools and integrated systems, and how businesses protect and manage the company's digital assets and data. Businesses currently using artificial intelligence face a day-to-day struggle, which is made more difficult because of sequential technologies.

Many businesses are intrigued by the idea of turning to artificial intelligence for help in the sales process. AI is certainly capable of finding your best-qualified sales leads. AI can give you efficient issue resolution, and systems that feed actual data back in for future process and product improvements. However, most enterprises do not know where or how to get started with AI in their company.

Systems and data must connect so that capabilities can be used fully, as if all information were native to each system, and there must be ways to present information to end users even as the data evolves constantly. The environment requires specialized insight and know-how to ensure a smooth, continuous integration that is both relevant and current.

The intelligent experience is all about leveraging AI to derive predictive insights that can be embedded in the workflow. Companies seeking competitive advantage must find ways to make their business operations more intelligent.

AI functionality is poised to be a game-changer, exploring possibilities and opening up new roles and more business-central activities. However, it's important to first understand how an intelligent experience can help improve the business. It starts with a shift in focus.

Artificial intelligence is edging into business processes across organizations. However, when an organization uses AI correctly, customers shouldn't be able to tell that AI is running the experience behind the scenes.

AI has the power to make customers feel they are making their own choices, but it's the machine learning and the algorithms that are handling those decisions.

When it comes to this shift in focus, the most useful sense is vision: keeping track of what is happening and offering suggestions on how to improve.

Artificial intelligence is going beyond the senses and straight to the source: the brain. With a purely reactive tactic, companies are often late, identifying customers only when it is likely too late. This is because there is a major difference between predicting significant changes in the economy and noticing a financial sign that becomes apparent only after a large shift has taken place.

Artificial intelligence is set to heavily impact a number of industries worldwide, shaping online customer experience models. The technology will take hold across many industries over the coming decade, and businesses need to decide how AI will help them optimize conversions.

By automating most internal processes, the operational effort involved in maintaining and controlling devices is reduced. At the same time, this shift in focus allows for significant configuration for the marketplace.

Artificial intelligence is also delivering more cost-efficiency, so customers can focus on increasing the quality and operation of their processes with only a modest increase in resources.

It is crucial to assess the landscape of the acquisition period, since this is often where perceptions of the relationship start to form. Customers will be comparing their initial experience to the expectations set during the sales process.

Artificial intelligence processes are making significant progress in reducing problems across several walks of life. AI also automates interpretation and comprehension and restructures information.

With AI, you can speed up processes, get value from data, and provide clients with a better experience. All of those benefits can help drive sales and boost revenue.

The application of an AI system should be defined in considerable detail. As a rule, estimating the cost of artificial intelligence requires insight into the work being done so development can proceed proactively. The development work is usually split into several feasibility studies with set business and project objectives.

However, if an artificial intelligence product claims to be a canned, plug-and-play solution, you need to be highly suspicious. You need someone trained to take care of the system. (Source: coseer.com.)

Sufficient algorithm performance is a key cost factor, as a high-quality algorithm often requires rounds of tuning sessions. To decide between various algorithmic approaches for a business, one needs to understand exactly how training takes place under the hood and what can be done to obtain competency.

If this is not clear up front, one may end up with a system that underperforms. AI is certainly exciting, but business owners cannot jump into it without first laying the foundation with basic analytics.

With so many possibilities for applying AI across an organization, any deployment of an AI system needs to be chosen and executed so it is actually effective. AI is often considered solely from a technology perspective, and little wonder, since its capabilities rely on, and continually improve through, technical innovations.

Deploy with well-positioned skills and a variety of tools to create AI algorithms that can be inserted into enterprise applications. Quick wins bring an added bonus: getting the most out of AI is about validating AI's ability to spark value, keeping momentum and funding, and then going after longer-term projects.

AI doesn't thrive in a vacuum. Businesses that generate value from AI treat it as a major business transformation initiative, one that requires different parts of the company to come together and work toward realistic expectations. AI is the future of business operations.

When contemplating an investment in AI, be sure you have pragmatic predictions and have a setup that will allow you to embed insights into the daily workflow of your organization. Through the power of AI, you can start blurring the lines between sales, service, and marketing.

Harnessing the power of artificial intelligence requires taking a hard look at business processes and at where the majority of resources go. From there, your company can use AI in a way that actually helps your business grow and ultimately boosts your bottom line.


Adedeji Omotayo is a digital marketer, PR expert, and content writer, and the CEO, founder, and president of EcoWebMedia, a full-service digital marketing company. Adedeji is passionate about technology and marketing, and works with both small and big companies on their internet marketing strategies.

The rest is here:

Best Competency With Artificial Intelligence is by Having Intelligent Experience - ReadWrite

How AI is revolutionizing healthcare – Nurse.com

AI applications in healthcare can literally change patients' lives, improving diagnostics and treatment and helping patients and healthcare providers make informed decisions quickly.

The global market for AI in healthcare (the total value of products and services sold) was valued at $2.4 billion in 2019 and is projected to reach $31.02 billion by 2025.

Now in the COVID-19 pandemic, AI is being leveraged to identify virus-related misinformation on social media and remove it. AI is also helping scientists expedite vaccine development, track the virus and understand individual and population risk, among other applications.

Companies such as Microsoft, which recently stated it will dedicate $20 million to advance the use of artificial intelligence in COVID-19 research, recognize the need for and extraordinary potential of AI in healthcare.

The ultimate goal of AI in healthcare is to improve patient outcomes by revolutionizing treatment techniques. By analyzing complex medical data and drawing conclusions without direct human input, AI technology can help researchers make new discoveries.

Various subtypes of AI are used in healthcare. Natural language processing algorithms give machines the ability to understand and interpret human language. Machine learning algorithms teach computers to find patterns and make predictions based on massive amounts of complex data.

AI is already playing a huge role in healthcare, and its potential future applications are game-changing. We've outlined four distinct ways that AI is transforming the healthcare industry.

This transformative technology has the ability to improve diagnostics, advance treatment options, boost patient adherence and engagement, and support administrative and operational efficiency.

AI can help healthcare professionals diagnose patients by analyzing symptoms, suggesting personalized treatments and predicting risk. It can also detect abnormal results.

Analyzing symptoms, suggesting personalized treatments and predicting risk

Many healthcare providers and organizations are already using intelligent symptom checkers. This machine learning technology asks patients a series of questions about their symptoms and, based on their answers, informs them of appropriate next steps for seeking care.

Buoy Health offers a web-based, AI-powered health assistant that healthcare organizations are using to triage patients who have symptoms of COVID-19. It offers personalized information and recommendations based on the latest guidance from the Centers for Disease Control and Prevention.

Additionally, AI can take precision medicine, healthcare tailored to the individual, to the next level by synthesizing information and drawing conclusions, allowing for more informed and personalized treatment. Deep learning models have the ability to analyze massive amounts of data, including information about a patient's genetic content, other molecular/cellular analysis, and lifestyle factors, and find relevant research that can help doctors select treatments.

AI can also be used to develop algorithms that make individual and population health risk predictions in order to help improve outcomes. At the University of Pennsylvania, doctors used a machine learning algorithm that can monitor hundreds of key variables in real time to anticipate sepsis or septic shock in patients 12 hours before onset.
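
A real-time risk monitor of this general shape can be sketched as follows. The feature names, thresholds, and the stand-in model are placeholders for illustration, not the University of Pennsylvania algorithm, which is a trained model monitoring hundreds of variables.

```python
# An illustrative sketch of real-time risk monitoring: a model scores each new
# set of vitals/labs and raises an alert when risk crosses a threshold.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SepsisMonitor:
    risk_model: Callable[[Dict[str, float]], float]  # returns probability 0..1
    threshold: float = 0.8

    def on_new_observation(self, patient_id: str, vitals: Dict[str, float]) -> None:
        risk = self.risk_model(vitals)
        if risk >= self.threshold:
            print(f"ALERT: patient {patient_id} sepsis risk {risk:.2f}")

# Hypothetical stand-in for a trained model.
def toy_model(vitals: Dict[str, float]) -> float:
    score = 0.0
    if vitals.get("heart_rate", 0) > 110: score += 0.4
    if vitals.get("temp_c", 37) > 38.5:   score += 0.3
    if vitals.get("lactate", 0) > 2.0:    score += 0.3
    return min(score, 1.0)

monitor = SepsisMonitor(risk_model=toy_model)
monitor.on_new_observation("A123", {"heart_rate": 120, "temp_c": 39.0, "lactate": 2.4})
```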

Detecting disease

Imaging tools can advance the diagnostic process for clinicians. The San Francisco-based company Enlitic develops deep learning medical tools to improve radiology diagnoses by analyzing medical data. These tools allow clinicians to better understand and define the aggressiveness of cancers. In some cases, these tools can replace the need for tissue samples with virtual biopsies, which would aid clinicians in identifying the phenotypes and genetic properties of tumors.

These imaging tools have also been shown to make more accurate conclusions than clinicians. A 2017 study published in JAMA found that of 32 deep learning algorithms, seven were able to diagnose lymph node metastases in women with breast cancer more accurately than a panel of 11 pathologists.

Smartphones and other portable devices may also become powerful diagnostic tools that could benefit the areas of dermatology and ophthalmology. The use of AI in dermatology focuses on analyzing and classifying images and the ability to differentiate between benign and malignant skin lesions.

Using smartphones to collect and share images could widen the capabilities of telehealth. In ophthalmology, the medical device company Remidio has been able to detect diabetic retinopathy using a smartphone-based fundus camera, a low-power microscope with an attached camera.

AI is becoming a valuable tool for treating patients. Brain-computer interfaces could help restore the ability to speak and move in patients who have lost these abilities. This technology could also improve the quality of life for patients with ALS, strokes, or spinal cord injuries.

There is potential for machine learning algorithms to advance the use of immunotherapy, to which currently only 20% of patients respond. New technology may be able to determine new options for targeting therapies to an individual's unique genetic makeup. Companies like BioXcel Therapeutics are working to develop new therapies using AI and machine learning.

Additionally, clinical decision support systems can help healthcare professionals make better decisions by analyzing past, current and new patient data. IBM offers clinical support tools to help healthcare providers make more informed and evidence-based decisions.

Finally, AI has the potential to expedite drug development by reducing the time and cost for discovery. AI supports data-driven decision making, helping researchers understand what compounds should be further explored.

Wearables and personalized medical devices, such as smartwatches and activity trackers, can help patients and clinicians monitor health. They can also contribute to research on population health factors by collecting and analyzing data about individuals.

These devices can also be useful in helping patients adhere to treatment recommendations. Patient adherence to treatment plans can be a factor in determining outcome. When patients are noncompliant and fail to adjust their behaviors or take prescribed drugs as recommended, the care plan can fail.

The ability of AI to personalize treatment could help patients stay more involved and engaged in their care. AI tools can be used to send patients alerts or content intended to provoke action. Companies like Livongo are working to give users personalized health nudges through notifications that promote decisions supporting both mental and physical health.

AI can be used to create a patient self-service model, an online portal accessible by portable devices, that is more convenient and offers more choice. A self-service model helps providers reduce costs and helps consumers access the care they need in an efficient way.

AI can improve administrative and operational workflow in the healthcare system by automating some of the processes. Recording notes and reviewing medical records in electronic health records takes up 34% to 55% of physicians' time, making it one of the leading causes of lost productivity for physicians.

Clinical documentation tools that use natural language processing can help reduce the time clinicians spend on documentation and give them more time to focus on delivering top-quality care.

Health insurance companies can also benefit from AI technology. The current process of evaluating claims is quite time-consuming, since 80% of healthcare claims are flagged by insurers as incorrect or fraudulent. Natural language processing tools can help insurers detect issues in seconds, rather than days or months.

More here:

How AI is revolutionizing healthcare - Nurse.com