‘Swarm AI’ predicts winners for the 2017 Academy Awards – TechRepublic

Image: LimaEs, Getty Images/iStockphoto

Wondering who will win the 2017 Oscars? Instead of turning to industry experts, film critics, or polls, you can try something else this year: artificial intelligence.

A startup called Unanimous A.I. has been making predictions (like who will win the Super Bowl, March Madness, US presidential debates, and the Kentucky Derby) for the last two years. It uses a software platform called UNU to assemble people at their computers, who make a real-time prediction together.

UNU's algorithm is built to harness the concept of "swarm" intelligence: the power of a group to make an intelligent, collective decision. It's how flocks of birds or swarms of bees decide where to travel for the winter, for instance, a decision that no single entity could make on its own. The decisions are made quickly, in under a minute each.

When UNU first predicted the Oscars in 2015, it took a group of non-experts to guess the Academy Award winners, and the results were better than those from FiveThirtyEight, The New York Times, and a slew of other experts. When it predicted the 2016 Oscars last year, the platform achieved 76% accuracy, outperforming Rolling Stone and the LA Times.

This week, it met the challenge again, assembling a group of 50 movie fans to make real-time predictions.

The method produces answers that are better than any individual selection, and it's not a simple average. Each user on the platform has a virtual "puck" that they can drag toward the answer they choose, like a digital Ouija board. Because users can see the other picks in real time, they have the opportunity to change their minds in the middle of a question, so each member of the group influences the others. If the group decision is heading toward an option the user did not originally pick, there's an opportunity to advocate for a different choice.

Why are polls, surveys, prediction markets, and expert opinions different from the swarm? In all of those methods, decisions are made individually and sequentially. In a swarm, the decision is made simultaneously.
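The puck mechanism described above can be sketched as a toy simulation. This is not Unanimous A.I.'s actual UNU algorithm; it only illustrates how individual preferences plus real-time social influence can converge on a single group answer. All numbers, names, and the simple "defect toward the leader" rule are illustrative assumptions.

```python
import random

def swarm_decide(preferences, options, rounds=60, conformity=0.05):
    """Toy swarm: each agent pulls a shared 'puck' toward its preferred
    option, and agents gradually shift support toward the emerging majority,
    mimicking users who change their minds mid-question."""
    support = list(preferences)  # option each agent currently pulls toward
    for _ in range(rounds):
        counts = {o: support.count(o) for o in options}
        leader = max(counts, key=counts.get)
        for i, choice in enumerate(support):
            # with small probability, an agent defects to the current leader
            if choice != leader and random.random() < conformity:
                support[i] = leader
    counts = {o: support.count(o) for o in options}
    return max(counts, key=counts.get)

random.seed(0)
prefs = ["La La Land"] * 30 + ["Moonlight"] * 15 + ["Fences"] * 5
print(swarm_decide(prefs, ["La La Land", "Moonlight", "Fences"]))  # La La Land
```

Because the majority option attracts defectors, the group converges on a single answer faster than the same 50 people would agree by sequential voting.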

SEE: How 'artificial swarm intelligence' uses people to make better predictions than experts

Unanimous A.I. CEO Louis Rosenberg previously told TechRepublic that most people in the swarms have not seen all of the movies. Still, the swarm is successful because its members "fill in each other's gaps in knowledge."

Here are Unanimous A.I.'s predictions for the winners of the major awards in the 2017 Academy Awards (click the hyperlinks to see the swarms in action):

Best Picture: La La Land
Best Actress in a Leading Role: Emma Stone (La La Land)
Best Actor in a Leading Role: Denzel Washington (Fences)
Best Director: Damien Chazelle (La La Land)
Best Actress in a Supporting Role: Viola Davis (Fences)
Best Actor in a Supporting Role: Mahershala Ali (Moonlight)
Best Foreign Language Film: The Salesman

Most of the predictions are in line with industry experts and polls, which show La La Land to be the favorite. But there are three categories to watch, in which the swarm was not confident in its predictions; it was conflicted between two options. These categories are Best Actor, Best Original Screenplay, and Best Foreign Language Film.

For instance, many experts predict that Casey Affleck will win for Best Actor, but the swarm chose Denzel Washington. "The experts are weighing previous results heavily, most notably the Golden Globes, which Casey Affleck won last month," Rosenberg told TechRepublic about the new predictions. "But the Golden Globes is composed of the Hollywood Foreign Press, a very narrow demographic compared to the Academy." Rosenberg said he thinks the Swarm's pick shows that it's more in line with the Academy.

Image: Unanimous A.I.

Beyond predicting sports games and entertainment, the swarm method has bigger implications. Rosenberg has seen a lot of interest from marketing companies who want to learn how customers would respond to a certain advertisement or product. A new tool offered by Unanimous A.I. called Swarm Insight could help businesses assess how effective their messages are, how they should think about pricing, and when it's worth taking a risk.


AI Can Help Life Insurers Hear What the Customers Are Saying – ThinkAdvisor

(Credit: Thinkstock)

Over the years, banks, retailers, and telecommunications companies have embraced new technologies such as artificial intelligence (AI) and machine learning (ML) to enhance their customer communication and, ultimately, the customer experience. While there may have been a time to sit back and see what works and what doesn't, these are now proven technologies. Organizations in the life insurance space need to adopt this thinking or risk being left behind.

Not only is today's customer more tech-savvy than ever, expecting the same high levels of convenience from insurers as from other companies, but the last few months have also put a spotlight on the need for a better digital customer communication experience.

The renewed battle over federal and state annuity sales standards further emphasizes the value of addressing customers' concerns.

(Related: 5 Ways Annuity Providers Can Use Tech for Fiduciary Compliance)

By meeting and, hopefully, exceeding these expectations, insurers can ensure customer loyalty in an increasingly competitive market.

That in turn comes with several major business benefits, including increased revenue.

Of course, this kind of transformation won't happen overnight.

Historically, life insurers haven't done well when it comes to customer communication. Within the last few years, more than 90% of insurers worldwide did not communicate with their customers even once a year, and of the interactions that did take place, many were limited to claims and related advice.

In a world where highly personalized products and services, supported by relevant, contextual, easy-to-understand information, have become the norm, that's no longer viable.

To overcome that deficit, it won't be enough for life insurers to simply retrain their staff and hope for the best. Instead, they need to embrace technologies that will have an immediate positive impact on the customer experience.

Enter AI and ML. While some life insurers have adopted these technologies to great effect in the back office (to speed up claims processing and detect fraud, for instance), they haven't been used to anywhere near the same effect in customer communications, which is a missed opportunity.

AI improves the customer experience by analyzing the data on hand to decide which message is best suited to each customer next, based on actions taken with the insurer, demographics, and life-stage changes that alter a customer's needs.

By delivering the right message to the right person, at the right time, an organization can dramatically improve the customer experience.

That relevance and timeliness, meanwhile, is most likely to result in the response the business wants: a policy renewal, an upsell or a new sale.

As a subset of AI focused on learning patterns from data, ML can help decide which content suits a customer based on the data on hand, such as past behavior, demographics, and location, making it easier to deliver truly hyper-personalized communication.
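A minimal sketch of such next-best-action selection follows, assuming a simple linear scoring model. The message names, feature names, and weights are entirely illustrative; a real system would learn them from historical response data.

```python
# Toy next-best-action picker: score candidate messages for a customer
# using weights a model might (hypothetically) learn from past responses.
WEIGHTS = {
    "renewal_reminder": {"days_to_expiry": -0.02, "age": 0.0, "recent_claim": 0.1},
    "upsell_life_plus": {"days_to_expiry": 0.0, "age": 0.01, "recent_claim": -0.5},
    "claims_followup":  {"days_to_expiry": 0.0, "age": 0.0, "recent_claim": 2.0},
}

def next_best_action(customer):
    """Return the message with the highest linear score for this customer."""
    def score(msg):
        w = WEIGHTS[msg]
        return sum(w[f] * customer.get(f, 0.0) for f in w)
    return max(WEIGHTS, key=score)

customer = {"days_to_expiry": 20, "age": 46, "recent_claim": 1}
print(next_best_action(customer))  # claims_followup scores highest here
```

The same scoring idea extends naturally to timing: run the scorer whenever a feature changes (a claim is filed, a policy nears expiry) so the "right message" also arrives at the right time.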

There are several ways organizations incorporate AI in their customer communications beyond hyper-personalization and next-best-action messaging. These include customizing customer journey touch-points (ensuring that tone is appropriate when the customer has suffered a loss, for instance) and allowing customers to interact with communications using voice-enabled tech (such as smart devices). Integrating chatbots into customer communications additionally allows for one-way communication to become a conversation.

By making these changes, life insurers can give their customers a markedly improved experience.

In doing so, they encourage loyalty and additional spend, which can ultimately benefit their bottom line.

Simply put, there are a multitude of reasons why insurers should embrace AI in their customer communications.


Mia Papanicolaou is the chief operating officer at Striata, a digital communications strategy company based in Johannesburg.


MIT creates an AI to predict urban decay – TNW

Facebook volunteers and work-at-home moms might be making city planning decisions, thanks to AI research conducted by MIT scientists. Researchers from MIT's Media Lab have been feeding computers a steady stream of data for the last four years to build an AI capable of determining why some cities grow and others decay.

The data the researchers are using has been compiled from ordinary people, regular Joes and Janes, who choose between two randomly selected pictures to determine which one seems less dangerous or more appealing. Currently it's all common-sense driven: most of us would agree that a typically beautiful environment will foster growth better than a landscape of derelict buildings.

With enough data, the AI has been returning results, which have been compared with human responses to the same image pairings. The researchers validated their data by comparing responses from Amazon Mechanical Turk workers. According to MIT, the machines got it right a little more than 70% of the time, which was better than expected. In the future, the MIT researchers plan to increase the number of people contributing data, going so far as to say they may need to advertise on Facebook to draw more participants.

At first glance it doesn't sound very impressive: they're just feeding data into an algorithm by hand, based on thousands of different human interpretations. People decide which Google Maps image in a pair looks like a nicer neighborhood, and scientists determine whether the machine agrees, and vice versa.
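One simple way to turn such pairwise votes into a per-image score is an Elo-style rating update, sketched below. This is only an illustration of learning from pairwise comparisons; MIT's actual system goes further, training a computer-vision model on the image content itself.

```python
def elo_update(scores, winner, loser, k=16):
    """One Elo-style update from a single pairwise vote
    ('winner' was judged the safer or nicer-looking image)."""
    expected = 1 / (1 + 10 ** ((scores[loser] - scores[winner]) / 400))
    scores[winner] += k * (1 - expected)
    scores[loser] -= k * (1 - expected)

# Every image starts at the same rating; each vote nudges the pair apart.
scores = {"img_a": 1000.0, "img_b": 1000.0, "img_c": 1000.0}
votes = [("img_a", "img_b"), ("img_a", "img_c"),
         ("img_b", "img_c"), ("img_a", "img_b")]
for winner, loser in votes:
    elo_update(scores, winner, loser)

print(max(scores, key=scores.get))  # img_a: it won every comparison
```

With enough votes, the ratings induce a full ranking of neighborhoods even though no rater ever saw more than two images at a time.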

The ultimate goal is for us to glean insight into our problems by learning what the machines can teach us about ourselves. Professor César Hidalgo, director of the Collective Learning group at the MIT Media Lab, told Co.Design:

I do hope that this research starts helping us understand how the urban environment affects people and how it's affected by people, so that when we do policy in the context of urban planning, we have a more scientific understanding of the effect different designs have on the behaviors of the populations that use them.

Until the machines learn to define beauty for themselves (which is a scary thought), we'll need to explain to them why one city's streets are lined with despair while another shows the promise of growth and renewal. Once AI is up to speed, however, we'll be able to start saving dying communities with machine learning. Computers can draw exponentially more pattern-based conclusions than humans.

The better we can understand an issue, the more connections we can determine and the better our solutions will be. Thanks to MIT we may be on the verge of solving decades-old urban rejuvenation problems.

AI Is Reshaping What We Know About Cities on Co.Design



How AI Can Help Manage Mental Health In Times Of Crisis – Forbes

Much has been written in the past few weeks about the COVID-19 crisis and the ripple effects that will impact human society. Beyond the immediate effect of the virus on health and mortality, it is clear that we are also facing a global, massive financial crisis that is likely to affect our lives for years to come. These changes, along with the expected prolonged social isolation, are bound to have a devastating effect on our mental health, collectively and individually, and, in turn, cause a dramatic deterioration in overall health and an increase in the prevalence of chronic illness.

From research conducted by the World Health Organization, we know that most people affected by emergency situations experience immediate psychological distress, hopelessness and sleep issues -- and that 22% of people are expected to develop depression, anxiety, post-traumatic stress disorder, bipolar disorder or schizophrenia. This escalation comes on top of a baseline of 19.1% of U.S. adults experiencing mental illness (47.6 million people in 2018, according to the Substance Abuse and Mental Health Services Administration). We further know that rising depression rates are associated with a variety of chronic health conditions, including obesity, coronary heart disease and diabetes, so the domino effect does not end with mental health.

This prediction may sound like an eschatological prophecy of dystopia, but there are good reasons to be optimistic too. At our disposal, we now have myriad clinical-grade digital tools and applications designed to treat and prevent anxiety and depression. All it takes is a Wi-Fi connection and a mobile phone to provide digital treatment that can reach everyone. Even more encouraging are the recent advances in the use of artificial intelligence in mental health -- more specifically augmented intelligence, the ability to embed the collective knowledge and care of humans into digital applications.

Such an approach attempts to make the best of both worlds -- the human connection along with the rich, often gamified digital experience that is driven by data science. For example, research scientists at the University of Utah founded Lyssn, whose product uses deep learning algorithms to analyze recorded psychotherapy conversations for training and quality assurance purposes -- a manual and expensive process usually conducted by a panel of psychotherapists. Lyssn's product is trained using a broad range of therapists, so the advantages are not only cost and time but also consistency and reduced bias toward any particular attitude or approach.

Other companies offer a range of therapy chatbots: X2AI's bot, Sara, uses natural language processing to engage users in conversations on Facebook Messenger, helping them manage stress and anxiety. Another example is Lark Health, a chatbot directed at managing diabetes and hypertension, which gathers and analyzes sleep, weight and nutrition information from users in daily conversation.

Such applications distill collective human knowledge into a digital experience, providing users with 24/7 access to a therapist representing a cohort of hundreds of clinicians, who are trained in a variety of different disciplines.

The challenge ahead is to go beyond the mechanics of therapeutic conversation and to model the human alliance or bond that human therapists establish with clients. For this purpose, joint teams of data scientists, clinicians and writers (like those working on team Anna at my company) need to work on creating conversational experiences that have the capacity to express curiosity about users, develop a deeper understanding of their lives, and be emotionally sensitive and attuned.

Going beyond the mechanics of interaction and attempting to build a superhuman digital therapist requires:

Establishing A Single Transdisciplinary Team: Data scientists, clinicians and content editors speak different languages. To avoid creating a modern Tower of Babel, it is critical to help them establish the same language by working closely in a single nimble and cohesive team.

Starting With A Clear Model Of A Therapist: Empathy, care and listening are a result of patterns of interaction that need to be explicitly modeled. Prior to any work on specific interventions, it is critical to specify these patterns in detail. For example, what are the personality traits of the digital therapist? What triggers in the conversation does it respond to? Which goals is it trying to accomplish? What language does it avoid using? What are the ways in which it shows interest? Curiosity? Support? How does it manage the trade-off between persisting with its own agenda and being flexible enough to let the user take the lead?

Defining A Few Simple Criteria For Success: Beyond the standard quantitative methods for assessing efficacy, define clearly how you assess the degree to which the AI was successful in establishing an alliance with the user. What does the ideal user feedback sound like? What would you want users to say when they describe the digital therapist?

Talking To Users On A Daily Basis: The experience of digital therapy is made by assembling multiple, and often complex, algorithms and mechanisms together. To make sure you are investing in the right places, always talk to users and ask them to recall specific parts of the interaction that made them feel a sense of alliance and bond. You may discover that the simplest patterns of interaction are the most important ones.
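As a toy illustration of what "explicitly modeled" interaction patterns might look like, here is a minimal trigger-response sketch with a persistence-versus-flexibility switch. The triggers, phrasings, and the simple rule-based approach are all illustrative assumptions, not the author's system, which would rely on far richer conversational models.

```python
# Conversational patterns modeled explicitly: what the bot responds to,
# and how it trades off its own agenda against letting the user lead.
# All keywords and replies below are illustrative.
TRIGGERS = [
    (("can't sleep", "insomnia"),
     "Sleep trouble can be exhausting. How long has this been going on?"),
    (("anxious", "worried"),
     "That sounds stressful. What's weighing on you most right now?"),
    (("alone", "isolated"),
     "Feeling isolated is hard. Who do you usually turn to?"),
]

def respond(user_message, agenda_prompt, let_user_lead=True):
    """Match the message against triggers; on no match, either follow
    the user's lead or persist with the session agenda."""
    text = user_message.lower()
    for keywords, reply in TRIGGERS:
        if any(k in text for k in keywords):
            return reply
    return "Tell me more about that." if let_user_lead else agenda_prompt

print(respond("I've been so anxious lately",
              "Shall we revisit your breathing exercise?"))
```

Even in this caricature, the design questions from the list above show up as concrete code: the trigger table, the reply phrasing, and the `let_user_lead` flag each encode one explicit modeling decision.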

These tools will not replace the couch, tissue box and innate professionalism of the therapist's office, but they may very well keep us healthier in times when we can't make it to that office. In times of crisis, like our current situation and those that will inevitably crop up in the future, it's important to know what our options are and work toward a healthier future in whatever ways we can.


Artificial intelligence to influence top tech trends in major way in next five years – The National

Artificial intelligence will be the common theme across the top 10 technology trends of the next few years, and it is expected to accelerate breakthroughs across key economic sectors and society, the Alibaba Damo Academy says.

The global research arm of Chinese technology major Alibaba Group says innovation will be extended from the physical world to a mixed reality, as more innovation finds its way to industrial applications and digital technology drives a green and sustainable future.

"Digital technologies are growing faster than ever," Jeff Zhang, president of Alibaba Cloud Intelligence and head of Alibaba Damo, said in a report released on Monday.

"The advancements in digitisation, 'internetisation' and intelligence are redefining a digital world that is characterised by the prevalence of mixed reality.

"Digital technology plays an important role in powering a green and sustainable future, whether it is applied in industries such as green data centres and energy-efficient manufacturing, or in day-to-day activities like paperless office."

The report was compiled after analysing millions of public papers and patent filings over the past three years and conducting interviews with about 100 scientists.

Clouds, networks and devices will have a more clearly defined division of labour in the coming years. Photo: Alibaba Damo

The rapid development of new network technology will fuel the evolution of cloud computing towards a new system called cloud-network-device convergence.

The system will allow clouds, networks and devices to have a more clearly defined division of labour.

Clouds will function as brains, responsible for centralised computing and global data processing, while networks will serve as the interconnecting tracks that join the various forms of networks on the cloud to build a ubiquitous, low-latency network.

The global cloud computing market is projected to grow to $947.3 billion by 2026, from $445.3bn in 2021, according to data platform Markets and Markets, with adoption set to increase in sectors where initiatives to work from home are prevalent.

AI is pegged to replace computers as the main production tool in scientific discovery. Photo: Alibaba Damo

AI would be a boon to scientists, with Alibaba Damo saying it will replace computers as the main production tool in scientific discovery, helping to improve efficiency in each phase of the research process from the formation of initial hypothesis to experimental procedures and the distillation of experimental findings.

This will shorten research cycles and improve the productivity of scientists.

Machine learning can process massive amounts of multidimensional and multimodal data and solve complex scientific problems, allowing scientific exploration to flourish in areas previously thought impossible, it said. As such, AI will also help to discover new scientific laws.

The global scientific research and development services sector is, unsurprisingly, a big market. The sector is forecast to grow to $822.49bn this year and $1.3 trillion by 2026, from $725.56bn in 2021, data provider ReportLinker says.

Cloud computing and AI will drive the rapid development of and demand for silicon photonics technology. Photo: Alibaba Damo

As defined by Intel, a silicon photonic chip is the combination of two of the most important inventions of the 20th century: the silicon integrated circuit and the semiconductor laser. Unlike its electronic counterparts, it enables faster data transfers over longer distances.

The rise of cloud computing and AI will drive the rapid development of silicon photonics technology and demand. The widespread use of the chips is expected in the next three years.

Research company Markets and Markets predicts that the market will grow to $4.6bn by 2027, from $1.1bn in 2021.

The current challenges, according to Alibaba Damo, are mainly in the supply chain and manufacturing processes since the design, mass production and packaging of silicon photonic chips have not been standardised and scaled up, leading to low production capacity, low yields and high costs.

Applying AI in the renewable energy sector can also contribute to achieving carbon-neutrality. Photo: Alibaba Damo

Renewable energy is one of the sectors attracting the most attention as governments prioritise sustainability. But due to the unpredictable nature of renewable energy power generation, integrating renewable energy sources into the power grid presents challenges that affect the safety and reliability of the grid.

Alibaba Damo said the application of AI in the industry is critical and indispensable for capacity prediction, scheduling optimisation, performance evaluation, failure detection and risk management, all of which translate into more efficient, more automated electric power systems and maximised resource use and stability. It would also be a key factor in achieving carbon neutrality.

A recent report by Allied Market Research said the global renewable energy market, which was worth $881.7bn in 2020, is expected to reach about $2tn by 2030.

Major economies have programmes in place to make renewables a significant part of their energy mix by that year: the US and China are on track to generate up to 50 per cent and 40 per cent, respectively, of their electricity from renewables.

The convergence of AI and precision medicine is expected to boost the integration of medical expertise. Photo: Alibaba Damo

As the Covid-19 pandemic has proved, any unexpected medical crisis will force the industry to hasten its research to achieve pinpoint accuracy.

With the medical field highly dependent on individual expertise involving a lot of trial and error, coupled with varying efficacies from patient to patient, the convergence of AI and precision medicine is expected to boost the integration of expertise and new auxiliary diagnostic technology.

It will serve as a "high-precision compass" for clinical medicine: a compass that doctors can use to diagnose diseases and make medical decisions as quickly and accurately as possible.

The medical world is already reaping the advantages of AI. For example, using AI in the early screening of breast cancers can reduce the false negative rate by 5.7 per cent in the US and 1.2 per cent in the UK, Alibaba Damo said, citing country statistics.

The global precision medicine market is poised to grow to $118.32bn by 2025, from $72.58bn in 2021, driven by companies resuming their operations and adapting to the new normal while recovering from the effects of Covid-19, according to ReportLinker.

Privacy-preserving computation techniques and its successors will be critical to effective, safe and secure data sharing. Photo: Alibaba Damo

Privacy-preserving computation, as its name implies, is the use of techniques to process data (in utility bills, for example) without revealing a user's information. In an era where one of the biggest challenges is ensuring the security of data while allowing it to flow freely between computing entities, this vertical is gaining traction as a viable solution.

Alibaba Damo said that the next three years will see groundbreaking improvements in the performance and interpretability of privacy-preserving computation, and witness the emergence of data trust entities that provide data sharing services based on the technology.

Research company Gartner says that by 2025, half of large organisations will introduce privacy-enhancing computation for processing data in untrusted environments, while professional services company Accenture says such techniques will be critical to effective, safe and secure data sharing.

However, the application of the technology has been limited to a narrow scope of small-scale computation due to performance bottlenecks, lack of confidence in the technology and standardisation issues, Alibaba Damo said.
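One concrete privacy-preserving technique is additive secret sharing, sketched below: two parties jointly compute a sum without either revealing its raw input. This is a minimal illustration of the general idea, not any specific vendor's implementation, and the bill amounts are made up.

```python
import random

MOD = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value):
    """Split a value into two random-looking additive shares."""
    r = random.randrange(MOD)
    return r, (value - r) % MOD

def reconstruct(s1, s2):
    """Recombine two shares into the original value."""
    return (s1 + s2) % MOD

# Each party shares its bill amount; shares are summed independently,
# so no single holder of shares ever sees a raw input.
a1, a2 = share(120)
b1, b2 = share(80)
total = reconstruct((a1 + b1) % MOD, (a2 + b2) % MOD)
print(total)  # 200 -- computed without revealing either raw value
```

The performance bottlenecks mentioned above come from exactly this overhead: every operation on shared data costs extra arithmetic and, in multi-party settings, extra communication rounds.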

The next three years would see a new generation of XR glasses that have an indistinguishable look and feel. Photo: Alibaba Damo

The development of technologies such as cloud-edge computing, network communications and digital twins brings extended reality (the combination of real and virtual worlds) and human-machine interaction into "full bloom", Alibaba Damo said.

XR glasses aim to further develop the immersive, mixed-reality internet. They will reshape digital applications and revolutionise the way people interact with technology in any scenario, from entertainment and social networking, to office work and shopping, to education and healthcare.

The XR market was valued at $27bn in 2018 and is expected to hit $393bn by 2025 at a healthy compound annual growth rate of 69.4 per cent, according to data provider Market Research Future.

Alibaba predicts that a new generation of XR glasses that have an indistinguishable look and feel from ordinary glasses will enter the market in the next three years and will serve as a key entry point to the next generation of the Internet.

Perceptive soft robots are expected to replace traditional robots in the manufacturing industry in the next five years. Photo: Alibaba Damo

Perceptive soft robots are flexible, programmable and deformable, and are empowered by advanced technologies such as flexible electronics and pressure adaptive materials. This enables them to handle complex tasks in various environments.

AI further enhances their perception system, making them smarter and applicable to more industry functions such as for surgeries in the medical field.

Unlike conventional robots, perceptive soft robots are machines with physically flexible bodies and enhanced perception of pressure, vision and sound, allowing them to perform highly specialised and complex tasks and to adapt to different physical environments.

The soft robotics market, still in its early stages, was valued at $1.05bn in 2020 and is expected to reach $6.37bn by 2026 at a CAGR of 35.17 per cent, according to Mordor Intelligence.

Alibaba Damo said the emergence of perceptive soft robotics will change the course of the manufacturing industry, from the mass-production of standardised products towards specialised, small-batch products.

In the next five years, they are expected to replace conventional robots in manufacturing and pave the way for the wider adoption of service robots in daily life.

Satellite-terrestrial integrated computing can enable digital services to be more accessible and inclusive across Earth. Photo: Alibaba Damo

Current terrestrial networks and computing capabilities cannot keep up with the growing demand for connectivity and digital services around the world, a gap that is especially prominent in sparsely inhabited areas such as deserts, seas and space.

Satellite-terrestrial integrated computing, Alibaba Damo says, creates a system that integrates satellites, satellite networks, terrestrial communications systems and cloud computing technologies, enabling digital services to be more accessible and inclusive across Earth.

The global satellite communication market was valued at $65.68bn in 2020 and is expected to hit $131.68bn by 2028, according to data provider Verified Market Research.

In the next three years, Alibaba Damo expects to see a large increase in the number of low-Earth orbit satellites, and the establishment of satellite networks with high-Earth orbit satellites.

In the next five years, satellites and terrestrial networks will work as computing nodes to constitute an integrated network system, providing ubiquitous connectivity.

The co-evolution of large and small-scale AI systems would create a new 'intelligent' one. Photo: Alibaba Damo

Future AI is shifting from the race on the scalability of foundation models to the co-evolution of large and small-scale models via clouds, edges and devices, which is more useful in practice.

In the co-evolution paradigm, foundation models deliver the general abilities to small-scale models that play the role of learning, inference and execution in downstream applications, Alibaba Damo said.

Small-scale models will also send the feedback of the environment to the foundation models for further co-evolution. This mechanism mutually enhances both large and small-scale models via positive cycles.

This new "intelligent system" would bring three merits: it makes it easier for small-scale models to acquire general knowledge and inductive abilities, which are then fine-tuned for their specific application scenarios; it increases the variety of data available to the foundation models; and it helps achieve the best combination of energy efficiency and training speed.
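The co-evolution loop described above can be sketched with toy stand-ins for the models. The lookup-table "models" and the `distill`/`feedback` helpers below are purely illustrative; a real system would use knowledge distillation and fine-tuning of neural networks, not dictionaries.

```python
# Toy sketch of the co-evolution loop: a large 'foundation' model hands
# general knowledge down to a small model, and the small model sends
# hard, in-domain examples back upstream for the next foundation update.
foundation = {"cat": "animal", "car": "vehicle", "oak": "plant"}

def distill(foundation, domain_words):
    """Small model copies only the knowledge relevant to its domain."""
    return {w: foundation[w] for w in domain_words if w in foundation}

def feedback(small, seen_words):
    """Inputs the small model could not handle are sent upstream."""
    return [w for w in seen_words if w not in small]

small = distill(foundation, ["cat", "oak"])          # downstream deployment
unknown = feedback(small, ["cat", "ferret", "oak"])  # environment feedback
print(small)    # {'cat': 'animal', 'oak': 'plant'}
print(unknown)  # ['ferret'] -> new data for the next foundation update
```

The positive cycle the article describes is exactly this round trip: distillation shrinks general knowledge into a deployable model, and the feedback list widens the foundation model's training data on the next iteration.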

The global AI market was valued at $62.35bn in 2020 and is seen to expand at a robust CAGR of 40.2 per cent from 2021 to 2028, according to Grand View Research.

Continuous research and innovation led by technology giants are driving the adoption of advanced technologies in industry verticals such as automotive, health care, retail, finance and manufacturing, and AI has brought technology to the centre of organisations, it said.

Updated: January 11th 2022, 4:14 AM


How A.I. is set to evolve in 2022 – CNBC

An Ubtech Walker X Robot plays Chinese chess during 2021 World Artificial Intelligence Conference (WAIC) at Shanghai World Expo Center on July 8, 2021 in Shanghai, China.

VCG | VCG via Getty Images

Machines are getting smarter and smarter every year, but artificial intelligence is yet to live up to the hype that's been generated by some of the world's largest technology companies.

AI can excel at specific, narrow tasks such as playing chess, but it struggles to do more than one thing well. A seven-year-old has far broader intelligence than any of today's AI systems, for example.

"AI algorithms are good at approaching individual tasks, or tasks that include a small degree of variability," Edward Grefenstette, a research scientist at Meta AI, formerly Facebook AI Research, told CNBC.

"However, the real world encompasses significant potential for change, a dynamic which we are bad at capturing within our training algorithms, yielding brittle intelligence," he added.

AI researchers have started to show that there are ways to efficiently adapt AI training methods to changing environments or tasks, resulting in more robust agents, Grefenstette said. He believes there will be more industrial and scientific applications of such methods this year that will produce "noticeable leaps."

While AI still has a long way to go before anything like human-level intelligence is achieved, it hasn't stopped the likes of Google, Facebook (Meta) and Amazon from investing billions of dollars into hiring talented AI researchers who can potentially improve everything from search engines and voice assistants to aspects of the so-called "metaverse."

Anthropologist Beth Singler, who studies AI and robots at the University of Cambridge, told CNBC that claims about the effectiveness and reality of AI in spaces that are now being labeled as the metaverse will become more commonplace in 2022 as more money is invested in the area and the public start to recognize the "metaverse" as a term and a concept.

Singler also warned that there could be "too little discussion" in 2022 of the effect of the metaverse on people's "identities, communities, and rights."

Gary Marcus, a scientist who sold an AI start-up to Uber and is currently executive chairman of another firm called Robust AI, told CNBC that the most important AI breakthrough in 2022 will likely be one that the world doesn't immediately see.

"The cycle from lab discovery to practicality can take years," he said, adding that the field of deep learning still has a long way to go. Deep learning is an area of AI that attempts to mimic the activity in layers of neurons in the brain to learn how to recognize complex patterns in data.
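The layered idea behind deep learning can be sketched in a few lines. This is a minimal toy illustration of the concept described above, not any lab's actual system: each layer transforms its input and passes the result onward, so later layers can respond to more abstract patterns:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid non-linearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-z)))  # squash, like a neuron's firing rate
    return outputs

def random_weights(n_out, n_in):
    """Untrained random weights; training would adjust these to fit data."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

x = [0.5, -0.2, 0.1]                           # raw input features
h = layer(x, random_weights(4, 3), [0.0] * 4)  # hidden layer: 3 inputs -> 4 units
y = layer(h, random_weights(1, 4), [0.0])      # output layer: 4 -> 1
print(f"network output: {y[0]:.3f}")           # a value in (0, 1)
```

"Deep" simply means stacking many such layers; training consists of nudging the weights so the final output matches labeled examples.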

Marcus believes the most important challenge for AI right now is to "find a good way of combining all the world's immense knowledge of science and technology" with deep learning. At the moment "deep learning can't leverage all that knowledge and instead is stuck again and again trying to learn everything from scratch," he said.

"I predict there will be progress on this problem this year that will ultimately be transformational, towards what I called hybrid systems, but that it'll be another few years before we see major dividends," Marcus added. "The thing that we probably will see this year or next is the first medicine in which AI played a substantial role in the discovery process."

One of the biggest AI breakthroughs in the last couple of years has come from London-headquartered research lab DeepMind, which is owned by Alphabet.

The company has successfully created AI software that can accurately predict the structure that proteins will fold into in a matter of days, solving a 50-year-old "grand challenge" that could pave the way for better understanding of diseases and drug discovery.

Neil Lawrence, a professor of machine learning at the University of Cambridge, told CNBC that he expects to see DeepMind target more big science questions in 2022.

Language models (AI systems that can generate convincing text, converse with humans, respond to questions, and more) are also set to improve in 2022.

The best-known language model is OpenAI's GPT-3 but DeepMind said in December that its new "RETRO" language model can beat others 25 times its size.

Catherine Breslin, a machine learning scientist who used to work on Amazon Alexa, thinks Big Tech will race toward larger and larger language models next year.

Breslin, who now runs AI consultancy firm Kingfisher Labs, told CNBC that there will also be a move toward models that combine vision, speech and language capability, rather than treat them as separate tasks.

Nathan Benaich, a venture capitalist with Air Street Capital and the co-author of the annual State of AI report, told CNBC that a new breed of companies will likely use language models to predict the most effective RNA (ribonucleic acid) sequences.

"Last year we witnessed the impact of RNA technologies as novel covid vaccines, many of them built on this technology, brought an end to nation-wide lockdowns," he said. "This year, I believe we will see a new crop of AI-first RNA therapeutic companies. Using language models to predict the most effective RNA sequences to target a disease of interest, these new companies could dramatically speed up the time it takes to discover new drugs and vaccines."

While a number of advancements could be around the corner, there are major concerns around the ethics of AI, which can be highly discriminatory and biased when trained on certain datasets. AI systems are also being used to power autonomous weapons and to generate fake pornography.

Verena Rieser, a professor of conversational AI at Heriot-Watt University in Edinburgh, told CNBC that there will be a stronger focus on ethical questions around AI in 2022.

"I don't know whether AI will be able to do much 'new' stuff by the end of 2022 but hopefully it will do it better," she said, adding that this means it would be fairer, less biased and more inclusive.

Samim Winiger, an independent AI researcher who used to work for a Big Tech firm, added that he believes there will be revelations around the use of machine learning models in financial markets, spying, and health care.

"It will raise major questions about privacy, legality, ethics and economics," he told CNBC.


Could Artificial Intelligence Do More Harm Than Good to Society? – The Motley Fool

In an increasingly digitized world, the artificial intelligence (AI) boom is only getting started. But could the risks of artificial intelligence outweigh the potential benefits these technologies might lend to society in the years ahead? In this segment of Backstage Pass, recorded on Dec. 14, Fool contributors Asit Sharma, Rachel Warren, and Demitri Kalogeropoulos discuss.

Asit Sharma: We had two questions that we were going to debate. Well, I'll have to choose one. Let me do the virtual coin toss really quick here. We're going with B, artificial intelligence has the potential to be more harmful than beneficial to society. Rachel Warren, agree or disagree?

Rachel Warren: Gosh. [laughs] This may seem like a bit of a cop out, but I don't really feel like it's a yes or no answer. I think that technology in and of itself is an amoral construct.

I think it can be used for good, I think it can be used for bad. Think of all the benefits that artificial intelligence is providing to the way companies run, how software runs, how companies monetize their products; think of companies that are using AI to power more democratized insurance algorithms, for example.

I think artificial intelligence is going to continue to provide both benefits as well as detriment to society. You think of all the positives of artificial intelligence.

But then you look at how it can be used, for example, by law enforcement agencies to find criminals. That can be a really great thing. It's empowering these law enforcement agencies to have a more efficient way of tracking down criminals, keeping people safer.

But at the same time, how fair are these algorithms? Are these algorithms judging people equally or are they including certain things that single out certain individuals that may or may not be fair in the long run and may, in fact, result in less justice?

That's just an example. For me, I think personally, artificial intelligence can do great things, I think it can be used as well for very harmful things, and I think it ultimately is something that people need to view with caution and not just automatically view it as good or evil. That's just my quick take. [laughs]

Sharma: Love it. Very well said in a short amount of time. Demitri, reaction to what Rachel said.

Demitri Kalogeropoulos: Asit, if it scares Elon Musk, it should scare me. [laughs].

Sharma: Great.

Warren: True. [laughs]

Kalogeropoulos: I would just say, yeah, I agree with a lot of what Rachel said. I think it's interesting. It clearly has the potential to be harmful in ways. I was just thinking about the last couple of weeks, where we're hearing about all these changes at Instagram and Facebook. Rachel mentioned the way these algorithms are working. We're clearly finding issues. Remember, maybe a couple of years ago, there was an issue with YouTube's algorithm that was driving users toward certain content.

The algorithm is there to maximize engagement, for example, in all these cases. It's getting smarter at doing that. It's got all this content that can do that. It's using the millions and billions of us as little testing machines to tweak that. But they've had to make adjustments to these algorithms, because they were harmful in a lot of ways without being programmed that way.

If you did a chart on Facebook ranking engagement against how close content gets to the prohibited line, engagement rises as content approaches the prohibited threshold and spikes right at it. That's just human nature, I guess. Bad news travels faster than good news, and conspiracy theories travel a lot faster than the truth. These are all weaknesses, you could say, in human psychology that algorithms can be ruthless at cashing in on, or monetizing.

That's clearly something I think we need to watch out for. Most cases, thankfully, it seems like we're finding these in time, but I think we have to be really careful that we're watching out because sometimes, who knows which ones we're not finding and years later, we find out that we were being manipulated in these ways.

Sharma: I love both of those comments. I mean, personally for me, I feel that this is a space that has enormous potential to do good. But without some type of oversight or regulation, we open the doors to really deleterious effects. Palantir is an example of a company that I won't invest in because I don't think that they really care that much about the detriment they can do.

Rachel mentioned the inadvertent harms. This may be reading between the lines, but it has been shown with some of their technologies: inadvertent racial profiling that comes from the tech they're providing to help law enforcement.

Warren: Yes. Like mass surveillance, yes.

Sharma: It's interesting, governments have been a little bit slower to think about the regulation of AI. We can vote with our pocketbooks, we can buy companies that are using AI to good effect, and as a society we can be a little bit of activist shareholders, pointing to how we want companies to behave and how seriously we want them to examine what their algorithms are arriving at. I'll stop here so I can give the two of you the last word. We've got about a minute left.

Warren: I agree with what you're saying. I think this is also something to remember. As investors, we look at all of the investment opportunities within the artificial intelligence space, and these opportunities are only going to grow. If there are aspects of this technology that concern or bother you, it's OK to say, "This looks like a really great business, but I personally don't feel comfortable, ethically speaking, investing in it."

That's OK. There are no shortage of fantastic investment opportunities available within the broader technology space. I think it's definitely something where you look at this area, there are so many potential benefits, I agree with what you were saying, there's so much potential here as well. For businesses, for companies, there's obviously a lot of profits to be made, but I think it's something to be wary of as well.

What Demitri was saying about Facebook algorithms: my timeline might look very, very different from my good friend's timeline. I click on a couple of articles, then my entire feed changes in a certain direction, and then you go deeper down the rabbit hole.

I think just the nature of how these algorithms work, it makes it extremely difficult to regulate. With that knowledge, I think it's important to approach this area and investing in it with just a bit of caution.

Sharma: Demitri, you get the last word and then we'll sign off for the night. [laughs]

Kalogeropoulos: I don't have much to add to that for sure.

Warren: [laughs]

Sharma: I know. Rachel is on fire tonight, everything is sounding so persuasive and succinct and eloquent.

Kalogeropoulos: You just nailed it. [laughs] I would just say, yeah, you can look for companies that maybe don't have those incentives. I like a company, for example, like Netflix.

If you're just evaluating something like that, if you're comparing a Facebook to a Netflix, Netflix made the decision not to advertise on their service, for example, because they don't want to get into a lot of these sticky subjects, whereas Facebook has to monetize.

It's a free service so they have to find a way to monetize it in different ways. That's just another thing to think about when you're comparing these companies.

Sharma: That's a great point, think about the business model. Sometimes, that causes behavior that you don't want to see.


Using Artificial Intelligence To See the Plasma Edge of Fusion Experiments in New Ways – SciTechDaily

Visualized are two-dimensional pressure fluctuations within a larger three-dimensional magnetically confined fusion plasma simulation. With recent advances in machine-learning techniques, these types of partial observations provide new ways to test reduced turbulence models in both theory and experiment. Credit: Image courtesy of the Plasma Science and Fusion Center

MIT researchers are testing a simplified turbulence theory's ability to model complex plasma phenomena using a novel machine-learning technique.

To make fusion energy a viable resource for the world's energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.

Abhilash Mathews, a PhD candidate in the Department of Nuclear Science and Engineering working at MIT's Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary, it is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces, factors that impact fusion reactor designs.

To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that will help predict the plasma's behavior. However, first-principles simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could be accelerated if researchers could develop reduced computer models that run much faster, but with quantified levels of accuracy.

For decades, tokamak physicists have regularly used a reduced two-fluid theory rather than higher-fidelity models to simulate boundary plasmas in experiment, despite uncertainty about accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: he combines physics with machine learning.

"A successful theory is supposed to predict what you're going to observe," explains Mathews, "for example, the temperature, the density, the electric potential, the flows. And it's the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure."

In the first paper, published in Physical Review E, Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field from an electron pressure fluctuation in the plasma consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one is robust even to noisy pressure measurements.
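The core idea, inferring one field from another via the theory's governing relation, can be illustrated at toy scale. The sketch below is my own drastic simplification, not the paper's code: it assumes a hypothetical one-dimensional relation E = -c · dp/dx between field and pressure, then recovers the theory's coefficient from noisy pressure samples by least squares, the same "use the equation itself as the fitting target" spirit as physics-informed learning:

```python
import math
import random

random.seed(1)

# Synthetic 1D "measurements" (hypothetical relation, illustration only):
# pressure p(x) = sin(x) with noise, and a field generated by E = -c * dp/dx.
xs = [i * 0.1 for i in range(50)]
true_c = 2.0
pressure = [math.sin(x) + random.gauss(0, 0.01) for x in xs]
e_field = [-true_c * math.cos(x) for x in xs]  # d/dx sin(x) = cos(x)

def finite_diff(ys, h):
    """Numerical derivative: central differences inside, one-sided at the ends."""
    n = len(ys)
    d = [0.0] * n
    for i in range(1, n - 1):
        d[i] = (ys[i + 1] - ys[i - 1]) / (2 * h)
    d[0] = (ys[1] - ys[0]) / h
    d[-1] = (ys[-1] - ys[-2]) / h
    return d

dp = finite_diff(pressure, 0.1)

# Least-squares estimate of c from E ~ -c * dp/dx: minimize sum (E + c*dp)^2.
num = sum(-e * g for e, g in zip(e_field, dp))
den = sum(g * g for g in dp)
c_hat = num / den
print(f"recovered coefficient: {c_hat:.2f}")  # close to the true value 2.0
```

The actual work replaces this linear fit with deep neural networks representing the full reduced two-fluid equations, which is what makes it robust to noisy measurements of turbulent fields.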

In the second paper, published in Physics of Plasmas, Mathews further investigates this connection, contrasting it against higher-fidelity turbulence simulations. This first-of-its-kind comparison of turbulence across models has previously been difficult if not impossible to evaluate precisely. Mathews finds that in plasmas relevant to existing fusion devices, the reduced fluid model's predicted turbulent fields are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. "But to fully validate it, one should check every connection between every variable," says Mathews.

Mathews's advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. "This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I'm excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want."

These physics-informed deep-learning methods pave new ways in testing old theories and expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.

"Abhi's work is a major achievement with the potential for broad application," he says. "For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation."

Mathews sees exciting research ahead.

"Translating these techniques into fusion experiments for real edge plasmas is one goal we have in sight, and work is currently underway," he says. "But this is just the beginning."

References:

"Uncovering turbulent plasma dynamics via deep learning from partial observations" by A. Mathews, M. Francisquez, J. W. Hughes, D. R. Hatch, B. Zhu and B. N. Rogers, 13 August 2021, Physical Review E. DOI: 10.1103/PhysRevE.104.025205

"Turbulent field fluctuations in gyrokinetic and fluid plasmas" by A. Mathews, N. Mandell, M. Francisquez, J. W. Hughes and A. Hakim, 1 November 2021, Physics of Plasmas. DOI: 10.1063/5.0066064

Mathews was supported in this work by the Manson Benedict Fellowship, Natural Sciences and Engineering Research Council of Canada, and U.S. Department of Energy Office of Science under the Fusion Energy Sciences program.


Benzinga The Artificial Intelligence Investments Podcast – Benzinga

From China, global trade, and the definition of value to inflation, commodities, and much more: join Austin Willson and Michael O'Connor as they discuss long-term topics and trends, how they can affect your portfolio, and what you can do about it!

Hosted By:

Austin Willson

Michael O'Connor

NOT FINANCIAL ADVICE

The Information Contained on this Podcast is not intended as, and shall not be understood or construed as, financial advice

Unedited Transcript:

Hey everyone, and welcome back to The Long Run Show. This is your host, Michael O'Connor, here with Austin Willson. Always good to be back.

Today we're going to be talking about something that is near and dear to my heart and to my career as well. I think Austin is going to be peppering me with some questions, and I am interested to hear his takes. We're going to talk about artificial intelligence: where it's been, where it is now, where it's going.

And most importantly, how it relates to the long run. Is it a big deal in the long run? Is it a threat to humanity, as Elon Musk has said? I'm excited to hear your thoughts, Austin, on all that. Yeah, what's really great about this episode is that you have a lot of experience with it.

You have a lot of experience with deep learning and neural networks and all of that. So this is a great time to crack open that brain of yours and let's see what flops out. That sounds pretty disgusting when I put it that way, yeah. But really, artificial intelligence, I think, is

somewhat overhyped right now. Any investment decision that you're making at the moment thinking you're going to get humongous returns from artificial intelligence, I personally think, just to get this out of the way, that's a really bogus buzzword investment at the moment.

Now, I would love for you to tell me that I'm wrong. I just think that the companies doing it right now are either private, or they're so large that it's only one small department within a huge company. You're not really getting exposure to the idea or the innovation of artificial intelligence.

You're just hopping on some sort of bandwagon, unless, like I said, you're buying a private company. But if you're buying a private company, I want to talk to you if you're listening to this show, so hit me up on LinkedIn. With that being said, I think artificial intelligence needs to be on everyone's minds, because it seems like it's got some very large implications for the future.

And not only things like: what if we had an AI algo bot trading equities? That could be wild. Okay, so they're not doing that right now, but that could be wild. What does that mean for the markets and price discovery and all that? That's somewhat of a passive-versus-active debate, but it could have large implications for the financial system.

But beyond that, just in terms of the meta, that's where I'm really interested: the long-run future of artificial intelligence and humanity's relationship to it. I think it's an interesting proposition. So first off, I think it'd be helpful to define the terms here: artificial intelligence.

What does that even mean? Can you define it simply for us? Sure. The very academic definition is simply: it's a method of some sort of external learning function that is non-human. If you want to get really technical, it's a non-human learning application that performs some sort of learning, where the standard ideas of intelligence can be found within it.

And it is not human. Does that satisfy your question? Yeah, but can you sum that up a little bit, just to make it more palatable? Sure. I would say, at the end of the day, a lot of people think of artificial intelligence as this big glob that's out there.

As in, there is an artificial intelligence out there, like a Skynet or one specific Jarvis or Ultron kind of thing. Whereas artificial intelligence is simply the category of many different ways of creating systems that can replicate aspects of human intelligence on their own.

So we have machine learning, we have deep learning; there's a wide breadth of different things that all fit inside of artificial intelligence. So it's not a singular thing, but rather a category. That's really helpful, understanding that it's a category. So within that category, I've heard of deep learning.

Oh, actually, you mentioned all three: deep learning, machine learning, and neural networks. Can you break those down for us real quick? I'm sure there's some cross-pollination between all three, but could you break those down? Sure. Like I said, there's more, but those three are probably the most commonly referenced kinds of systems and ideas in artificial intelligence.

So machine learning is actually another category. Machine learning is a similar descriptor to artificial intelligence, in that it describes certain processes that can create human-like intelligence in machines. And deep learning is a specific form of machine learning; sorry, neural networks are a specific form of this. Neural networks are very specific models that you can build; there are lots of black-box style ones where they'll have activation functions.

You can also dig really deep into what is actually going on in a neural network. But a neural network is a specific system; it's a form of machine learning, and machine learning is a form of artificial intelligence. So a deep neural network is a form of machine learning, which in turn is a form of artificial intelligence.

Does that make sense? Yes, it's about as clear as mud. No, it does make sense. And I'll tell you, from a marketing perspective, this whole space needs a rebrand, because you just said "a black box of artificial intelligence," and that sounds like something scary that I really should be afraid of.

So no wonder everyone and their brother is afraid of artificial intelligence, because, like you said, artificial intelligence in the singular sounds like the antagonist in some sort of sci-fi story. Totally makes sense, and there are hundreds of sci-fi stories like that, but we might talk about that a little bit later. So with all that groundwork laid, I want to go back to my original statement of artificial intelligence being, not a fad, but a buzzword right now.

Do you think that is true, or was I totally overstepping my bounds in saying that it's, excuse me, not a fad, but a buzzword when it comes to, quote unquote, investing in artificial intelligence? Does that track with what you know of the space? It's interesting, because the answer is complicated, and I appreciate that you mentioned you feel like there aren't any easy ways to invest.

I can personally share three individual stock picks that I personally consider solid plays if you want to get into AI; again, not financial advice. Well, do share, because I'm right here and I'd like to know. Yeah, exactly. So the biggest one, at least the most visible,

is C3 AI. They're very vocal about it because it's in their name, and their ticker is literally AI. They are very specifically an artificial intelligence company; you can routinely see their full-page ads in the Wall Street Journal. They do a variety of different work, and as their name says, their whole MO is artificial intelligence.

That being said, they definitely have other data stuff, so they're not quite the same as an AI ETF, which is my next pick, which is, I gotta find it. Where's the AI ETF? Maybe I'm not in the AI ETF. I know there's an AI ETF out there.

It looks like I'm not; oh, I'm in the quantum computing ETF. Okay, so I don't actually own an AI ETF, but I'm pretty darn sure there's a ProShares or some sort of AI ETF out there. But the other two picks that I have, that I'm directly invested in right now, again, not financial advice:

One is Palantir. Palantir has become a pretty meme stock, and it's definitely gotten hit hard in the last couple of months as of recording this. But I personally am long-term bullish on Palantir. I think data structures and artificial intelligence are really critical, and especially because Palantir is

so laser-focused on data, specifically in defense contracting. We've heard Elon Musk say that AI is going to be used as a weapon in warfare, and I'm sure it already is. And I'd imagine the first AI tools used in warfare were probably from DARPA or Lockheed Martin or Raytheon, or the other very classic legacy companies.

But I think Palantir is so laser-focused on it that the next big developments in AI for government contracting, whether for war or for defense or for municipalities, those kinds of things, I think there's probably going to be some serious innovation coming out of Palantir, from what I've heard.

There are people on both sides of the aisle: people who say Palantir is terrible, like, short Palantir, and lots of people who are like, oh, buy Palantir, whatever you do, buy Palantir. I'm in the middle; I'm focused on what they're innovating in. And I'd say, for me, it's a long play.

I'm not looking for a very short-term gain on Palantir; I'm looking at more of a three-to-eight-year outlook and expecting some innovation. So, Palantir. And then the least well-known one, I would say, because I think most people have probably heard of Palantir since it became a meme stock, and of C3. I stumbled into Palantir,

and I now own some Palantir, not knowing that they were involved in AI. Oh, wow. Okay. And again, they're not the same as C3, with AI as their ticker; Palantir does some AI, and again, they're not a totally AI company, but I think they're a good AI play. Gotcha. And then the last one, which is not very well known, is Itron, ticker ITRI.

Technically they're in power and energy, in that they make grid and transformer and load-forecasting software for electrical companies and the Department of Energy. They make those meters everyone has on their house; they make an enormous number of smart meters and internet-of-things applications for electrical power.

And they've gotten clobbered the past three months as well, a similar trend as Palantir. But I am pretty solidly bullish long-term on Itron, because, number one, if we see the kind of changes in our electrical grid infrastructure that both sides of the political aisle are calling for: on the more environmental side, the calls for more and more renewables, and

on the more hawkish side, the calls for better power grids. Anyway, whether it's more nuclear or more hydroelectric or more oil and gas, I think there's always going to be a need in modern society for innovations in electrical power.

And Itron is one of the unique ones, I think, from the research that I've done, in that they don't directly build power lines or anything like that, but they provide the systems and the software to help the electrical companies and the federal and state governments run and operate those systems as efficiently as possible.

And specifically, they have some of the best tools, mainly neural networks and deep-learning AI, that help forecast electrical loads and grid loads for these electrical companies. We've seen a lot of innovation coming out of Itron lately, and I think they're a sleeper play.

With innovations in electrical load, I think they're one of those ancillary plays that you wouldn't necessarily think of versus a BP or a General Electric or something like that, but that could be a really strong play, especially using AI for industrial and energy-grid use cases.

Interesting. You bring up some interesting points. Maybe I'm thinking about it wrong, but I still go back to my opening statement, like this is a debate or something, my previous statement that it seems like there's no great way to get direct exposure to that innovation, except for maybe this C3 AI.

And this is the first time I'm hearing of this company. But you also mentioned that they're definitely focused on data at the moment. And so I wonder, and I don't necessarily mean this in the accusatory tone it's going to come out in, if it's one of those sleight-of-hand moves companies often play, where they say they're working on the buzzword trend when really the revenue sustaining the company comes from some legacy business line that is, at best,

tangential to the new innovation, and the new innovation isn't necessarily producing any revenue for the company. Again, I'm sounding like a total bear on AI, but it is sometimes frustrating. For instance, I really dove into quantum computing, and yeah, there's a quantum computing ETF, but for a lot of those companies, most of the revenue isn't tied to quantum computing yet.

It's at such a young stage at the moment that there's just not a lot of public money that can get at it. So I appreciate you pulling up those three, but it seems like maybe I'm approaching it wrong. Maybe this whole AI innovation and its applications live within current businesses, and you're not necessarily going to get a standalone product out of it.

Maybe AI isn't a product or a sector; it's more meta than that, and covers multiple sectors and multiple products and processes on the back end. So maybe I'm thinking of it wrong, I don't know. What is your take on that?

I think that's a good point, because if you just Google "best AI stocks," you get, and I agree with, Nvidia,

Alphabet, Amazon, Microsoft, IBM, the semiconductor names. But most of them are not getting significant revenue from AI, although Amazon and IBM are actually possible exceptions, because Amazon is getting an unbelievable amount of revenue from AWS, and they're building some really incredible AI solutions in AWS.

And actually, I've been even more impressed with what IBM is building with IBM Watson and their cloud systems. A little background: I started a very small boutique AI consulting company for a short period of time and did some work directly in the field, helping small and medium-sized businesses implement these tools and software.

And I was really impressed with what IBM is doing. I think IBM could be another great pick if you want to invest in what probably is the future of AI but don't want to be completely leveraged on AI. I think IBM is a great stock, and full disclosure, I do own shares in IBM; again, not financial advice. But I think IBM might actually be, in some ways,

not an exception to what you're saying, because I think it's true: AI is tools, not necessarily a specific product. The products that come out of AI are things like IBM Watson and AWS; ultimately they're data systems, cloud data systems. Stocks like Snowflake or Qualcomm are more plays in that zone.

Like you said, it's not like buying Apple because you see the iPhone and want to invest in the company that makes that product, right? The roots of AI are very academic and very open: you can go online, look up ways to create neural networks, and make a neural network in a day.

It doesn't mean it's going to solve a huge business problem, but you can make them; they're not necessarily products that are patented and restricted. So it's difficult to commoditize AI very easily, which I think is probably a good thing, but I'd want to hear your thoughts on that.

So, the analogy to the nineties and the internet is used way too often, but it's like open source. Or even a sub-analogy to that would be the protocols for email: they're open.
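The point about accessibility is easy to demonstrate. The sketch below trains a tiny two-layer neural network from scratch, using plain NumPy and textbook backpropagation, to learn XOR, the classic toy problem; nothing here is specific to any company or product, and the layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

# Two layers: 2 inputs -> 8 tanh units -> 1 sigmoid output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    out = sigmoid(h @ W2 + b2)         # predicted probabilities
    grad_out = out - y                 # cross-entropy gradient at the output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= 0.1 * (h.T @ grad_out); b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * (X.T @ grad_h);   b1 -= 0.1 * grad_h.sum(0)

print((out > 0.5).astype(int).ravel())  # learned XOR decisions
```

Getting from here to something that moves a business metric is the hard part, which is exactly the gap between open academic techniques and sellable products discussed above.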

And so you couldn't find a company that was THE email company. There were companies involved in it, building things and products on top of it, but that wasn't necessarily their whole business. So I guess that makes sense. Maybe it's more helpful for me to think of it as

an open-source kind of project, almost. That makes sense to me. I just have trouble with, like, Google, or even Amazon. Yes, they're getting a lot from AWS, but they're also getting a lot from plenty of other revenue sources. Same with IBM. It's, okay, how do you delineate between AI

and just data, and all of the revenue they're getting from either storing data or helping people parse data? How do you really delineate between the two? I guess I was looking for a pure play in AI, which may not quite be here yet, and that's fine. So, to pick your brain, you mentioned some of your background in this; you were doing some consulting for small and medium businesses.

To pick your brain on that: I've heard, just from the little overlap between pop culture and the investing world, some large concerns about AI being an existential threat to humanity, possibly taking over the world and creating what we would call, like, an Ultron or a Jarvis.

And obviously there's been a lot of ink spilled and a lot of film rolled on sci-fi movies and novels about AI as an all-seeing eye, which, again, is really bad branding for this whole field; because it came from the academic world, they didn't market it correctly.

But is this something we should be concerned about, just from a human perspective? And then, how does that translate into an investment thesis regarding AI? But I guess first let's handle the fun part, the human-apocalyptic aspect.

Yeah. So what you're describing is

what's considered general artificial intelligence, or general AI, which describes a system that, once it's turned on, can completely operate on its own, with all of the major faculties of human intelligence. It's self-aware, it's learning, it can actualize its own learning.

It can guide itself to learn what it wants to learn and take the actions it wants to take. It understands that it exists and that it has agency, this kind of thing. This is the Skynet, the Jarvis, the apocalyptic "what happens if or when we reach general artificial intelligence" scenario. The truth is, we're not there yet by any stretch of the phrase, but there are estimates that maybe we'll get there by 2050, or 2030, or something like that.

Ultimately, it is speculation. But the interesting thing about something like a general artificial intelligence as a system, and where the biggest concern comes in, is if it's connected to the internet. There's so much information on the internet that we as human beings can't process it all, because we simply don't have the energy. If there were some sort of general artificial intelligence that could learn at the pace of whatever RAM and computing power it has, actualize that, and reach its full potential, why would it

choose to serve us? It could figure out a way to reroute itself to a different IP and become this fluid object that is all over the internet. It is science fiction for now. There is the possibility that maybe we will reach general artificial intelligence, but there is also the possibility that we'll never reach it.

There's the possibility that it's something that cannot be completely actualized, because there's a host of limitations, starting with the fact that we're still trying to figure out how the human brain works, and still trying to understand, on a very basic level, causality: how we interpret, how we communicate, and how we perceive, think and learn.

And there's been an enormous amount of innovation and advancement there; just the fact that any kind of AI exists is really impressive to think about. But I think general artificial intelligence is farther out than most people think. At the same time, there is some validity to the idea that if it happens, there's the question of why

the general artificial intelligence would care about humanity. Why would it have any problem with just figuring out how to survive as best as possible and wiping us out, or enslaving humanity, et cetera? But one of the interesting hypotheses I've heard is that if more than one team of researchers reaches general artificial intelligence at the same time, maybe there will be two or more of them, and they'll compete with each other.

And they'll be locked in this eternal artificial-intelligence combat, which I think is just wild to think about. A triple-decker sandwich of sci-fi, yeah. So maybe there are already multiple general artificial intelligence systems out there in the great ether of the internet

that are already competing in a locked, eternal struggle. It's very unlikely, but it's a fun thought experiment. They're often considered similar to a virus, in that they'd mutate however they can to survive. And the important thing is, we don't even know

the mechanism for teaching an artificial intelligence system how to survive, how to learn that it exists, or that it has any purpose or meaning. We really don't understand that even at a rudimentary level. And I think there is enough healthy skepticism and healthy worry, though there's definitely some over-worry.

There are definitely some doomsayers saying it's going to be the apocalypse. And the same thing with jobs, not Steve Jobs, but, like, working jobs. I think AI has been hyped, by some of the same people, both as this thing that's going to create a better world

and as a bad thing that's going to take away everyone's jobs, when really, right now, it's a tool, a tool meant to assist with jobs, assist with understanding work, and increase efficiency. And yes, for certain, AI has displaced jobs already: robots powered by artificial intelligence can flip hamburgers or sort mail or do these different tasks. But you need more and more data scientists and machine learning engineers.

You need more and more people keeping these systems up and teaching these systems new things. So I think, at the end of the day, it's simply another form of creative destruction, the carriage drivers before the automakers, and so on. And then you have to go through retraining.

There's a human aspect to that. I once was very bullish, in an uncaring way, on innovation, like, it doesn't matter, it's just going to create more jobs, so what's the problem? Yeah, but now the person who was flipping burgers or delivering mail has to figure out what they're going to do.

And their skill set may not be enough to go be the data scientist who builds the robot that took their job. Or they may be at a point where they don't want to get retrained, or can't get retrained, for a variety of reasons. So there is a human aspect to that. I just wanted to push back a little bit.

No, that's fair. There is a human aspect that you have to think through, and there's no great solution for it. We're getting a little far afield of artificial intelligence, but one good solution, I think, is just personal ownership: the person who sees that coming down the pike preparing for it.

I think that's a fantastic solution, or asking for help to prepare for it. So it doesn't sound like general AI will be a near-term problem, within the next 10 years, but it seems like there's some potential for it to be an issue in the long run. What kind of percentage risk would you put on that?

If you can; I'm not expecting you to be a hundred percent accurate. But I'm saying long run as in 30-plus years, so we're talking 2051 or beyond. That would be within our lifetimes too, so this is important for us right now, and important for probably most of our listeners too.

I hope they'll be around in thirty years. So, not the over-under, but the percentage risk of that actually happening: one singular general AI being in existence. I won't say "threat," because maybe it won't be a threat, but in existence.

That's a great question: a general artificial intelligence being in existence by 2051, whether a threat or otherwise.

That's a really difficult question. I know, I'm putting you on the spot. Yeah, just to give you a raw number, I would say probably twenty-five percent, which I think is a very optimistic number, optimistic as in optimistic toward it actually happening. Yes. Yeah, because as for the people who say it's guaranteed to be here by 2030, I think things are much more complex than that.

I think a lot of that is trying to get clicks. The scientific community as a whole is in a revolution of causality: we're starting to really understand why our brains think the way they do and why we make mistakes. We have behavioral science and heuristics, and we're learning a lot more about ourselves, which I think will definitely speed up artificial intelligence research and the possibility of creating a general artificial intelligence.

The more we understand ourselves and our brains, the better. But at the same time, there's a huge leap from a physical system like our brain, which has evolved for as long as it has and has very special properties we don't fully understand yet,

to trying to replicate a non-corporeal system that can do the same things we do. It's very difficult. I think it was Elon Musk who said it himself: around 2010 he said something about autonomous driving like, we're going to have it locked down by 2015. The years might not be correct, but it was some span of time like that.

And then right before the year he said it would happen, he released this official statement of, yeah, it's a lot harder than we thought to do fully autonomous driving. And if we can't do fully autonomous driving yet, how far away are we from full artificial intelligence?

Still, it is really amazing and magical to see semi-autonomous driving in practice. It's pretty incredible, the leaps that have been made in AI in the last 10 or 20 years, stuff that would have seemed like magic in 2000. If you stepped into a Tesla in 2000 and the person's not holding the wheel and it's just moving and doing its own thing, or they summon it from across the parking lot...

Yeah, it just drives itself up, and you'd be like, okay, where's the person hiding in the front driving it? It's pretty magical, what has already been accomplished in AI. So my outlook personally, in a portfolio and long-run view, is more to be in awe of everything we've already accomplished in the field,

and to be excited about that. I don't really think about general artificial intelligence on a regular basis, and I'm not super worried about it. I guess the one exception to that is China, which is very rapidly expanding all of its AI programs. So you could definitely see some uses of AI that are morally questionable. Militarily, there could be a very solid incentive to come up with something very close to a general AI for warfare, which has us talking like, gosh, what was it, WarGames, the movie with the nuclear launch codes and everything, something like that.

It's possible, but ultimately it's not as likely, or at least not as soon, as people think. And I'm not necessarily super worried about it. I'm more excited about the possibilities, excited about the things that have already been done that are improving lives and making things better, cooler and more magical.

I think you bring up a good point about the human aspect of not simply letting jobs die out. That's an important thing to note with AI as well, and an important conversation to have policy-wise. But yeah, my overall outlook is just this excitement, and the answer to the question, I would say, is a 25% chance of a general artificial intelligence, one that we either know or do not know about, by

2051.

That makes sense. Yeah, it is interesting. I was reflecting on the fact that when we don't know the future and we're trying to predict it, the path of innovation or where something is headed, we often use the past, because that's our only frame of reference.

And we all know that past returns are not indicative of future results. We all know that, but we still take that mindset and framework and apply it. We do the same thing with tools: we apply old measurement tools to new forms of growth, and that doesn't always work. In fact, from an economic perspective, that could be an argument against GDP.

Maybe that's not a really great tool anymore. So, for instance, in all of the conversation around general AI, it sounds like we're applying human characteristics that may not even be on the radar, or even close to being part of what we might call general AI in the future. We're applying these moral characteristics

just because that's the only framework we know. And I think that's what's really interesting, and obviously it's what makes the future hard to predict: we don't know what it's going to be, and we don't have a way to measure it or plan around it. Which also makes it exciting; it'd be really boring if we knew what was going to happen.

So that's all very helpful from an abating-my-nightmares perspective. But from an investment perspective, you mentioned some tickers earlier. Again, you heard where I stand; I seem very bearish, and I don't mean to sound so negative, but it just seems like there's not really a pure play in AI.

Maybe C3 AI is, but what would you do from a portfolio perspective, in light of this whole conversation around AI?

Sure. I think that, at least in terms of their positioning and what they're saying, C3 is probably one of the best pure-play options out there.

I personally own some. I think C3 is probably a good long-term play for AI; they may get bigger or get bought out, who knows. I think they're probably one of the top pure-play-styled options. But I also really like IBM, Itron and Palantir; yeah, there are lots of options. And then if you want to go for more of the chips that AI needs to run on: Nvidia, KLA, Qualcomm, Taiwan Semiconductor.

But I'd say, if you really wanted to be mostly exposed to AI itself: probably C3 AI, probably IBM, probably Itron for kind of industrial AI. Personally, I need to do more research myself, and if you really wanted to find pure plays in AI, do some digging; again, no financial advice here, and always look more in depth.

I would be surprised if there weren't other pure-play kinds of things out there, but, like you said, they could be private or just not very well known, right? Yeah. And it's tough from a trading perspective: when you get to really unknown companies, sometimes they're so thinly traded you can hardly get enough volume to make it make sense, getting in or getting out, which is often more of the issue.

So would you at all recommend ETFs, like a sector ETF, for AI? I would tend to think that with something so open-source, this would be a bad way to get at it. But again, that's me going bearish again, which is the theme of this conversation. So, not even recommend, but would you consider, for yourself, using an ETF to get exposure to AI?

I probably would. I would need to do more research on what their filtering and vetting is as an ETF, but yeah, an AI ETF, or the quantum computing ETF, is something I'm looking at as well. I think that might be valuable, because then you don't necessarily have to do an enormous amount of digging.

You're leaving that up to the fund managers. And personally, I would probably hold the stocks I currently have, the C3, the Itron, the Palantir, the IBM, and simply add the ETF on rather than sell my other AI plays to buy it. It'd be beefing up that portion of your portfolio.

Read the original here:

Benzinga The Artificial Intelligence Investments Podcast - Benzinga

Artificial Intelligence to Assist, Tutor, Teach and Assess in Higher Ed – Inside Higher Ed

Higher education already employs artificial intelligence in a number of effective ways: course and facilities scheduling, student recruitment campaign development, endowment investments and support, and many other operational activities are guided by AI at large institutions. The programs that run AI (algorithms) can use big data to project or predict outcomes based on machine learning, in which the computer learns to adapt to a myriad of changing elements, conditions and trends.

Adaptive learning is one of the early applications of AI to the actual teaching and learning process. In this case, AI is employed to orchestrate the interaction between the learner and the instructional material. This enables the program to guide the learner most efficiently toward desired outcomes based upon the learner's unique needs and preferences. Using a series of assessments, the algorithm presents a customized selection of instructional materials adapted to what the learner has demonstrated mastery of and what the learner has yet to learn. This method eliminates needless repetition of material already learned while advancing through the content at the learner's pace, ensuring that learning outcomes are accomplished.
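The selection loop described above can be sketched in a few lines. This is a hypothetical toy model, not any vendor's actual algorithm: the topic names, the 0.8 mastery threshold, and the rule of always serving the weakest unmastered topic are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    # topic -> mastery estimate in [0, 1]; in this sketch the latest
    # assessment score simply replaces the prior estimate.
    mastery: dict = field(default_factory=dict)

    def record(self, topic: str, score: float) -> None:
        self.mastery[topic] = score

def next_topic(learner: Learner, syllabus: list, threshold: float = 0.8):
    """Pick the weakest topic not yet mastered; None means all outcomes met."""
    remaining = [t for t in syllabus if learner.mastery.get(t, 0.0) < threshold]
    if not remaining:
        return None  # nothing left to teach: outcomes accomplished
    return min(remaining, key=lambda t: learner.mastery.get(t, 0.0))

student = Learner()
student.record("fractions", 0.9)  # mastery demonstrated: never shown again
student.record("decimals", 0.4)   # assessed, but below the threshold
syllabus = ["fractions", "decimals", "percents"]
print(next_topic(student, syllabus))  # -> "percents", unseen and weakest at 0.0
```

Real platforms replace the dictionary with a statistical learner model, but the core mechanic, skipping mastered material and targeting the weakest gap, is the same.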

There is great room for further growth of AI in higher ed, as Susan Fourtan writes in Fierce Education:

The potential and impact of AI on teaching have prompted some colleges and universities to take a closer look at it, accelerating its adoption across campuses. For perspective, the global AI market is projected to reach almost $170 billion by 2025. By 2028, the AI market is expected to gain momentum, reaching over $360 billion and registering a growth rate of 33.6 percent between 2021 and 2028, according to a report from research firm Fortune Business Insights. The market is mostly segmented into machine learning, natural language processing (NLP), image processing, and speech recognition.

One of the pioneers in applying AI to supporting learning at the university level, Ashok Goel of Georgia Tech, famously developed Jill Watson, an AI program that serves as a virtual graduate assistant. Since Jill's first semester in 2016, Goel has repeatedly and incrementally improved the program, expanding its potential to create additional AI assistants. The program is becoming increasingly affordable and replicable:

The first iteration of Jill Watson took between 1,000 and 1,500 person-hours to complete. While that's understandable for a groundbreaking research project, it's not a feasible time investment for a middle school teacher. So Goel and his team set about reducing the time it took to create a customized version of Jill Watson. "Now we can build a Jill Watson in less than ten hours," Goel says. That reduction in build time is thanks to Agent Smith, a new creation by Goel and his team. All the Agent Smith system needs to create a personalized Jill Watson is a course syllabus and a one-on-one Q&A session with the person teaching it. "In a sense, it's using AI to create AI," Goel says, "which is what you want in the long term, because if humans keep on creating AI, it's going to take a long time."

Increasingly, many students are accustomed to interacting with AI-driven chatbots. Serving in a wide range of capacities at colleges, chatbots commonly converse in text or computer-generated speech using natural language processing. These algorithms may even create a virtual relationship with students. Such is the case with a chatbot named Oli, tested by the Common App. For 12 months, this chatbot communicated twice a week with half a million students of the high school Class of 2021 to guide them through the college application process. In addition to the pro forma steps of the application, Oli offered friendly reminders to students to look after themselves during COVID, including suggestions to keep in touch with friends, listen to favorite music or take deep breaths. When the process was complete, Oli texted.

"Hey pal," Oli said one week before officially signing off, "I wanted to let you know that I have to say goodbye soon. Remember, even without me, you're never alone. Don't hesitate to reach out to your advisor or close ones if you need help or someone to talk to. College isn't easy, but it's exciting and you're so ready!" The relationship might have ended there. But some of Oli's human correspondents had more to say. Hundreds of them texted back, effusive in their praise for the support the chatbot had offered as they pursued college. "Research about social robots shows that children view them as sort of alive and make an attempt to build a mutual relationship," writes MIT professor Sherry Turkle. It's a type of connection, a degree of friendship, that excites some researchers and worries others.

Just last month, Google announced a new AI tutor platform to give students personalized feedback, assignments and guidance. Brandon Paykamian writes in GovTech,

[Google Head of Education] Steven Butschi described the product as an expansion of Student Success Services, Google's software suite released last year that includes virtual assistants, analytics, enrollment algorithms and other applications for higher ed. He said the new AI tutor platform collects competency skills graphs made by educators, then uses AI to generate learning activities, such as short-answer or multiple-choice questions, which students can access on an app. The platform also includes applications that can chat with students, provide coaching for reading comprehension and writing, and advise them on academic course plans based on their prior knowledge, career goals and interests.

With all of these AI applications in development and early-release phases, questions have arisen as to how we can best ensure that biases are avoided in AI algorithms used in education. Concerns have also been raised about making sure that learners recognize these are computer programs rather than direct communication with live instructors, that learners' privacy is maintained, and about related issues in the use of AI. The federal Office of Science and Technology Policy is gathering information with the intention of creating an AI Bill of Rights. Generally, the AI Bill of Rights is meant to clarify the rights and freedoms of persons using, or subject to, data-driven biometric technologies.

How is your institution preparing to integrate reliable, cost-effective and efficient AI tools for instruction, assessment, advising and deeper engagement with learners? Are the stakeholders, including faculty, staff, students and the broader community, included in the process to facilitate the broadest input and ensure the advantages and intended outcomes from the use of AI?

The rest is here:

Artificial Intelligence to Assist, Tutor, Teach and Assess in Higher Ed - Inside Higher Ed

etherFAX Unlocks Structured Data and Eliminates Information Silos with its Artificial Intelligence Solution for Automated Data Extraction – Healthcare…

HOLMDEL, N.J.

etherFAX today announced an artificial intelligence (AI) solution that provides advanced capabilities for searchable PDFs, OCR, and key-value pair extraction. These new capabilities are ideal for healthcare and other organizations that need to extract data from a broad range of applications and systems. Using Microsoft's Cognitive Services, etherFAX AI extracts and digitizes data from a range of unstructured documents and forms to eliminate information silos and dramatically improve processes and workflows.

Today, clinical care teams and healthcare administrative teams still spend a considerable amount of time typing in, clicking through, and editing electronic health records. Manually keying patient data into fields is not only time-consuming and inefficient, but can also be inaccurate and unreliable.

etherFAX AI reduces error rates associated with manual data entry by extracting information that is stored in unstructured document types, such as PDFs and paper-based forms. The solution digitizes data that can be searchable and ready to be integrated into workflows and applications, such as EMRs. To improve interoperability and reduce information silos, form recognition allows users to easily incorporate data into third-party workflows and share information across platforms.

"Healthcare providers must be able to share, access, and analyze data quickly and accurately," said Paul Banco, CEO and co-founder of etherFAX. "Automated data extraction transforms content locked in unstructured formats into usable, structured information. For healthcare organizations specifically, etherFAX AI ensures less time is spent manually entering and searching for information, helping to deliver a quality patient care experience, process claims faster, and receive timely payments."

etherFAX's AI solution for document data extraction works with multiple input formats, including JPG, PNG, PDF, and TIFF, and results can be exported in JSON or XML. Extracted data can be mapped to third-party systems, allowing tasks such as indexing patient records, scheduling, and referrals to be automated. As staff no longer have to spend valuable time unlocking unstructured data trapped in form images, they can focus on more value-added items and care initiatives to improve patient health outcomes.
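To illustrate the mapping step only: the extraction itself runs on Microsoft's Cognitive Services, which isn't reproduced here, so the dictionary below is a hypothetical stand-in for the key-value pairs an extractor might pull from a faxed intake form, and the field names are invented for the example.

```python
import json

# Hypothetical extractor output: free-form labels read off a scanned form.
extracted_pairs = {
    "Patient Name": "Jane Doe",
    "DOB": "04/12/1980",
    "Referring Provider": "Dr. Smith",
}

# Map form labels onto the field names a downstream system (e.g. an EMR)
# expects; unmapped labels are dropped rather than guessed at.
FIELD_MAP = {
    "Patient Name": "patient_name",
    "DOB": "date_of_birth",
    "Referring Provider": "referrer",
}

record = {FIELD_MAP[k]: v for k, v in extracted_pairs.items() if k in FIELD_MAP}
print(json.dumps(record, indent=2))  # JSON ready for a third-party system
```

The same record could just as easily be serialized to XML; the point is that once the data is structured, routing it into indexing, scheduling, or referral workflows is ordinary integration work.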

To learn more about data extraction from etherFAX, visit the corresponding solution page:http://www.etherfax.net/solutions/artificial-intelligence

###

Founded in 2009, etherFAX offers a secure document delivery platform and suite of applications widely used across a broad range of industries to digitize workflows and optimize business processes. As a leading provider of hybrid-cloud fax solutions supporting healthcare enterprises, etherFAX securely transmits protected health information and high-resolution, color documents directly to applications and devices with end-to-end encryption and ultra-fast transmission speeds. With more than 6 million connected endpoints, etherFAX is the world's largest document exchange network, supporting every major fax server, application, and fax-enabled device. The etherFAX partner network continues to grow and evolve to strengthen platform-agnostic document delivery to and from fax providers, fax servers, EHRs, and Health Information Exchanges. etherFAX's secure, cloud-based, and encrypted data exchange solutions are SOC 2 compliant, HIPAA compliant, PCI DSS certified, and HITRUST CSF certified. For more information, visit www.etherfax.net, call us at 877-384-9866, or email [emailprotected]

Read the original here:

etherFAX Unlocks Structured Data and Eliminates Information Silos with its Artificial Intelligence Solution for Automated Data Extraction - Healthcare...

Identifying Risk of Adverse Outcomes in COVID-19 Patients via Artificial Intelligence-Powered Analysis of 12-Lead Intake Electrocardiogram – DocWire…

This article was originally published here

Cardiovasc Digit Health J. 2021 Dec 31. doi: 10.1016/j.cvdhj.2021.12.003. Online ahead of print.

ABSTRACT

BACKGROUND: Adverse events in COVID-19 are difficult to predict. Risk stratification is encumbered by the need to protect healthcare workers. We hypothesize that AI can help identify subtle signs of myocardial involvement in the 12-lead electrocardiogram (ECG), which could help predict complications.

OBJECTIVE: Use intake ECGs from COVID-19 patients to train AI models to predict risk of mortality or major adverse cardiovascular events (MACE).

METHODS: We studied intake ECGs from 1448 COVID-19 patients (60.5% male, 63.4 ± 16.9 years). Records were labeled by mortality (death vs. discharge) or MACE (no events vs. arrhythmic, heart failure [HF], or thromboembolic [TE] events), then used to train AI models; these were compared to conventional regression models developed using demographic and comorbidity data.

RESULTS: 245 (17.7%) patients died (67.3% male, 74.5 ± 14.4 years); 352 (24.4%) experienced at least one MACE (119 arrhythmic; 107 HF; 130 TE). AI models predicted mortality and MACE with area under the curve (AUC) values of 0.60 ± 0.05 and 0.55 ± 0.07, respectively; these were comparable to AUC values for conventional models (0.73 ± 0.07 and 0.65 ± 0.10). There were no prominent temporal trends in mortality rate or MACE incidence in our cohort; holdout testing with data from after a cutoff date (June 9, 2020) did not degrade model performance.

CONCLUSION: Using intake ECGs alone, our AI models had limited ability to predict hospitalized COVID-19 patients' risk of mortality or MACE. Our models' accuracy was comparable to that of conventional models built using more in-depth information, but translation to clinical use would require higher sensitivity and positive predictive value. In the future, we hope that mixed-input AI models utilizing both ECG and clinical data may be developed to enhance predictive accuracy.

PMID:35005676 | PMC:PMC8719367 | DOI:10.1016/j.cvdhj.2021.12.003

The rest is here:

Identifying Risk of Adverse Outcomes in COVID-19 Patients via Artificial Intelligence-Powered Analysis of 12-Lead Intake Electrocardiogram - DocWire...

193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence – UN News

Artificial intelligence is present in everyday life, from booking flights and applying for loans to steering driverless cars. It is also used in specialized fields such as cancer screening or to help create inclusive environments for the disabled.

According to UNESCO, AI is also supporting the decision-making of governments and the private sector, as well as helping combat global problems such as climate change and world hunger.

However, the agency warns that the technology is bringing unprecedented challenges.

"We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable artificial intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues," UNESCO explained in a statement.

Considering this, the adopted text aims to guide the construction of the necessary legal infrastructure to ensure the ethical development of this technology.

"The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member States in its implementation and ask them to report regularly on their progress and practices," said UNESCO chief Audrey Azoulay.

Unsplash/Maxime Valcarce

The increase in data is key to advances made in artificial intelligence.

The text aims to highlight the advantages of AI, while reducing the risks it also entails. According to the agency, it provides a guide to ensure that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.

One of its main calls is to protect data, going beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. The Recommendation also explicitly bans the use of AI systems for social scoring and mass surveillance.

The text also emphasises that AI actors should favour data, energy and resource-efficient methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues.

"Decisions impacting millions of people should be fair, transparent and contestable. These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepen them," said Gabriela Ramos, UNESCO's Assistant Director General for Social and Human Sciences.

You can read the full text of the decision here.

View post:

193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence - UN News

Artificial intelligence in oncology: current applications and future perspectives | British Journal of Cancer – Nature.com

In this paper, a comprehensive overview of current applications of AI in oncology-related areas is provided, specifically describing the AI-based devices that have already obtained official approval to enter clinical practice. From its birth, AI has demonstrated its cross-cutting importance in all scientific branches, showing impressive growth potential for the future. As highlighted in this study, this growth has also extended to oncology and related specialties.

In general, the FDA-approved devices have not been conceived as substitutes for the classical analysis/diagnostic workflow, but are intended as integrative tools, to be used in selected cases, potentially representing a decisive step toward improving the management of cancer patients. Currently, in this field, the branches where AI is gaining the largest impact are the diagnostic areas, which account for the vast majority of approved devices (>80%), in particular radiology and pathology.

Cancer diagnostics classically represents the necessary starting point for designing appropriate therapeutic approaches and clinical management, and its AI-based refinement is a very important achievement. Furthermore, this indicates that future developments of AI should also consider unexplored but pivotal horizons in this landscape, including drug discovery, therapy administration and follow-up strategies. In our opinion, to bring a decisive improvement in the management of cancer patients, the growth of AI should follow comprehensive and multidisciplinary patterns. This represents one of the most important opportunities provided by AI, which will allow the correct interaction and integration of oncology-related areas for a specific patient, making the challenging goals of personalised medicine possible.

The cancer types now benefiting most from AI-based devices in clinical practice are, first of all, breast cancer, lung cancer and prostate cancer. This directly reflects their higher incidence compared with other tumour types, but in the future additional tumour types should be taken into account, including rare tumours that still suffer from the lack of standardised approaches. Since AI is based on the collection and analysis of large datasets of cases, however, improvement in the treatment of rare neoplasms will likely be a late achievement. Notably, considered together, rare tumours are one of the most important categories in precision oncology [11]. Thus, in our opinion, ongoing strategies of AI development cannot ignore this tumour group; although the potential benefits seem far away, it is already time to start collecting data on rare neoplasms.

One of the most promising expectations for AI is the possibility of integrating the different and composite data derived from multi-omics approaches to oncologic patients. AI tools may be the only ones able to manage the large amount of data from different types of analysis, including information derived from DNA and RNA sequencing. Along this line, the recent release of the American College of Medical Genetics standards and guidelines for the interpretation of sequence variants [12] has fostered a new wave of AI development, with innovative opportunities in precision oncology (https://www.businesswire.com/news/home/20190401005976/en/Fabric-Genomics-Announces-AI-based-ACMG-Classification-Solution-for-Genetic-Testing-with-Hereditary-Panels; last access 09/21/2021). In our opinion, however, the lack of ground-truth information derived from protected health-data repositories still represents a bottleneck in evaluating the accuracy of AI applications for clinical decision-making.

Considered overall, AI is having a growing impact on all scientific branches, including oncology and its related fields, as highlighted in this study. For designing new development strategies with concrete impact, the first steps are represented by knowing its historical background and understanding its current achievements. As highlighted here, AI has already entered oncologic clinical practice, but continuous and increasing efforts are warranted to allow AI to express its entire potential. In our opinion, the creation of multidisciplinary/integrative developmental views, the immediate recognition of the importance of all neoplasms, including rare tumours, and continuous support to guarantee its growth represent at this time the most important challenges for finalising the AI revolution in oncology.

Excerpt from:

Artificial intelligence in oncology: current applications and future perspectives | British Journal of Cancer - Nature.com

Computer Conservation: Lily Xu Uses Artificial Intelligence To Stop Poaching Around the World – SciTechDaily

By Harvard John A. Paulson School of Engineering and Applied Sciences, November 28, 2021

Lily Xu. Credit: Eliza Grinnell/Harvard SEAS

Lily Xu knew from a young age how much the environment and conservation mattered to her.

By 9 years old, she'd already decided to eat vegetarian because, as she put it, "I didn't want to hurt animals."

Xu grew up believing her passions would always be separate from her professional interest in computer science. Then she became a graduate student in Milind Tambe's Teamcore Lab, and everything changed.

Xu is now doing award-winning research into using machine learning and artificial intelligence to help conservation and anti-poaching efforts around the world. Her recent paper, "Learning, Optimization, and Planning Under Uncertainty for Wildlife Conservation," won the 2021 INFORMS Doing Good with Good OR Student Paper Competition.

"From our earliest conversations, it was crystal clear that Lily was very passionate about sustainability, conservation, and the environment," said Tambe, the Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). "This was also the reason our wavelengths matched and I went out of my way to recruit her and ensure she joined my group."

In the Teamcore Lab, Xu helped develop Protection Assistant for Wildlife Security (PAWS), an artificial intelligence system that interfaces with a database used by park rangers to record observations of illegal poaching and predict which areas are likely to be poaching hotspots. The system makes it easier for rangers to choose the best locations to patrol.
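The core idea described above can be illustrated with a toy sketch: score areas of a park by features drawn from past ranger observations, then direct patrols to the highest-risk cells. This is a minimal, hypothetical illustration in the spirit of PAWS, not the actual system; the cell names, features, and weights are invented for the example, and the real system learns its model from patrol data rather than using hand-set weights.

```python
# Hypothetical sketch of hotspot-style patrol planning, loosely inspired by
# the PAWS concept described in the article. All names and weights below are
# illustrative assumptions, not the real system's model.

def rank_patrol_cells(cells, top_k=2):
    """Rank park grid cells by a simple poaching-risk score.

    Each cell records historical snare counts and an accessibility
    feature (e.g., proximity to roads or water). A real system would
    learn these weights from ranger patrol records.
    """
    def risk(cell):
        # More past snares and easier access -> higher predicted risk.
        return 0.7 * cell["past_snares"] + 0.3 * cell["accessibility"]
    return [c["id"] for c in sorted(cells, key=risk, reverse=True)[:top_k]]

cells = [
    {"id": "A1", "past_snares": 12, "accessibility": 0.9},
    {"id": "B4", "past_snares": 2,  "accessibility": 0.1},
    {"id": "C2", "past_snares": 8,  "accessibility": 0.7},
]
print(rank_patrol_cells(cells))  # highest-risk cells first: ['A1', 'C2']
```

With limited rangers, the value of even a crude ranking is that patrol hours concentrate where removals are most likely, which is the effect the Srepok field tests aimed to measure.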

Lily Xu poses at the entrance to Srepok Wildlife Sanctuary in Cambodia. Credit: Lily Xu

In 2019, Xu and the Teamcore Lab partnered with the Srepok Wildlife Sanctuary in Cambodia to test the efficacy of PAWS. At the time, the sanctuary only had 72 rangers to patrol an area slightly larger than the state of Rhode Island.

"Our work with Cambodia was the most intensive collaboration with a park that we've had," said Xu. "We had several months of meetings, and our interactions with them and the feedback they were giving us about the process really shaped the design of our algorithms."

Xu played a lead role in implementing field tests of the PAWS program. Through Tambe, Xu, and her lab mates, Srepok's rangers greatly increased the number of poachers' snares they removed throughout the sanctuary.

"Lily has led and taken PAWS from a small research concept to a globally impactful research effort leading to removal of thousands of lethal animal snares, saving endangered wildlife globally," said Tambe. "Lily has led a global effort that has made the PAWS software available worldwide to hundreds of national parks. This is true global impact, aiming to save endangered wildlife around the world."

Lily Xu patrols Srepok Wildlife Sanctuary in Cambodia. Credit: Lily Xu

Xu has always loved nature, but didn't get to experience much of it while growing up in the Maryland suburbs of Washington, D.C. Once she got to Dartmouth College as an undergraduate in 2014, she finally got to immerse herself in the outdoors.

"I went hiking and camping for the first time as part of my freshman orientation trip, just absolutely fell in love with it, and then spent as much time as I could outdoors," she said. "That made me even more attuned to how precious the natural environment is, and how much I care about doing my part to preserve it."

She eventually began to help organize Dartmouth's first-year trip and took on leadership roles with the school's sophomore trip and canoe club. Xu didn't want to just experience nature; she wanted others to care about it too.

That's continued at Harvard, where she's mentored four students since the summer of 2020 and been part of several mentorship teams.

"I care a lot about mentorship in all capacities, whether that's bringing people out of their comfort zone, encouraging them to explore the outdoors and realize that this is a place for them," Xu said. "The outdoors community is traditionally wealthy and traditionally white. I'm neither of those things, and I really want to encourage other people and show them that this can be their space too. Similarly, from a computer science standpoint, this is a field that is traditionally male-dominated, and especially in AI research, it's traditionally people in the western world."

Xu has published multiple award-winning papers through her work on PAWS. A paper presented at the 35th AAAI Conference on Artificial Intelligence, "Dual-Mandate Patrols: Multi-Armed Bandits for Green Security," was named a Best Paper Award Runner-Up as a top-six paper out of nearly 1,700 accepted papers, while another publication, "Enhancing Poaching Predictions for Under-Resourced Wildlife Conservation Parks Using Remote Sensing Imagery," won the Best Lightning Paper Award at the Machine Learning for Development Workshop at the 34th Conference on Neural Information Processing Systems in 2020.

Xu is working to address those disparities as a member of Mechanism Design for Social Good (MD4SG), a multi-school, multi-disciplinary research initiative that organizes working groups and colloquium series to address the needs of underserved and marginalized communities all over the world. Xu joined MD4SG in 2020 as co-organizer for the group's environmental working group, and this past March became a co-organizer for the entire organization.

"I thought, 'Oh, this sounds like a phenomenal opportunity,' because I don't really know of a strong community of computational researchers who are working on environmental challenges, and I would love to help foster a community," Xu said. "Our working group, for example, has really been able to bring in people from all around the world."

"She's fantastic to work with in all of these areas," said Bryan Wilder, PhD '21, a former Teamcore Lab member and member of the MD4SG leadership team. "She has the combination of being incredibly engaged and energetic and really making things happen, while also just being a kind person to work with."

For Xu, research is about more than just publishing; it's about building relationships and fostering community engagement.

"We are researchers who are not just trying to get your data sets, publish a paper and then walk away," said Xu. "We are here for the long run. We are committed. We want to achieve conservation results as much as we want to achieve academic publication."

View post:

Computer Conservation: Lily Xu Uses Artificial Intelligence To Stop Poaching Around the World - SciTechDaily

Artificial intelligence may not actually be the solution for stopping the spread of fake news – The Conversation CA

Disinformation has been used in warfare and military strategy over time. But it is undeniably being intensified by the use of smart technologies and social media. This is because these communication technologies provide a relatively low-cost, low-barrier way to disseminate information basically anywhere.

The million-dollar question then is: Can this technologically produced problem of scale and reach also be solved using technology?

Indeed, the continuous development of new technological solutions, such as artificial intelligence (AI), may provide part of the solution.

Technology companies and social media enterprises are working on the automatic detection of fake news through natural language processing, machine learning and network analysis. The idea is that an algorithm will identify information as fake news, and rank it lower to decrease the probability of users encountering it.
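The detect-then-downrank pipeline described above can be sketched in a few lines. This is a deliberately crude illustration: the keyword score below stands in for a learned NLP credibility model (an assumption for the example), and the term list and weights are invented; production systems use trained classifiers over content and network signals, not word lists.

```python
# Toy sketch of the "detect disinformation, then rank it lower" idea.
# SUSPECT_TERMS and the scoring weights are illustrative assumptions.

SUSPECT_TERMS = {"miracle", "shocking", "they don't want you to know"}

def credibility_score(text):
    """Crude proxy: fewer sensationalist terms -> higher credibility in [0, 1]."""
    lowered = text.lower()
    hits = sum(term in lowered for term in SUSPECT_TERMS)
    return max(0.0, 1.0 - 0.4 * hits)

def rank_feed(posts):
    """Order posts so low-credibility items surface less often."""
    return sorted(posts, key=credibility_score, reverse=True)

feed = [
    "Shocking miracle cure doctors hate",
    "City council approves new transit budget",
]
print(rank_feed(feed)[0])  # the credible item is shown first
```

Note that nothing is deleted: the suspect post is only demoted, which is exactly why the approach lowers the probability of exposure rather than guaranteeing users never see the content.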

From a psychological perspective, repeated exposure to the same piece of information makes it likelier for someone to believe it. When AI detects disinformation and reduces the frequency of its circulation, this can break the cycle of reinforced information consumption patterns.

However, AI detection remains unreliable. First, current detection is based on assessing the text (content) and its social network to determine credibility. While AI can determine the origin of sources and the dissemination pattern of fake news, the fundamental problem lies in how it verifies the actual nature of the content.

Theoretically speaking, if the amount of training data is sufficient, the AI-backed classification model would be able to interpret whether an article contains fake news or not. Yet the reality is that making such distinctions requires prior political, cultural and social knowledge, or common sense, which natural language processing algorithms still lack.

Read more: An AI expert explains why it's hard to give computers something you take for granted: Common sense

In addition, fake news can be highly nuanced when it is deliberately altered to appear as real news while containing false or manipulative information, as a pre-print study shows.

Classification analysis is also heavily influenced by the theme: AI often differentiates by topic, rather than genuinely assessing the content of the issue, to determine its authenticity. For example, articles related to COVID-19 are more likely to be labelled as fake news than other topics.

One solution would be to employ people to work alongside AI to verify the authenticity of information. For instance, in 2018, the Lithuanian defence ministry developed an AI program that flags disinformation within two minutes of its publication and sends those reports to human specialists for further analysis.

A similar approach could be taken in Canada by establishing a national special unit or department to combat disinformation, or supporting think tanks, universities and other third parties to research AI solutions for fake news.

Controlling the spread of fake news may, in some instances, be considered censorship and a threat to freedom of speech and expression. Even a human may have a hard time judging whether information is fake or not. And so perhaps the bigger question is: Who and what determine the definition of fake news? How do we ensure that AI filters will not drag us into the false positive trap, and incorrectly label information as fake because of its associated data?

An AI system for identifying fake news may have sinister applications. Authoritarian governments, for example, may use AI as an excuse to justify the removal of any articles or to prosecute individuals not in favour of the authorities. And so, any deployment of AI and any relevant laws or measurements that emerge from its application will require a transparent system with a third party to monitor it.

Future challenges remain as disinformation especially when associated with foreign intervention is an ongoing issue. An algorithm invented today may not be able to detect future fake news.

For example, deep fakes, which are highly realistic and difficult-to-detect digital manipulations of audio or video, are likely to play a bigger role in future information warfare. And disinformation spread via messaging apps such as WhatsApp and Signal is becoming more difficult to track and intercept because of end-to-end encryption.

A recent study showed that 50 per cent of the Canadian respondents received fake news through private messaging apps regularly. Regulating this would require striking a balance between privacy, individual security and the clampdown of disinformation.

While it is definitely worth allocating resources to combating disinformation using AI, caution and transparency are necessary given the potential ramifications. New technological solutions, unfortunately, may not be a silver bullet.

Read more from the original source:

Artificial intelligence may not actually be the solution for stopping the spread of fake news - The Conversation CA

Seattle Researchers Claim to Have Built Artificial Intelligence That Has Morality – The Great Courses Daily News

By Jonny Lupsha, Current Events Writer

Due to computational programming, artificial intelligence may seem like it understands issues and has a sense of morality, but philosophically and scientifically, is that possible? Photo by PopTika / Shutterstock

Many questions have arisen since the advent of artificial intelligence (AI), even in its most primitive incarnations. One philosophical point is whether AI can actually reason and make ethical decisions in an abstract sense, rather than one deduced by coding and computation.

For example, if you program into an AI that intentionally harming a living thing without provocation is bad and not to be done, will the AI understand the idea of "bad," or why doing so is bad? Or will it abstain from the action without knowing why?

Researchers from a Seattle lab claim to have developed an AI machine with its own sense of morality, though the answers it gives only lead to more questions. Are its morals only a reflection of those of its creators, or did it create its own sense of right and wrong? If so, how?

Before his unfortunate passing, Dr. Daniel N. Robinson, a member of the philosophy faculty at Oxford University, explained in his video series "Great Ideas of Psychology" that the strong AI thesis may be asking relevant questions to solve the mystery.

"Imagine," Dr. Robinson said, "if someone built a general program to function that way, so the program could provide expert judgments on cardiovascular disease, constitutional law, trade agreements, and so on." If the programmer could then have the program perform these tasks in a way indistinguishable from human experts, the position of the strong AI thesis is that its programmers have conferred on it an expert intelligence.

The strong AI thesis suggests that unspecified computational processes can exist which then would sufficiently constitute intentionality due to their existence. Intentionality means making a deliberate, conscious decision, which in turn implies reasoning and a sense of values. However, is that really possible?

"The incompleteness theorem (Gödel's theorem) says that any formal system is incomplete in that it will be based on, it will require, it will depend on a theorem or axiom, the validity of which must be established outside the system itself," Dr. Robinson said. "Gödel's argument is a formal argument and it is true."

What do we say about any kind of computational device that would qualify as intelligent in the sense in which the artificial intelligence community talks about artificial intelligence devices?

Kurt Gödel developed this theorem with an apparent exception for human intelligence that liberates it from the limitations of his own theorem. In other words, Gödel believed there must be something about human rationality and intelligence that can't be captured by a formal system with the power to generate, say, an arithmetic.

"If you accept that as a general proposition, then what you would have to say is that human intelligence cannot be mimicked or modeled on purely computational grounds," Dr. Robinson said. "So, one argument against the strong AI thesis is that it's not a matter of time before it succeeds and redeems its promises. It will never succeed and redeem its promises for the simple reason that the intelligence it seeks to simulate, or model, or duplicate, is, in fact, not a computationally-based [...] intelligence."

Should the mystery ever be solved, we may finally be able to answer Philip K. Dicks question: Do androids dream of electric sheep?

Edited by Angela Shoemaker, The Great Courses Daily

Read the rest here:

Seattle Researchers Claim to Have Built Artificial Intelligence That Has Morality - The Great Courses Daily News

6 positive AI visions for the future of work – World Economic Forum

Current trends in AI are nothing if not remarkable. Day after day, we hear stories about systems and machines taking on tasks that, until very recently, we saw as the exclusive and permanent preserve of humankind: making medical diagnoses, drafting legal documents, designing buildings, and even composing music.

Our concern here, though, is with something even more striking: the prospect of high-level machine intelligence systems that outperform human beings at essentially every task. This is not science fiction. In a recent survey of leading computer scientists, the median estimate put a 50% chance on this technology arriving within 45 years.

Importantly, that survey also revealed considerable disagreement. Some see high-level machine intelligence arriving much more quickly, others far more slowly, if at all. Such differences of opinion abound in the recent literature on the future of AI, from popular commentary to more expert analysis.

Yet despite these conflicting views, one thing is clear: if we think this kind of outcome might be possible, then it ought to demand our attention. Continued progress in these technologies could have extraordinarily disruptive effects: it would exacerbate recent trends in inequality, undermine work as a force for social integration, and weaken a source of purpose and fulfilment for many people.

In April 2020, an ambitious initiative called Positive AI Economic Futures was launched by Stuart Russell and Charles-Edouard Bouée, both members of the World Economic Forum's Global AI Council (GAIC). In a series of workshops and interviews, over 150 experts from a wide variety of backgrounds gathered virtually to discuss these challenges, as well as possible positive artificial intelligence visions and their implications for policymakers.

Those included Madeline Ashby (science fiction author and expert in strategic foresight), Ken Liu (Hugo Award-winning science fiction and fantasy author), and economists Daron Acemoglu (MIT) and Anna Salomons (Utrecht), among many others. What follows is a summary of these conversations, developed in the Forum's report Positive AI Economic Futures.

Participants were divided on this question. One camp thought that, freed from the shackles of traditional work, humans could use their new freedom to engage in exploration, self-improvement, volunteering, or whatever else they find satisfying. Proponents of this view usually supported some form of universal basic income (UBI), while acknowledging that our current system of education hardly prepares people to fashion their own lives, free of any economic constraints.

The second camp in our workshops and interviews believed the opposite: traditional work might still be essential. To them, UBI is an admission of failure: it assumes that most people will have nothing of economic value to contribute to society. They can be fed, housed, and entertained mostly by machines but otherwise left to their own devices.

People will be engaged in supplying interpersonal services that can be provided, or which we prefer to be provided, only by humans. These include therapy, tutoring, life coaching, and community-building. That is, if we can no longer supply routine physical labour and routine mental labour, we can still supply our humanity. For these kinds of jobs to generate real value, we will need to be much better at being human: an area where our education system and scientific research base is notoriously weak.

So, whether we think that the end of traditional work would be a good thing or a bad thing, it seems that we need a radical redirection of education and science to equip individuals to live fulfilling lives or to support an economy based largely on high-value-added interpersonal services. We also need to ensure that the economic gains born of AI-enabled automation will be fairly distributed in society.

One of the greatest obstacles to action is that, at present, there is no consensus on what future we should target, perhaps because there is hardly any conversation about what might be desirable. This lack of vision is a problem because, if high-level machine intelligence does arrive, we could quickly find ourselves overwhelmed by unprecedented technological change and implacable economic forces. This would be a vast opportunity squandered.

For this reason, the workshop attendees and interview participants, from science-fiction writers to economists and AI experts, attempted to articulate positive visions of a future where Artificial Intelligence can do most of what we currently call work.

These scenarios represent possible trajectories for humanity. None of them, though, is unambiguously achievable or desirable. And while there are elements of important agreement and consensus among the visions, there are often revealing clashes, too.

The economic benefits of technological progress are widely shared around the world. The global economy is 10 times larger because AI has massively boosted productivity. Humans can do more and achieve more by sharing this prosperity. This vision could be pursued by adopting various interventions, from introducing a global tax regime to improving insurance against unemployment.

Large companies focus on developing AI that benefits humanity, and they do so without holding excessive economic or political power. This could be pursued by changing corporate ownership structures and updating antitrust policies.

Human creativity and hands-on support give people time to find new roles. People adapt to technological change and find work in newly created professions. Policies would focus on improving educational and retraining opportunities, as well as strengthening social safety nets for those who would otherwise be worse off due to automation.


Society decides against excessive automation. Business leaders, computer scientists, and policymakers choose to develop technologies that increase rather than decrease the demand for workers. Incentives to develop human-centric AI would be strengthened and automation taxed where necessary.

New jobs are more fulfilling than those that came before. Machines handle unsafe and boring tasks, while humans move into more productive, fulfilling, and flexible jobs with greater human interaction. Policies to achieve this include strengthening labour unions and increasing worker involvement on corporate boards.

In a world with less need to work and basic needs met by a universal basic income (UBI), well-being increasingly comes from meaningful unpaid activities. People can engage in exploration, self-improvement, volunteering or whatever else they find satisfying. Greater social engagement would be supported.

The intention is for this report to start a broader discussion about what sort of future we want and the challenges that will have to be confronted to achieve it. If technological progress continues its relentless advance, the world will look very different for our children and grandchildren. Far more debate, research, and policy engagement are needed on these questions: they are now too important for us to ignore.

Written by

Stuart Russell, Professor of Computer Science and Director of the Center for Human-Compatible AI, University of California, Berkeley

Daniel Susskind, Fellow in Economics, Oxford University, and Visiting Professor, King's College London

The views expressed in this article are those of the author alone and not the World Economic Forum.

Read the original here:

6 positive AI visions for the future of work - World Economic Forum

Defining what’s ethical in artificial intelligence needs input from Africans – The Conversation CA

Artificial intelligence (AI) was once the stuff of science fiction. But it's becoming widespread. It is used in mobile phone technology and motor vehicles. It powers tools for agriculture and healthcare.

But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google's Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper Gebru and another researcher, Joy Buolamwini, had shown that facial recognition software was less accurate at identifying women and people of colour than white men. Biases in training data can have far-reaching and unintended effects.

There is already a substantial body of research about ethics in AI. This highlights the importance of principles to ensure technologies do not simply worsen biases or even introduce new social harms. As the UNESCO draft recommendation on the ethics of AI states:

We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.

In recent years, many frameworks and guidelines have been created that identify objectives and priorities for ethical AI.

This is certainly a step in the right direction. But it's also critical to look beyond technical solutions when addressing issues of bias or inclusivity. Biases can enter at the level of who frames the objectives and balances the priorities.

In a recent paper, we argue that inclusivity and diversity also need to be at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent when considering the growth of AI research and machine learning across the African continent.

Research and development of AI and machine learning technologies is growing in African countries. Programmes such as Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have so far been held in 27 different African countries, illustrate the interest and human investment in the fields.

The potential of AI and related technologies to promote opportunities for growth, development and democratisation in Africa is a key driver of this research.

Yet very few African voices have so far been involved in the international ethical frameworks that aim to guide the research. This might not be a problem if the principles and values in those frameworks had universal application. But it's not clear that they do.

For instance, the European AI4People framework offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been criticised within the applied ethical field of bioethics, where it is seen as failing to do justice to the communitarian values common across Africa. These focus less on the individual and more on the community, and may even require exceptions to such a principle in order to allow for effective interventions.

Challenges like these, or even the acknowledgement that such challenges could exist, are largely absent from the discussions and frameworks for ethical AI.

Just as training data can entrench existing inequalities and injustices, so can failing to recognise the possibility of diverse sets of values that vary across social, cultural and political contexts.

In addition, failing to take into account social, cultural and political contexts can mean that even a seemingly perfect ethical technical solution can be ineffective or misguided once implemented.

For machine learning to be effective at making useful predictions, any learning system needs access to training data. This involves samples of the data of interest: inputs in the form of multiple features or measurements, and outputs, which are the labels scientists want to predict. In most cases, both the features and the labels require human knowledge of the problem. But a failure to correctly account for the local context could result in underperforming systems.
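The feature-and-label setup described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the data, labels, and classifier choice are all invented for this example, not drawn from the research discussed here): a one-nearest-neighbour classifier trained on (feature, label) pairs, showing that a model can only ever assign labels that its training sample contains.

```python
def predict(train, x):
    """Return the label of the training example whose feature is closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Hypothetical training data: a single feature (say, structure size in m^2)
# paired with the label scientists want to predict.
train = [
    (50, "dwelling"),
    (60, "dwelling"),
    (400, "warehouse"),
    (500, "warehouse"),
]

print(predict(train, 55))   # a case well covered by the training sample
print(predict(train, 450))  # likewise well covered

# A structure type absent from the sample (e.g. a 150 m^2 compound built
# from different materials) is still forced into one of the known labels.
# The model cannot flag what it has never seen -- the gap is invisible
# unless the people assembling the data account for the local context.
print(predict(train, 150))
```

The point is not the classifier itself but the data: if the training sample omits the structures, materials, or populations of a region, the system's predictions for that region will be confidently wrong rather than cautiously uncertain.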

For example, mobile phone call records have been used to estimate population sizes before and after disasters. However, vulnerable populations are less likely to have access to mobile devices, so this kind of approach could yield results that aren't useful.

Similarly, computer vision technologies for identifying different kinds of structures in an area will likely underperform where different construction materials are used. In both of these cases, as we and other colleagues discuss in another recent paper, not accounting for regional differences may have profound effects on anything from the delivery of disaster aid, to the performance of autonomous systems.

AI technologies must not simply worsen or incorporate the problematic aspects of current human societies.

Being sensitive to and inclusive of different contexts is vital for designing effective technical solutions. It is equally important not to assume that values are universal. Those developing AI need to start including people of different backgrounds: not just in the technical aspects of designing data sets and the like but also in defining the values that can be called upon to frame and set objectives and priorities.

See original here:

Defining what's ethical in artificial intelligence needs input from Africans - The Conversation CA

Global AI (Artificial Intelligence) Market Report 2021: Ethical AI Practices and Advisory will be Incorporated in AI Technology Growth Strategy to…

DUBLIN, Nov. 25, 2021 /PRNewswire/ -- The "Future Growth Potential of the Global AI Market" report has been added to ResearchAndMarkets.com's offering.

Artificial intelligence (AI) is transforming organizations, industries, and the technology landscape. The world is moving to the increased adoption of AI-powered smart applications/systems, and this trend will increase exponentially over the next few years. AI technologies are maturing, and the need to leverage their capabilities is becoming a CXO priority.

As businesses make AI part of their core strategy, the transformation of business functions, measures, and controls to ensure ethical best practices will gain importance. The implementation and the governance of ethical AI practices will become a priority and a board-level concern.

The deployment of AI solutions that are ethical (from a regulatory and a legal standpoint), transparent, and without bias will become essential. As governments and industry bodies across the world articulate AI regulations, AI companies must establish their ethical frameworks until roadmaps are clearly defined.

The operationalization of ethical AI principles is challenging for enterprises, given the large volumes of user-centric data that need to be processed, the breadth of use-cases, the regulatory variations in operating markets, and the diverse stakeholder priorities.

This also opens up opportunities for technology vendors and service providers. To effectively partner with enterprises and monetize these opportunities, ICT providers need to assess potential areas impacting AI ethics and evaluate opportunities across the people-process-technology spectrum.

Forward-thinking technology and service companies, including large ICT providers and start-ups, are working with enterprises and industry stakeholders to leverage potential opportunities. Ethical challenges will continue to be discovered and remediated to create sustained growth in potential advisory services.

As enterprises define goals, values, strategic outcomes, and key performance metrics, the time is right for technology companies to strategically partner with enterprises in the detection and the mitigation of ethical AI concerns.

Key Topics Covered:

1. Strategic Imperatives

2. Growth Environment

3. Growth Opportunity Analysis

4. Growth Opportunity Universe

For more information about this report visit https://www.researchandmarkets.com/r/l7isqw

Media Contact:

Research and Markets Laura Wood, Senior Manager [emailprotected]

For E.S.T Office Hours Call +1-917-300-0470 For U.S./CAN Toll Free Call +1-800-526-8630 For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1904 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Go here to see the original:

Global AI (Artificial Intelligence) Market Report 2021: Ethical AI Practices and Advisory will be Incorporated in AI Technology Growth Strategy to...