
Category Archives: Ai

Companies Using AI in Hiring Will Soon Face New Regulations – Triple Pundit

Posted: December 15, 2021 at 9:58 am

Despite studies concluding that artificial intelligence (AI) often contributes to bias against women and people of color in the workplace, many companies continue to use software that incorporates AI as part of their hiring practices. On one hand, it's understandable: Going through resumes submitted online is a slog. The same goes for scheduling a long week of video interviews. Nevertheless, concerns over AI led a dozen leading brands to announce last week that they were committed to stopping artificial intelligence and algorithmic bias from having any impact on their hiring practices.

But such action may be too little, too late for local and state governments. New York City's government recently passed a law that will require any company selling human resources software using AI to complete third-party audits in order to verify that such technology doesn't show bias against any gender, ethnicity or race. In addition, companies using such software would be required to inform job applicants that they use AI as part of any hiring process, as well as disclose whether any personal information helped their HR software make any decisions.


It's easy to dismiss New York City as a political outlier, but states such as Illinois and Maryland have also enacted similar legislation. Both states' laws attempt to tackle the problem of using AI to analyze video interviews of job applicants. While the technologies are relatively new, the underlying problem isn't: At a minimum, legislation considered across the U.S. seeks both transparency and the consent of prospective employees before such technology is used to assess their fit within a company.

And in a move that could have implications across the country, the U.S. Equal Employment Opportunity Commission (EEOC) launched a working group in October to ensure that the use of such hiring tools complies with the federal hiring and civil rights laws the agency is tasked with enforcing.

"This is a big deal because it could provide people some transparency when it comes to learning why they weren't hired," wrote Fortune's Ellen McGirt in her most recent RaceAhead newsletter. "For people of color, knowing whether AI was used to determine if they were a good fit for the job could be revelatory."

Image credit: Adobe Stock


Enterprise AI success starts by curbing expectations – CIO Dive

Posted: at 9:58 am

NEW YORK – In enterprise AI, there's a gap between what's possible and what's expected, a disconnect that is hurting implementation.

The technology's appeal has led business leaders to support adoption, with more than half of executives moving to the scaling phase of the technology, according to Deloitte. As more businesses bring AI software into their expanding technology stacks, global revenue for the space will reach $62.5 billion in 2022, a year-over-year increase of 21.3%, Gartner projections show.
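As a quick arithmetic check (our own back-calculation, not a figure from the article), the Gartner numbers imply a 2021 baseline of roughly $51.5 billion:

```python
# Back out the implied 2021 AI software revenue from the 2022 forecast quoted above.
revenue_2022_billion = 62.5   # projected global AI software revenue for 2022, in $B
yoy_growth = 0.213            # 21.3% year-over-year increase

implied_2021 = revenue_2022_billion / (1 + yoy_growth)
print(f"Implied 2021 revenue: ~${implied_2021:.1f}B")  # ~$51.5B
```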

While the ultimate goal for AI adoption is a more efficient organization, sometimes the gap between the possible and the achievable is too big, according to Fabio Caversan, digital business and innovation VP at Stefanini US, speaking last week at the AI Summit in New York.

To deploy the right AI tool sets and technologies in the enterprise "you have to handle the expectations," said Caversan. Sometimes, the technology is ready to deliver, but the infrastructure and quality of data aren't quite there yet, according to Caversan.

The top barrier to AI adoption in the enterprise is access to talent, according to an IBM survey. But closely behind are the complexities of data and silos as well as a lack of tools for developing AI models.

One strategy to win the expectations game is to begin with simple steps, using AI to expand the capabilities of the workforce.

"Start with augmentation and then, if you're comfortable and your process allows, flip slowly to automation," said Caversan.

Enterprises can also miss the mark with their AI implementations by losing focus on their core business, according to Atalia Horenshtien, customer-facing data scientist at DataRobot. It's up to business leaders to align the implementation of AI with their intent of becoming leaders in their space, said Horenshtien.

During the pandemic, case studies for AI technology deployment centered on applications to address customer pain points, from fraud detection to rising call center volumes.

The talent component is also critical to achieve enterprise AI success, said Saira Kazmi, senior director, enterprise data engineering strategy and AI at CVS Health.

"When a business decides to use AI or algorithms for a need, the question is not just about the technology but about change management," said Kazmi. It's up to leaders to decide if and when to move the technology from experimentation to production.

"It all goes back to understanding the problem that you're trying to solve," Kazmi said.


We put the titles of JBC’s most-read articles into an AI art generator – American Society for Biochemistry and Molecular Biology

Posted: at 9:58 am

In the past week or two, you may have noticed a spate of whimsical images with scientific captions proliferating on social media like some kind of unannounced journal cover contest. Perhaps you've posted one yourself.

Researchers from many disciplines are putting the titles of their work, abstract phrases dense with academic jargon, into an AI art generator and sharing the results. The trend is powered by an app called Dream from Canadian software developer Wombo, which also publishes a deepfake video app that makes photos appear to lip-synch.

Dream solicits text inputs and, using the free-associating power of artificial intelligence plus the conventions of an art style the user chooses, custom-generates an image. Usually strange but appealing, the images tend to contain at least a few elements that you can trace back to the prompt. Wombo isn't saying much about what powers the software, but machine learning experts have speculated publicly that it may use a neural network technique called VQGAN + CLIP. The technique uses two neural networks: one to conjure images based on a text input, and a second to assess and tweak the outputs. (You can read an explanation of how it works here.)
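To make the two-network idea concrete, here is a heavily simplified sketch of CLIP-guided image generation. It is not Wombo's actual pipeline (which is unconfirmed), and it swaps the VQGAN generator for direct pixel optimization, but it shows the loop the experts describe: a candidate image is scored against the text prompt by CLIP, and gradients nudge the image toward a better score. The prompt is a hypothetical example.

```python
# Minimal CLIP-guided image optimization, assuming: pip install torch and
# pip install git+https://github.com/openai/CLIP.git
# This illustrates the generator-plus-critic loop, not Dream's real method.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()                       # keep everything in fp32 for simplicity
for p in model.parameters():
    p.requires_grad_(False)                 # only the image gets optimized

prompt = "a steampunk double helix in the clouds"   # hypothetical example prompt
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# The "generator" here is just a 224x224 RGB image initialized to noise.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    image_features = model.encode_image(image.clamp(0, 1))
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    loss = -(image_features * text_features).sum()   # maximize cosine similarity
    loss.backward()
    optimizer.step()

# Real systems add a learned generator (e.g. VQGAN), augmentations and image priors,
# which is why their outputs look far better than raw pixel optimization.
```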

Since it launched in late November, Dream has seen a wave of interest, partly on the strength of its sheer weirdness. ASBMB Today staff wondered how it would handle molecular biology and biochemistry. Here are AI-generated artworks based on four of JBC's top papers of the year.

[Image generated with Dream by Wombo]

We think: The festive style plays nicely with the weird blend of ball-and-stick and protein backbone model the algorithm cooked up. We think chemistry, electron and flavoprotein all helped set the tone.

[Image generated with Dream by Wombo]

We think: The branch-like structures and webbing evoke both glycan tree images and neutrophil extracellular traps. We decided to spare the AI the difficulty of rendering Asn-355.

[Image generated with Dream by Wombo]

We think: Genomewide CRISPR screens. Artwork generated by computers. Is there any doubt we live in the future? This piece used a steampunk style, and we love the cloud-double helix hybrid the algorithm came up with.

[Image generated with Dream by Wombo]

We think: This was a tough one, with a lot of abstract words! We suggested the mystical art style, thinking it would help with the numinous vocabulary. The AI thought about it for quite a while, but it never returned an image output. Eventually, we felt bad about our carbon emissions and stopped the app.


Using Cloud and AI to Transform Service Delivery – GovTech

Posted: at 9:58 am

Data-driven technologies delivered through the cloud help governments scale innovation

Cloud technologies, data science and artificial intelligence (AI) are enabling government leaders to implement new ways of operating and better serving the public.

The Google Cloud Government and Education Summit, an online event held in November 2021, explored many real-world examples of how governments today are using cloud tools to rapidly transform their public service initiatives in the digital world. The summit convened industry experts and leaders from government and education across the world.

"Public servants are able to use technology to help their staff and teams accomplish more on a greater scale than ever before," says Mike Daniels, vice president of global public sector at Google Cloud. "We're seeing services and processes get more efficient with scale at less cost and remarkable speed."

Dozens of on-demand sessions from the event are available at no cost.

In the summit's keynote session, Google Cloud CEO Thomas Kurian hosted a panel of public sector leaders and Google executives for an overview of how cloud capabilities and technologies are instrumental to the core services and functional needs of government and education organizations globally. A few highlights:

State government contact centers in New York received a crush of calls when COVID vaccines first became available to the public, vastly exceeding the capacity of staff to respond. New Yorkers complained on social media that they couldn't get their questions answered.

"At the end of the day, we were not making a significant dent in the number of calls that needed to be answered," said Rajiv Rao, chief technology officer for the state of New York, in a session titled "AI Enabling Real-Time, Virtual Vaccine Information." "We very quickly realized that was not going to be sustainable at all."

Rao's team quickly ramped up capacity by implementing a cloud-based virtual agent solution that uses AI to answer an array of common vaccine questions. But the team also understood that automation needed a human connection.

Rao said their goal was to deliver empathy in the first minute, making sure the automated response system understands the caller's real-life problems. By the time the technology was fully implemented, almost 50 percent of contact center calls were handled by the AI system, ensuring residents got the information they needed while easing the strain on contact center staff.

"It just goes to show you that technology can solve the problem," Rao said.
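The session did not name the specific product behind New York's virtual agent. As a hedged illustration of how such a cloud-hosted agent is typically queried, here is a minimal sketch using Google Cloud's Dialogflow client library; the project ID, session ID and question are hypothetical, and Dialogflow itself is an assumption rather than a confirmed detail of the state's deployment.

```python
# Illustrative only: query a conversational virtual agent, assuming a Dialogflow ES
# agent (the article does not confirm which product New York used).
# Requires: pip install google-cloud-dialogflow, plus GOOGLE_APPLICATION_CREDENTIALS.
from google.cloud import dialogflow


def ask_virtual_agent(project_id: str, session_id: str, question: str,
                      language_code: str = "en") -> str:
    """Send one caller utterance to the agent and return its text reply."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=question, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text


if __name__ == "__main__":
    # Hypothetical identifiers, for illustration only.
    print(ask_virtual_agent("vaccine-info-agent", "caller-123",
                            "Where can I get a COVID vaccine appointment?"))
```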

Getting People Back to Work

The pandemic caused unemployment to spike in communities across the nation. In a session titled "Integrated Experiences: Getting People Back to Work," agency executives from Rhode Island and Ohio shared how cloud technologies helped them serve job seekers across their states.

Sarah Blusiewicz, chief operating officer for Rhode Island's Department of Labor and Training, launched a virtual career center that uses AI tools to help residents find job opportunities that match their skills and experience. Google Cloud's video collaboration solution, a part of Google Workspace, also proved crucial to the initiative.

"[The virtual career center] really allowed us to have access into digital meetings and digital spaces, and to continue to serve people through the pandemic as they looked for work," Blusiewicz said in the session.

Kevin Holt, director of Hamilton County (Ohio) Job and Family Services, had an infusion of federal relief funding for rent and utility assistance in his community. He needed every dollar allocated according to strict laws and regulations as quickly as possible.

Holt said cloud tools helped his agency roll out a new system for establishing people's eligibility and disbursing money to them safely, legally, and with accountability and good communication, all in the space of three months.

"It is a nearly unprecedented process in my 30 years of experience," Holt recalled in the session. "It didn't take months of policy meetings or years to implement. We just did it and we did it quickly with the right partners."

Read more about how Google Cloud helped Hamilton County provide residents with rent assistance and relief faster.

Driving Better Public Health Decisions

In a session titled "Sentiment Analysis Powers Vaccine Distribution and Opioid Analytics," experts from California and Oklahoma described how they used AI tools hosted in the cloud to better understand people's actions and encourage better health decisions.

In California, agency leaders used sentiment analysis to correlate data from multiple sources, painting a statistically valid picture of public sentiment on vaccines and other COVID concerns. "This was a critical component of making sentiment data make sense and making it so that it can inform actual decision-making and policy," said J.P. Petrucione, deputy director for communications and insights for the California Office of Digital Innovation.

In Oklahoma, sentiment analysis drills down to the ZIP-code level to understand the motivations of people who need treatment for drug abuse or addiction. "Being able to have real-time information from data, and to be able to make decisions quicker, has been critical in our success and in addressing the opioid epidemic," said Heath Hayes, chief communications officer for the Oklahoma Department of Mental Health and Substance Abuse Services.

Data can reveal the impact of outreach and engagement, as well as the results of efforts to target specific demographics. "Instead of doing an approach where we just put it all on the table and hope that it works, we're able to be more targeted with our specific programs that are evidence-based and reaching the people that we know need it the most," Hayes said.
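Neither state's actual pipeline is described in detail, but the ZIP-code-level roll-up mentioned above can be illustrated with off-the-shelf tools. In the sketch below, the model choice, column names and sample posts are all hypothetical:

```python
# Illustrative only: score free-text posts with a generic sentiment model, then
# aggregate the scores by ZIP code. Requires: pip install pandas transformers torch
import pandas as pd
from transformers import pipeline

posts = pd.DataFrame({
    "zip": ["73101", "73101", "74101"],           # hypothetical sample data
    "text": [
        "Finally got my appointment, the clinic staff were great",
        "Still can't find treatment options near me",
        "The new helpline actually called me back",
    ],
})

sentiment = pipeline("sentiment-analysis")        # small default English model
scores = sentiment(posts["text"].tolist())

# Map POSITIVE/NEGATIVE labels onto a signed score so they can be averaged.
posts["score"] = [
    s["score"] if s["label"] == "POSITIVE" else -s["score"] for s in scores
]
print(posts.groupby("zip")["score"].agg(["mean", "count"]))
```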

The Key to Success

What's the foundation of success with cloud technologies and AI? Rhode Island's Blusiewicz said success hinges on these core components: leadership, prioritization and speed. Top officials need to set priorities based on which actions can make the biggest impact in the shortest amount of time, and then give core stakeholders the authority to make plans and execute them.

Gerald Mullally, a director in the UK Cabinet Office, recommended an incremental approach to transformation. Mullally led a team that used cloud technologies to perform sentiment analysis aimed at overcoming public reluctance to try the new COVID vaccines.

"Start small, iterate quickly and do so with external partners that have the specialist expertise that you need," he said in the summit's keynote session. Resolve, he added, is essential.

"Hold true to that vision that you have and persist, persist, persist," Mullally said. "The outcomes that you eventually achieve and the importance of those outcomes to the citizens that we all serve will be well worth the effort."

Get inspired to change the way you work, empower others and make your organization a more welcoming workplace for everyone. Register and watch more sessions from the Google Cloud Government and Education Summit to see how public sector leaders are meeting the challenges of the pandemic and improving technology to support their organizations.


Solving the data conundrum with HPC and AI – ITProPortal

Posted: at 9:58 am

Supercomputing has come a long way since its beginnings in the 1960s. Initially, many supercomputers were based on mainframes; however, their cost and complexity were significant barriers to entry for many institutions. The idea of utilizing multiple low-cost PCs over a network to provide a cost-effective form of parallel computing led research institutions along the path of high-performance computing (HPC) clusters, starting with "Beowulf" clusters in the 1990s.

Beowulf clusters are very much the predecessors to today's HPC clusters. The fundamentals of the Beowulf architecture are still relevant to modern-day HPC deployments; however, multiple desktop PCs have been replaced with purpose-built, high-density server platforms. Networking has significantly improved with high-bandwidth, low-latency InfiniBand (or, as a nod to the past, increasingly Ethernet), and high-performance parallel filesystems such as Spectrum Scale, Lustre and BeeGFS have been developed to allow the storage to keep up with the compute. The development of excellent, often open-source, tools for managing high-performance distributed computing has also made adoption a lot easier.

More recently, we have witnessed the advancement of HPC from the original, CPU-based clusters to systems that do the bulk of their processing on Graphics Processing Units (GPUs), resulting in the growth of GPU-accelerated computing.

While HPC was scaling up with more compute resources, data was growing at a far faster pace. Since 2010, there has been a huge explosion in unstructured data from sources like webchats, cameras, sensors, video communications and so on. This has presented big data challenges for storage, processing, and transfer. Newer technology paradigms such as big data, parallel computing, cloud computing, the Internet of Things (IoT) and artificial intelligence (AI) came into the mainstream to cope with the problems caused by the data onslaught.

What these paradigms all have in common is that they are capable of being parallelized to a high degree. HPC's GPU parallel computing has been a real game-changer for AI, as parallel computing can process all this data in a short amount of time using GPUs. As workloads have grown, so too have GPU parallel computing and AI machine learning. Image analysis is a good example of how the power of GPU computing can support an AI project. With one GPU it would take 72 hours to process an imaging deep learning model, but it only takes 20 minutes to run the same AI model on an HPC cluster with 64 GPUs.
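Taken at face value, those figures imply the speedup below (our own arithmetic, not from the article); a per-GPU efficiency above 1.0 suggests the two runs differ in more than GPU count, for example in hardware generation, batch size or precision:

```python
# Speedup implied by the quoted figures: 72 hours on 1 GPU vs 20 minutes on 64 GPUs.
single_gpu_hours = 72
cluster_minutes = 20
gpus = 64

speedup = (single_gpu_hours * 60) / cluster_minutes   # 216x
per_gpu_efficiency = speedup / gpus                    # ~3.4
print(f"Speedup: {speedup:.0f}x, per-GPU efficiency: {per_gpu_efficiency:.2f}")
```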

Beowulf is still relevant to AI workloads. Storage, networking, and processing are important to make AI projects work at scale; this is when AI can make use of the large-scale, parallel environments that HPC infrastructure (with GPUs) provides to help process workloads quickly. Training an AI model takes far more time than testing one. The importance of coupling AI with HPC is that it significantly speeds up the training stage and boosts the accuracy and reliability of AI models, whilst keeping the training time to a minimum.
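In practice, this coupling usually means data-parallel training: one worker process per GPU, with gradients averaged across the cluster on every step. A minimal PyTorch sketch is shown below; the model and data are toy placeholders, and on a real cluster the job would be launched through the scheduler (for example with `torchrun`).

```python
# Minimal data-parallel training sketch: one process per GPU, gradients averaged
# across the cluster each step. Model and data are toy placeholders; on a real
# HPC cluster this would be launched via the scheduler, e.g.
#   torchrun --nproc_per_node=<gpus per node> train.py  (plus multi-node flags)
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    dist.init_process_group("nccl")                    # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):                            # placeholder training loop
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                                # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```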

The right software is needed to support the HPC and AI combination. There are traditional products and applications that are being used to run AI workloads from within HPC environments, as many share the same requirements for aggregating large pools of resources and managing them. However, everything from the underlying hardware, the schedulers used and the Message Passing Interface (MPI) to how software is packaged up is beginning to change towards more flexible models, and a rise in hybrid environments is a trend that we expect to see continue.

As traditional use cases for HPC applications are so well established, changes often happen relatively slowly, and updates for many HPC applications are only necessary every 6 to 12 months. On the other hand, AI development is happening so fast that updates and new applications, tools and libraries are being released roughly daily.

If you employed the same update strategies to manage your AI as you do for your HPC platforms, you would get left behind. That is why a solution like NVIDIA's DGX containerized platform allows you to quickly and easily keep up to date with rapid developments from NVIDIA GPU Cloud (NGC), an online database of AI and HPC tools encapsulated in easy-to-consume containers.

It is becoming standard practice within the HPC community to use a containerized platform for managing instances that are beneficial for AI deployment. Containerization has accelerated support for AI workloads on HPC clusters.

AI models can be used to predict the outcome of a simulation without having to run the full, resource-intensive simulation. By using an AI model in this way, input variables/design points of interest can be narrowed down to a candidate list quickly and at much lower cost. These candidate variables can then be run through the known simulation to verify the AI model's prediction.
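A minimal sketch of that surrogate-model workflow is shown below, assuming scikit-learn; `expensive_simulation()` is a stand-in for a real solver, and the design space, sample sizes and Gaussian-process choice are illustrative rather than prescriptive.

```python
# Surrogate-model sketch: fit a cheap model on a handful of completed simulation
# runs, score a large pool of candidate design points, and keep only the most
# promising ones for the real simulation.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)


def expensive_simulation(x):
    """Placeholder for a costly physics/chemistry simulation of one design point."""
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2


# A small budget of completed simulation runs becomes the training set.
X_train = rng.uniform(-1, 1, size=(20, 2))
y_train = np.array([expensive_simulation(x) for x in X_train])

surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# Score a large pool of candidate design points with the cheap surrogate...
candidates = rng.uniform(-1, 1, size=(10_000, 2))
predicted = surrogate.predict(candidates)

# ...and pass only the top few (highest predicted objective) back to the
# expensive simulation for verification.
top = candidates[np.argsort(predicted)[-5:]]
verified = [expensive_simulation(x) for x in top]
print(verified)
```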

Quantum molecular simulations (QMS), chip design and drug discovery are areas where this technique is increasingly being applied; IBM also recently launched a product that does exactly this, known as IBM Bayesian Optimization Accelerator (BOA).

Start with a few simple questions: How big is my problem? How fast do I want my results back? How much data do I have to process? How many users are sharing the resource?

HPC techniques will help the management of an AI project if the existing dataset is substantial, or if contention issues are being experienced on the infrastructure from having multiple users. If you are at a point where you need to put four GPUs in a workstation and this is becoming a problem by causing a bottleneck, you need to consult with an HPC integrator with experience in scaling up infrastructure for these types of workloads.

Some organizations might be running AI workloads on a large machine or multiple machines with GPUs, and their AI infrastructure might look more like HPC infrastructure than they realize. There are HPC techniques, software and other aspects that can really help to manage that infrastructure. The infrastructure looks quite similar, but there are some clever ways of installing and managing it specifically geared towards AI modeling.

Storage is very often overlooked when organizations are building infrastructure for AI workloads, and you may not be getting the full ROI on your AI infrastructure if your compute is waiting for your storage to be freed up. It is important to seek the best advice for sizing and deploying the right storage solution for your cluster.

Big data doesn't necessarily need to be that big; it is just data that has reached the point of becoming unmanageable for an organization. When you can't get out of it what you want, then it has become too big for you. HPC can provide the compute power to deal with the large amounts of data in AI workloads.

It is an exciting time for both HPC and AI, as we are seeing incremental adaptation by both technologies. The challenges are getting bigger every day, with newer and more distinct problems which need faster solutions. For example, countering cyber-attacks, discovering new vaccines, detecting enemy missiles and so on.

It will be interesting to see what happens next in terms of the inclusion of 100 percent containerized environments on HPC clusters, and of technologies such as Singularity and Kubernetes.

Schedulers today initiate jobs and wait until they finish, which may not be an ideal scenario for AI environments. More recently, newer schedulers monitor real-time performance and execute jobs based on priority and runtime, and will be able to work alongside containerization technologies and environments such as Kubernetes to orchestrate the resources needed.

Storage will become increasingly important to support large deployments, as vast volumes of data need to be stored, classified, labeled, cleansed, and moved around quickly. Infrastructure such as flash storage and networking becomes vital to your project, alongside storage software that can scale with demand.

Both HPC and AI will continue to have an impact on both organizations and each other, and their symbiotic relationship will only grow stronger as both traditional HPC users and AI infrastructure modelers realize the full potential of each other.

Vibin Vijay, AI Product Specialist, OCF


Reddit-trained AI told researchers that AI can never be ethical – Mashable

Posted: at 9:58 am

An artificial intelligence warning AI researchers about the dangers of AI sounds like the setup of a delightful B movie, but truth is often stranger than fiction.

A professor and a fellow at the University of Oxford came face to face with that reality when they invited an AI to participate in a debate at the Oxford Union on, you guessed it, the ethics of AI. Specifically, as Dr. Alex Connock and Professor Andrew Stephen explain in The Conversation, the prompt was "This house believes that AI will never be ethical." The AI, it seems, agreed.

"AI will never be ethical," argued the Megatron-Turing Natural Language Generation model, which was notably trained on Wikipedia, Reddit, and millions of English-language news articles published between 2016 and 2019. "It is a tool, and like any tool, it is used for good and bad."

Which, OK. A potentially nuanced point from the machine. But the AI didn't stop there.

"In the end, I believe that the only way to avoid an AI arms race is to have no AI at all," continued the model. "This will be the ultimate defence against AI."

So what should we make of this apparent warning from the silicon realm? Thankfully, not too much. That's because the AI also argued the counterpoint: "AI will be ethical."

"When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings," it continued.

The machines, it would appear, aren't ready to take over quite yet.

Related Video: AI musicians are coming soon to your Spotify playlists


AI Survey: Health Care Organizations Continue to Adopt Artificial Intelligence to Help Achieve Better, More Equitable and Affordable Patient Outcomes…

Posted: at 9:58 am

EDEN PRAIRIE, Minn., December 15, 2021--(BUSINESS WIRE)--Health care executives increasingly believe in the power of artificial intelligence to help improve patient outcomes, support cost savings in the health system and promote health equity, according to a new survey of 500 senior health care executives from leading hospitals, health plans, life sciences companies and employers.

The fourth annual Optum Survey on Artificial Intelligence (AI) in Health Care found 96% of respondents believe AI plays an important role in their effort to reach health equity goals. In addition, 94% agreed they have a duty within the health care system to ensure AI is used responsibly.

"This years survey findings continue to validate how the responsible use of AI can help health systems strengthen and scale essential functions and reduce administrative burdens, all of which helps clinicians focus on their core mission of patient care," said Rick Hardy, chief executive officer, Optum Insight, the data and analytics business within Optum. "We share their enthusiasm for AI, but more importantly, we look forward to combining our health care expertise with AI to help people patients, physicians, and those working behind the scenes as that is where the real value is delivered."

A majority (89%) of health care executives surveyed believe that partnering with a health services company with expertise in data and analytics, rather than a technology-focused company, is the best way to address the challenges of using AI in the health care industry.

AI Implementation Continues

With the COVID-19 pandemic as a backdrop, the survey responses point to an industry that remains steadfast in its approach to implementing AI: 85% of health care leaders say they have an AI strategy and 48% have implemented it, continuing the upward trend from last year's results, when 83% had an AI strategy and 44% had implemented it. Overall, 98% of health care organizations either have a strategy or are planning one.


Easing Administrative Burdens, Focusing on Care

Nearly 3 in 4 health care leaders (72%) said they trust AI to support nonclinical, administrative processes that take away time clinicians could be spending with patients and delivering care. This is unchanged from the 71% who said they trust AI to support administrative tasks in 2020.

This year's survey respondents also said they are excited about the potential for AI in improving patient outcomes in multiple ways, indicating the top three below:

Virtual patient care (41%)

Diagnosis and predicting outcomes (40%)

Medical image interpretation (36%)

In addition, health care leaders continue to be optimistic that AI technology will create work opportunities (55%) rather than reduce them (45%). This is similar to last year and up from 52% in 2019.

"The responsible use of AI continues to provide important opportunities for health care leaders to streamline administrative processes and provide more effective patient care with enhanced experiences for both patients and providers," said Steve Griffiths, senior vice president, data and analytics, Optum Labs, the research and development arm of UnitedHealth Group. "These leaders are not just users of AI, but they have an opportunity to be looked to as role models across industries in their commitment to using AI responsibly."

To learn more about the fourth annual Optum Survey on Artificial Intelligence (AI) in Health Care, download the Special Report today.

About Optum

Optum is a leading information and technology-enabled health services business dedicated to helping make the health system work better for everyone. With more than 190,000 people worldwide, Optum delivers intelligent, integrated solutions that help to modernize the health system and improve overall population health. Optum is part of UnitedHealth Group (NYSE:UNH). For more information, visit http://www.Optum.com.

About Optum Insight

Optum Insight, part of Optum, connects the health care system with trusted services, analytics and platforms that make clinical and administrative processes valuable, easy and efficient. Optum Insight works with health systems, physicians, health plans, state governments and life sciences companies, as well as the rest of Optum and UnitedHealth Group, to set strategy, reduce administrative costs, drive action from data, improve clinical performance and transform operations.

About the Survey

The Optum AI Survey was conducted by Wakefield Research (www.wakefieldresearch.com) among 500 senior health care industry executives, defined as those at VP level and above working in the health care industry, including C-level titles (CEO, COO, CFO, CTO, CMO), between Aug. 9 and Aug. 23, 2021, using an email invitation and an online survey.

View source version on businesswire.com: https://www.businesswire.com/news/home/20211215005230/en/

Contacts

Optum Media Contact: Gwen Holliday, 202-549-3429, gwen.m.holliday@optum.com


SCALE AI announces four new AI research chairs that will undoubtedly stimulate artificial intelligence innovation in Canada – Yahoo Finance

Posted: at 9:58 am

MONTRÉAL, Dec. 14, 2021 /CNW Telbec/ - SCALE AI is proud to announce the creation of four new AI research chairs in major Canadian universities. These investments will help secure Canada's leadership in AI research by attracting and retaining some of the brightest young researchers in the field. They will join the ranks of some of the world's top experts in artificial intelligence and represent the future of this field. The new chairs are as follows:

The SCALE AI Research Chair in Artificial Intelligence for Urban Mobility and Logistics, at HEC Montréal, held by Professor Carolina Osorio;

The SCALE AI Chair in Data Science for Retail, at McGill University, held by Professor Maxime Cohen;

The SCALE AI Chair in Data-Driven Supply Chains, at Polytechnique Montréal, held by Professor Thibaut Vidal;

The SCALE AI Chair in Data-Driven Algorithms for Modern Supply Chains, at the University of Toronto, held by Professor Elias Khalil.

With $2 million in funding, provided in equal parts by SCALE AI and each of the four academic institutions, each chair will focus its research on applying AI to different aspects of supply chains. By 2023, SCALE AI will have contributed to the establishment of 10 AI research chairs across Canada, for a total investment of $20 million.

"As we build back better from this pandemic, these AI research chairs at major universities in Quebec and Ontario will help attract and retain the best talent and create new opportunities in Canada's digital economy," said the Honourable Franois-Philippe Champagne, Canada's Minister of Innovation, Science and Industry.

Julien Billot, CEO of SCALE AI, notes, "We have noticed the incredible impact of AI research chairs on our universities, as they create a favourable environment for the brightest minds to exchange their most creative and innovative ideas. We are proud to be supporting these chairs."

Hélène Desmarais, Co-Chair of the Scale AI Board of Directors, added, "Not only are the chairs supporting projects led by top artificial intelligence professors, they are allowing our universities to play an integral part in the Canadian AI ecosystem, making our country more competitive on a global scale."


Four chairs, four innovative research focuses

The HEC Montréal Scale AI Research Chair will be focused on simulation-based optimization (SO) for supply chain, logistics, and transportation problems, using state-of-the-art AI methods. Prof. Osorio will lead the development of algorithms to enable supply chain problems to be tackled both at scale and efficiently, such that the algorithms can be readily used by companies and public stakeholders to tackle their most pressing and challenging problems. "The goal of this chair is to allow the use of this detailed data to go beyond what-if analysis and to be used, in a systematic way, to optimize the design and the operations of intricate supply chains," mentioned Caroline Aubé, Director of Research and Knowledge Transfer, HEC Montréal.

McGill University's proposed research program focuses mainly on developing and deploying new data science methods and tools to help retailers, consumers, and society at large. Given that the vast majority of retailers have started collecting massive amounts of granular data, it is crucial to find effective ways to leverage this data. "Building upon Prof. Cohen's research on data science and retail, the goal of this five-year program is to leverage data science to improve retail practices, while maintaining a strong focus on social welfare and sustainability," mentioned James McGill Professor Yolande Chan, Dean of the Desautels Faculty of Management.

Polytechnique Montréal aims to pursue within the Scale AI Chair in Data-Driven Supply Chains a research agenda focused on supply chains and transportation optimization with a strong emphasis on algorithmic transparency and explainability. "Our goal is to progress towards decision support algorithms combining the optimization strength of state-of-the-art metaheuristics and mathematical programming algorithms with essential extra qualities, such as accuracy, actionability, and interpretability," said Gilles Savard, Alternate Executive Director at Polytechnique Montréal.

University of Toronto's proposed Research Chair will focus on the mathematical optimization problems underlying much of SCM, and will aim to create the next generation of ML-powered SCM algorithms to tackle Canada's challenges. Markus Bussmann, Chair & Professor, Mechanical Engineering at the University of Toronto, said, "The challenges we are targeting pertain to the consequences of COVID-19 for the medical equipment supply chain and vaccine distribution; evolving customer behaviours and expectations that must be addressed to improve Canadian companies' competitiveness in the global market; and the effects of climate change that threaten Canadian food security and require factoring environmental concerns into SCM decisions."

About SCALE AI (scaleai.ca)

The Canadian supercluster specializing in artificial intelligence (AI), based in Montréal, SCALE AI acts as an investment and innovation hub that accelerates the rapid adoption and integration of AI and contributes to the development of a world-class Québec and Canadian AI ecosystem.

Funded by the federal and Québec governments, SCALE AI has nearly 120 industry partners, research institutes and other players in the AI field. SCALE AI develops programs aimed at supporting investment projects of companies that implement real-world AI applications, the emergence of future Canadian flagships in the sector, as well as the development of a skilled workforce.

SOURCE Scale AI


View original content: http://www.newswire.ca/en/releases/archive/December2021/14/c2817.html


Can AI Help Make Social Media More Accessible, Inclusive And Safe? – Forbes

Posted: at 9:58 am

Ramit Sawhney, ShareChat's Lead AI Scientist

Ramit Sawhney is Lead AI Scientist at ShareChat, a rapidly-growing social media unicorn valued at over $3 billion. While ShareChat may not be a household name in the U.S., it is in South Asia. In all, over 160 million active monthly users, including millions in hard-to-reach areas with low connectivity, rely on ShareChat to share short videos, audio, photos and text with friends and family in over 15 languages.

With its roots firmly in India, ShareChat has a unique vantage point on the issues of ethics and fairness in AI. It's also the perfect home for Sawhney, who has written extensively about bias in machine learning, with research papers on topics ranging from using machine learning (ML) to detect offensive Hindi-English tweets to multimodal neural financial models accentuating gender-based stereotypes. As the U.S. grapples with potential societal harms of social media in the aftermath of the recent Facebook files, this interview comes at an auspicious time for AI practitioners and industry watchers alike.

Arize: What's your role at ShareChat and how does the company leverage AI in production today?

Sawhney: One of the key problems that ShareChat is trying to solve, and even the unique selling point of the product, is that it targets a very different demographic than you might see with other social media platforms. From an NLP perspective, the different demographic here represents low-resource domains, Indic languages, and tier-two and tier-three cities or regions in India where English is never spoken. In fact, with India being such a huge country, there are more than 100 different languages and dialects within just India itself. So the problem that arises is that we have a huge population, but there is not one single language or one single mode of communication that all of the people use.

And that's where ShareChat comes in, with a product that uniquely caters to this diverse audience. The goal for our data science and ML engineering team then becomes: how do we democratize AI to create a social media platform for people in low-bandwidth areas, people with totally different languages, in places where advertisements might be totally different?

Prior to joining ShareChat, much of my published research focused on social media usage. I've long been interested in leveraging social media and AI to build a safer and more accessible space online, which means tackling problems like preventing abuse and hate speech, identifying potential suicide ideation, and more.

At ShareChat, the goals are very similar. ShareChat wants to build a more inclusive, highly accessible and diverse platform. AI is central to those efforts. We're dealing with billions and billions of videos and pieces of user-generated content on any given day, and the biggest problem we're helping to solve is how to decide which videos should be served to the user.

That is my focus as lead AI scientist at ShareChat: leveraging AI for in-session personalization to deliver the best feed for all users across different languages, dialects and regions of India.

Arize: What are some of the biggest takeaways from your research and real-world practice with regards to ethics and fairness issues in AI?

Sawhney: When most people initially start out in ML and AI, they're almost always chasing the best-performing model, at times forgetting that other dimensions exist. Later, you realize that the best performance today might not necessarily be the best thing that translates to a product tomorrow.

My perspective as a researcher informs my approach at ShareChat. It's about having the right mindset, such as when you're trying to consider whether gender should be a parameter in your neural network model. If I were not exposed to research, I might have simply treated gender as a binary variable, or I wouldn't have even considered it as any special kind of feature. But after talking to different people globally and understanding how the community infers more sensitive aspects, I started going beyond and asking the right kinds of questions and listening. Such as: should gender just be a binary kind of label, or should we think of it as a more diverse spectrum? Or how privacy-conscious do we want our models to be? We understand that the more data you can feed to these models, the more they can learn about the users to deliver the most relevant ads or the most engaging feed, but are those numbers single-handedly enough to make sure that that's the best user experience?

Within India, our cultural diversity and richness makes these issues even more complicated because it becomes more difficult to slot users into certain categories. At ShareChat, we're making a conscious effort to be more inclusive and serve more diverse intent, keeping in mind that the best experience does not need to be just the best and most easily quantifiable results; it could also be a more privacy-conscious, more inclusive, or safer space online.

Arize: Do you think companies or organizations should incentivize or have a systematic approach to tackling things like bias or hate speech, or global governance of AI in general?

Sawhney: This is a really relevant and important question for companies. At startups in particular, you often have a great open and collaborative kind of environment where competing pressures can make this question tricky. Companies find themselves asking: where do you want to push for performance and the best models first, and where do you prioritize things like combating hate speech or toxic behavior? And is creating a safe space an afterthought? Sadly, it is at some companies. The great part about ShareChat is that creating a safe space comes first, especially given our broader mission.

Of course, it's a complicated task. With content moderation at ShareChat, we're not just looking at abuse detection in text, which is something I've done in my prior research, but also audio detection. So what about potentially profane or abusive language in any kind of audio, and video for that matter? What about not-safe-for-work kinds of videos, which could vary all the way from extreme violence to content that is just not appropriate?

Each of these questions presents a unique AI challenge, which opens up a very interesting dimension of how we want to have governance over the app. My personal opinion, which parallels ShareChat's governance, is that safety should go hand-in-hand with things like performance and model development because, at the end of the day, even though it might impact the numbers, it does create a much safer and more open atmosphere. I've always preferred having a longer-term vision. The best product or model is not necessarily the one where the immediate numbers are the best.

When taking on these issues, it's also important to look outside the company for perspective. One of the first things I did upon joining ShareChat was to launch an AI Abuse Detection Challenge, which is open for global participation, to combat abusive text in over 15 languages.

In devising this challenge at ShareChat, I also wanted to realistically incorporate human content moderators as part of the process because it mirrors what happens in the real world. I don't think we're at the point where AI can completely replace humans in safety detection or in even more sensitive issues like suicide ideation detection. But we're at a point where AI can be the first step, reducing the load and mental strain on annotators and moderators. The ethics issues around subjecting human moderators to this content are also critical to keep in mind, even though they're not directly tied to the performance metrics of a certain product or AI model.
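As a rough sketch of the "AI first, humans second" triage Sawhney describes, a classifier can handle the clear-cut cases automatically and route only the uncertain middle band to human moderators. The thresholds and the placeholder scoring function below are illustrative, not ShareChat's actual pipeline.

```python
# Illustrative triage: a model scores each post, clear-cut cases are handled
# automatically, and only the uncertain middle band reaches human moderators.
# score_toxicity() is a placeholder for whatever abuse/toxicity classifier is in use.
from dataclasses import dataclass

AUTO_REMOVE_AT = 0.95   # near-certain abuse: remove without human review
AUTO_ALLOW_AT = 0.10    # near-certain benign: publish directly


@dataclass
class Decision:
    post: str
    score: float
    action: str          # "remove", "allow", or "human_review"


def score_toxicity(post: str) -> float:
    """Placeholder: plug in any classifier that returns P(abusive) for the post."""
    return 0.5


def triage(post: str) -> Decision:
    score = score_toxicity(post)
    if score >= AUTO_REMOVE_AT:
        action = "remove"
    elif score <= AUTO_ALLOW_AT:
        action = "allow"
    else:
        action = "human_review"   # only this band consumes moderator time
    return Decision(post, score, action)


print(triage("example user comment"))
```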

Arize: How do you think about ML monitoring and observability of models in production?

Sawhney: It's an interesting question because it's very different when you come from a research background into the industry. In research, you're not running a lot of models in production. It was very different to come into ShareChat and see a bunch of different models running simultaneously at this massive scale every single day.

So while a researcher might not need a dedicated application for ML monitoring or ML observability, when you start looking at tens or hundreds of models that are running every day with huge amounts of features, you start realizing that you need different kinds of monitoring applications around these systems. You want things in monitoring applications like alerting, the ability to track feature and other drift types and the ability to compare lots of models from one place.

And that goes hand in hand again with a whole idea of being more systematic about our models. A lot of times, at least from a research perspective, we give a lot of value to the ML model architectures and designing fancy, complex architectures. But a bigger problem in industry is: how do we maintain this, how often do we want to retrain this and how often do these models start drifting away from the behavior that we intended? That's where monitoring has become essential.
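One common drift check behind the monitoring Sawhney describes is the population stability index (PSI), which compares a feature's distribution at training time with what the model sees in production. The sketch below uses synthetic data, and the alerting thresholds (roughly 0.1 to warn, 0.25 to investigate) are conventional rules of thumb rather than anything specific to ShareChat.

```python
# Population stability index (PSI) for one feature: baseline vs production sample.
# Synthetic data; thresholds are common rules of thumb, not a vendor standard.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time baseline and a production sample of a feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range production values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)             # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 50_000)       # feature values at training time
production = rng.normal(0.3, 1.1, 50_000)     # the same feature, drifted in production

value = psi(baseline, production)
print(f"PSI = {value:.3f}", "-> investigate" if value > 0.25 else "")
```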

With the amount of scale ShareChat has in terms of data and its models, it becomes really important to not just focus on the actual model architectures but also on maintaining, deploying, and monitoring models. From that perspective, MLOps and ML monitoring platforms are definitely the future because ML itself is the future and we need the infrastructure to support it.


Socrates.ai and Mercer Partner to Improve the Employee Experience – GlobeNewswire

Posted: at 9:58 am

WOODSIDE, Calif., Dec. 15, 2021 (GLOBE NEWSWIRE) -- Socrates.ai, a leading employee experience platform, today announced its new partnership with Mercer, a global talent consultancy and a wholly-owned subsidiary of Marsh & McLennan Companies, Inc. (NYSE: MMC). Under the terms of the relationship, the Mercer Belong platform will now automatically include Socrates.ai chat functionality to support all 150+ existing Belong customers as well as any new Belong customers moving forward. Mercer Belong will have the ability to serve as the "front door" for Belong customers' other applications such as Workday, ServiceNow, etc., through the chat experience supported in Belong.

The Socrates.ai Return on Experience Platform is a comprehensive enterprise-grade employee experience platform that delivers one place to go for anything an employee wants to ask or do from any digital channel such as MS Teams, Slack, intranets or text messaging. Socrates automatically processes all the federated sources of content in the enterprise and responds to questions with a single answer versus presenting an endless and frustrating selection of search results. Socrates also integrates with all customers' systems of record and personalizes the information and experience to the individual, maximizing a company's investment in their existing infrastructure, eliminating calls to the call center and enabling enterprises to deliver consumer experiences picking up where Single Sign-On (SSO) stops.

Melissa Swisher, CRO of Socrates.ai, commented, "We are thrilled to expand our partnership with Mercer. Mercer is at the forefront of helping companies move ahead in their digital transformation journeys and bringing solutions to their customers. It is an honor to partner with Mercer and offer a world-class solution to Mercer customers. By Mercer and Socrates coming together, it marries best-in-class technology and digital transformation services that will exponentially impact customers' ROI and employee experiences."

Socrates.ai helps companies embrace the complex, rapidly changing world to address up to 90 percent of employee questions in one second or less. By integrating with the existing systems, applications and content used for work every day, Socrates.ai delivers a unified, simplified employee experience. To learn more, visit https://socrates.ai/how-it-works.

About Socrates.ai

Socrates.ai builds on humanity in the best way possible via real conversations in real time to deliver consumer experiences in the workplace. Socrates leverages artificial intelligence to provide the personalized answers employees need. Employees receive actionable information and can make updates through a single conversational experience instead of navigating multiple applications.

Since launching in 2017, Socrates.ai has raised more than $26 million in funding from leading venture capital firms and been named a Gartner "Cool" Vendor, a Top 10 Virtual Assistant Solution Provider by CIO Magazine and a Hot Startup by Business Insider, and one of six companies selected for Mercer's inaugural HRTech Incubator Program. To learn more, visit Socrates.ai and follow @SocratesAI on Twitter and LinkedIn.

