
Category Archives: Ai

AI reskilling: A solution to the worker crisis – VentureBeat

Posted: May 25, 2022 at 3:51 am

We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!

By 2025, the World Economic Forum estimates that 97 million new jobs may emerge as artificial intelligence (AI) changes the nature of work and influences the new division of labor between humans, machines and algorithms. Specifically in banking, a recent McKinsey survey found that AI technologies could deliver up to $1 trillion of additional value each year. AI is continuing its steady rise and starting to have a sweeping impact on the financial services industry, but its potential is still far from fully realized.

The transformative power of AI is already impacting a range of functions in financial services including risk management, personalization, fraud detection and ESG analytics. The problem is that advances in AI are slowed down by a global shortage of workers with the skills and experience in areas such as deep learning, natural language processing and robotic process automation. So with AI technology opening new opportunities, financial services workers are eager to gain the skills they need in order to leverage AI tools and advance their careers.

Today, 87% of employees consider retraining and upskilling options in the workplace very important, and more companies now rank upskilling their workforce as a top-five business priority than did before the pandemic. Companies that don't focus on AI training will fall behind in a tight hiring market. Below are some key takeaways for business leaders looking to prioritize reskilling efforts at their organizations.

Any digital transformation requires leaders to focus their investments on two modern sources of competitive advantage: data and people. First, boosting data literacy across the organization helps line of business and domain experts (Sales, HR, Marketing, Financial Analysts, etc.) collaborate with AI and machine learning experts, which is critical to move beyond proof of concepts and experimentation.

For AI tools to be deployed at scale, those employees whose jobs involve interactions with AI systems need to understand how those systems work and what the constraints and limitations might be. Reskilling these individuals may include how to interpret the results of the AI/ML models or how to intervene with AI/ML experts when the results seem off.

A recent McKinsey study found that effective reskilling is 20% more cost-effective than a hiring and firing approach, and utilizing the right tools and technology can help companies accomplish their reskilling goals.

Importantly, before taking on any AI reskilling effort, banks and financial services organizations first need to understand what outcome they're driving toward and what skills are required. An employee self-assessment survey focused on the necessary skills can help companies build a customized curriculum and plan around the existing skills gaps.
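As a toy illustration of turning such a survey into a gap analysis, the sketch below compares self-assessed skill levels against a required-skills profile for a role. The role name, skill names, and proficiency scale are all illustrative assumptions, not anything prescribed by the article.

```python
# Toy sketch: deriving a per-employee skills gap from a self-assessment
# survey. Role, skills, and 0-3 proficiency levels are hypothetical.

REQUIRED_SKILLS = {
    "financial_analyst": {"data_literacy": 3, "ml_basics": 2, "sql": 3},
}

def skills_gap(role, self_assessment):
    """Return skills where the self-assessed level falls short of the
    role's requirement, with the size of each shortfall."""
    required = REQUIRED_SKILLS[role]
    return {
        skill: need - self_assessment.get(skill, 0)
        for skill, need in required.items()
        if self_assessment.get(skill, 0) < need
    }

# An employee who met the data-literacy bar but reported little ML
# experience and no SQL at all:
gaps = skills_gap("financial_analyst", {"data_literacy": 3, "ml_basics": 1})
```

The returned dictionary (here, shortfalls in `ml_basics` and `sql`) could then seed the customized curriculum the paragraph describes.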

The notion of a one-size-fits-all training program, or that employees need to take significant time away from the office to attend courses, is no longer relevant. Utilizing digital learning platforms like Skillsoft, Udacity, or Udemy, or integrating content into mainstream work systems, can make employees' reskilling experiences more user-friendly. Platforms like WalkMe can help employees learn complex software systems quickly, and Axonify can deliver 5- to 10-minute microlearning sessions to employees within their daily workflow. For an even more customized approach, companies may opt to build their own programs with the help of industry consultants and professors who are experts in their field.

A Deloitte survey found that 94% of employees would stay at a company if it helped them develop and learn new skills, but only 15% can access learning opportunities directly related to their jobs. AI reskilling offers an immense opportunity for both financial services companies and their employees, but the monetary and time investments involved can seem daunting. The good news is that businesses can often use existing company tools instead of purchasing all-new software.

Here are three excellent sources to help accelerate AI/ML training and implementation:

Investing in employees' skills and knowledge builds a positive company culture and reduces turnover by boosting employees' confidence and productivity, and it creates a more well-rounded workforce that makes teams more effective.

AI reskilling efforts can also help financial services organizations make better progress on their diversity, equity and inclusion goals by making learning more accessible to individuals who have faced barriers to higher education. To address this and the skills gap, banks including Bank of America, BBVA, Capital One, CIBC and JPMorgan Chase have invested in job training and reskilling efforts for their employees.

Bank of America's career tools and resources have helped more than 21,000 employees find new roles at the company. Consistent training on new technologies and certifications is an investment in shaping the workforce of the future and will help ensure that employees stay ahead of current trends and industry demands.

At an organization focused on data and AI, we always look to the data to show what we should prioritize internally, and that includes what we should focus on in our AI reskilling efforts. On measuring the success of reskilling programs and initiatives, a recent LinkedIn study found that today's assessments of training programs rely primarily on soft metrics, including completion rates, satisfaction scores and employee feedback.

This is a missed opportunity: company leaders can and should use harder metrics that measure business value, including increases in employee retention, productivity or revenue, to gain the most helpful insights from their reskilling initiatives. If a program isn't working well, companies can bring in new technologies or tools, or adjust the program and overall experience to make it successful in the future, and by doing so continue to stay ahead in the competitive war for talent.

In Jamie Dimon's latest shareholder letter to JPMorgan investors, he points out: "Our most important asset, far more important than capital, is the quality of our people." He continues, "technology always drives change, but now the waves of technological innovation come in faster and faster."

Since companies that reskill their employees are more productive, produce positive economic returns and see increased employee satisfaction, there's no better time to start than now.

Junta Nakai is RVP and global industry leader of financial services at Databricks.



AI Helps USNORTHCOM Mission With Rapid Distribution of Data – GovernmentCIO Media & Research


Commander Gen. Glen VanHerck said intelligent automation better protects the homeland by promptly disseminating raw data throughout the world.

U.S. Northern Command Commander U.S. Air Force Gen. Glen D. VanHerck speaks during a press briefing at the Pentagon July 28, 2021. Photo Credit: Defense Department/DVIDS

United States Northern Command (USNORTHCOM) is on a fast track to distributing more data to key decision-makers and commanders at a faster rate worldwide thanks to artificial intelligence (AI) and machine learning (ML).

The Pathfinder Program allows USNORTHCOM to take AI and ML capabilities and quickly spread data around the globe to gain time and space advantages while defending against cruise missiles.

According to Gen. Glen VanHerck, commander of USNORTHCOM and North American Aerospace Defense Command, USNORTHCOM previously filtered only 2% of the available data collected from radar systems across Canada and Alaska.

Now Pathfinder takes 100% of the raw data, fusing it from multiple sources and allowing it to be processed and disseminated to key decision-makers in theater.

"But that's not good enough because that's just focused on one problem set," VanHerck said during a recent Defense Writers Group event. "What I'm focusing on is a global look across all domains, infusing data, which we're doing through our global information dominance experiment, which has demonstrated that those capabilities exist."

"If you share the data to utilize machines that can count numbers, tell you when there are changes in vehicles in a command post, where the vehicles are going, and then it can attach sensors and give you an alert that you should go look at this location," he added. "What we're doing is we're not creating new data; we're taking machines that can take existing data and analyze it quicker and alert you to it so you can create deterrents if you need to."

DOD launched its AI and Data Acceleration (ADA) initiative in June 2021 to better curate and manage data. VanHerck believes the department isn't moving fast enough on programs like ADA, adding that you can't apply industrial-based processes to software-driven capabilities in today's environment. DOD needs to move faster on ADA to provide data to senior leaders at the speed of relevancy, he said.

"We're ready to field some of these capabilities, specifically when you're focused at the operational to strategic level, where what we're trying to do is give increased decision space to our nation's most senior leaders to develop defense options that in my mind lower the risk of an attack on our homeland," VanHerck said.

The general also discussed how AI and ML could help with cruise missile defense by deciphering sensor data and communicating the insights quickly to key decision-makers in theater.

"We need to take the sensors we have today and potential new sensors and share that data and information, and utilize artificial intelligence and machine learning to make that data and information available sooner than we have in the past and get it to decision-makers in a timely manner so they can create deterrence," he said.

This would create deterrent options before a bomber takes off or a submarine launches.

"Before those assets ever launch, you can use the information environment to pick up the phone and publicly message and move troops into a deterred position so that they won't have cause to even launch those platforms in the first place," VanHerck said. "It would be fusing data and sensors to cue you to the potential launch of a cruise missile that may launch a thousand miles off the coast of a certain area. Then a space sensor sees that information and cues additional sensors to provide domain awareness to decision-makers."

Cyber threats also continue to be a top priority for USNORTHCOM after the Russian invasion of Ukraine. VanHerck said the U.S. faces cyber threats every day as state and non-state actors plant malware in critical infrastructure systems.

"We're postured to get after that threat, and I expect that it won't go away, but will only grow as we move forward. It's an educational challenge to make sure that cyber hygiene is as good as it can be and to understand the threat and vulnerabilities that we actually have," VanHerck said.

Since the Russian invasion, USNORTHCOM has collaborated with the Cybersecurity and Infrastructure Security Agency (CISA) to analyze suspicious cyber activity.

"Attribution is a challenge in the cyber domain, making certain that we're ready before an attack and that we have proper attribution so we don't inadvertently escalate and create tension and friction we don't want to see," VanHerck said.


The walls are closing in on Clearview AI – MIT Technology Review


Europe is working on an AI law that could ban the use of real-time remote biometric identification systems, such as facial recognition, in public places. The current drafting of the text restricts the use of facial recognition by law enforcement unless it is to fight serious crimes, such as terrorism or kidnappings.

There is a possibility that the EU will go further. The EU's influential data protection watchdogs have called for the bill to ban not only remote biometric identification in public, but also police use of web-scraped databases such as Clearview AI's.

"Clearview AI is fast becoming so toxic that no credible law enforcement agency or public authority or other company will want to work with them," says Ella Jakubowska, who works on facial recognition and biometrics at European Digital Rights, a digital rights group.

Hoan Ton-That, Clearview AI's CEO, said he is disappointed that the ICO has "misinterpreted my technology and intentions."

"We collect only public data from the open internet and comply with all standards of privacy and law," he said in a statement sent to MIT Technology Review.

"I would welcome the opportunity to engage in conversation with leaders and lawmakers so the true value of this technology, which has proven so essential to law enforcement, can continue to make communities safe," he added.


AI and IoT device connects with concierge platform for RPM – Healthcare IT News


Healthcare workflow technology company Braidio has partnered with health IT vendor BlueSemi with the aim of creating a preventive home-health and telehealth ecosystem.

BlueSemi's primary product is a handheld device called EYVA, which measures six vitals, including blood glucose, without requiring a pinprick for blood. By integrating the device with Braidio's My Health Concierge platform, the companies aim to encourage positive behavioral change.

This is the next step from Braidio's My Health Concierge, which is partnered with other companies, including Arkos Health, AT&T and Etisalat. My Health Concierge contains embedded healthcare workflows, including triggered events and write-back capabilities to major EHRs.

With BlueSemi tracking critical vitals like blood glucose, the two companies can now ensure that healthcare providers have the latest data on the people they care for. Braidio's remote patient monitoring helps medical professionals get the latest patient data and insights. It is designed to help eliminate barriers that previously prevented easy access to care.

"Healthcare has been slow to adopt the connectivity that is so prevalent and innovative in other industries," said Iain Scholnick, CEO of Braidio. "The flow of patient information needs to be completely secure and HIPAA-compliant in order to ensure that doctors can give the best recommendations and provide the best care.

"This level of transparency will deliver healthcare to the future, breaking down the barriers between patients and the highest level of care possible," he contended.

EYVA leverages sensor fusion, AI and the Internet of Things. The device's haptic sensors respond to the user's touch. It can mirror their breathing patterns to eventually measure blood glucose, ECG, heart rate, blood pressure, SpO2 and HbA1c in 60 seconds. No blood is required, because users simply touch the device the way they touch their phone screens.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.


5 ways to reduce compliance costs with AI and automation – CIO


While regulations are created to protect consumers and markets, they're often complex, making them costly and challenging to adhere to.

Highly regulated industries like financial services and life sciences absorb the most significant compliance costs. Deloitte estimates that compliance costs for banks have increased by 60% since the financial crisis of 2008, and the Risk Management Association found that 50% of financial institutions spend 6% to 10% of their revenues on compliance.

Artificial intelligence (AI) and intelligent automation processes, such as RPA (robotic process automation) and NLP (natural language processing), can help drive efficiencies up and costs down in meeting regulatory compliance. Here's how:

In a single year, a financial institution may have to process up to 300 million pages of new regulations, disseminated from multiple state, federal, or municipal authorities across a variety of channels. The manual work of collecting, sorting, and understanding these changes and mapping them to the appropriate business area is extremely time consuming.

While RPA can be programmed to collect regulation changes, the regulations also need to be understood and applied to business processes. This is where sophisticated OCR (optical character recognition), NLP, and AI models come in.

All these capabilities can save an analyst a significant amount of time, thereby reducing costs.
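As a rough illustration of the routing step described above (mapping regulation text to the appropriate business area), the sketch below scores incoming text against a keyword map. A production system would use trained NLP models; the keyword lists and area names here are illustrative assumptions.

```python
# Minimal sketch: route a regulation change to a business area by
# keyword hits. Areas and keywords are hypothetical stand-ins for a
# trained classifier.

AREA_KEYWORDS = {
    "aml_compliance": ["money laundering", "suspicious activity"],
    "consumer_lending": ["mortgage", "credit score", "loan disclosure"],
    "data_privacy": ["personal data", "consent", "data breach"],
}

def route_regulation(text):
    """Score each business area by keyword occurrences and return the
    matching areas, most relevant first."""
    lowered = text.lower()
    scores = {
        area: sum(lowered.count(kw) for kw in kws)
        for area, kws in AREA_KEYWORDS.items()
    }
    return sorted((a for a, s in scores.items() if s > 0),
                  key=lambda a: -scores[a])

areas = route_regulation(
    "New rules require banks to report suspicious activity that may "
    "indicate money laundering within 30 days."
)
```

In the example, the text scores hits only for the hypothetical `aml_compliance` area, which is where an analyst would be pointed first.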

One of the biggest time drains in regulatory reporting is figuring out what needs to be reported, when, and how. This requires analysts to not only review the regulations, but interpret them, write text on how the regulations apply to their business, and translate it into code in order to retrieve the relevant data.

Alternatively, AI can quickly parse unstructured regulatory data to define reporting requirements, interpret it based on past rules and situations, and produce code to trigger an automated process to access multiple company resources to build the reports. This approach to regulatory intelligence is gaining traction to support Financial Services reporting as well as Life Sciences companies where submissions are required for new product approvals.

The process of selling in highly regulated markets requires marketing material to be compliant. Yet, the process of approving the continuous flow of new marketing materials can be burdensome.

Pharma's trend toward personalized marketing content is driving up compliance costs at an exponential rate, as compliance officers need to ensure that each piece of content is consistent with drug labels and regulations. Because adding manpower to scale these strategies comes with a significant cost increase, AI is now used to scan content and determine compliance more quickly and efficiently. In some cases, AI bots are even being used to edit and write regulation-compliant marketing copy.

Traditional rules-based transaction monitoring systems in Financial Services are prone to producing excessive false positives. In some cases, false positives have reached 90%, with each alert requiring review by a compliance officer.

By integrating AI into legacy transaction monitoring systems, erroneous compliance alerts can be minimized and review costs reduced. Issues that are deemed legitimate high-risk can be elevated to a compliance officer while those that are not can be automatically resolved. With compliance officers only working on high-risk flagged transactions, these resources can be redeployed where they can add more value. As new trends are identified, AI can also be used to update traditional rules engines and monitoring systems.
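The triage logic described above can be sketched in a few lines: a risk score decides whether an alert goes to a compliance officer or is auto-resolved. The scoring rules, threshold, and alert fields below are illustrative assumptions standing in for a real ML model.

```python
# Sketch of alert triage: escalate high-risk transaction alerts,
# auto-resolve the rest. Score logic and threshold are hypothetical.

RISK_THRESHOLD = 0.8

def model_score(alert):
    # Stand-in for an ML model: large cross-border transfers score high.
    score = 0.0
    if alert["amount"] > 10_000:
        score += 0.5
    if alert["cross_border"]:
        score += 0.4
    return min(score, 1.0)

def triage(alerts):
    """Split alerts into (escalate, auto_resolve) by model score."""
    escalate, auto_resolve = [], []
    for alert in alerts:
        (escalate if model_score(alert) >= RISK_THRESHOLD
         else auto_resolve).append(alert)
    return escalate, auto_resolve

escalated, resolved = triage([
    {"id": 1, "amount": 50_000, "cross_border": True},
    {"id": 2, "amount": 120, "cross_border": False},
])
```

Only the large cross-border alert reaches a compliance officer; the small domestic one is resolved automatically, which is the cost saving the paragraph describes.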

To limit criminal activity and money laundering, banks need to perform due diligence to ensure new customers are law-abiding and remain that way throughout the relationship. Depending on the risk level of certain individuals, background checks can range from two to 24 hours. Much of this time is spent collecting documents, checking databases, and reviewing media outlets. AI and automation can streamline this process. Bots can be used to crawl the web for mention of a client and leverage sentiment analysis to flag negative content. NLP technologies can scan court documents for signs of illegal activity and media mentions most relevant for analysis.
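A toy version of the adverse-media check above: scan snippets mentioning a client and flag those containing negative terms. The keyword lexicon is an illustrative stand-in for real sentiment analysis, and the client name and snippets are invented.

```python
# Hypothetical adverse-media flagging: a keyword lexicon stands in for
# a sentiment model.

NEGATIVE_TERMS = {"fraud", "laundering", "indicted", "sanctioned", "fined"}

def flag_adverse_media(client, snippets):
    """Return snippets that mention the client and contain a negative term."""
    flagged = []
    for snippet in snippets:
        words = set(snippet.lower().replace(",", " ").split())
        if client.lower() in snippet.lower() and words & NEGATIVE_TERMS:
            flagged.append(snippet)
    return flagged

hits = flag_adverse_media("Acme Corp", [
    "Acme Corp opens new headquarters downtown.",
    "Regulator says Acme Corp fined over reporting lapses.",
])
```

A real pipeline would combine this kind of screen with court-document NLP and database checks before a human review, as the paragraph notes.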


Specialization is key in an exploding AI chatbot market – VentureBeat



Amid an exploding market for AI chatbots, companies that target their virtual assistants to specialized enterprise sectors may get a firmer foothold than general chatbots, according to Gartner analysts. That's not news to Zowie, which claims to be the only AI-powered chatbot technology built specifically for ecommerce companies that use customer support to drive sales, no small feat in an industry in which customer service teams answer tens of thousands of repetitive questions daily. Today, the company announced it has secured $14 million in series A funding led by Tiger Global Management, bringing its total to $20 million.

Chatbots, sometimes referred to as virtual assistants, are built on conversational AI platforms (CAIP) that combine speech-based technology, natural language processing and machine learning tools to develop applications for use across many verticals.

"The CAIP market is extremely diverse, both in vendor strategies and in the enterprise needs that vendors target," says Magnus Revang, a Gartner research vice president covering AI, chatbots and virtual assistants.

The CAIP market comprises about 2,000 vendors worldwide. As a result, companies looking to implement AI chatbot technology often struggle with vendor selection, according to a 2020 Gartner report coauthored by Revang, "Making Sense of the Chatbot and Conversational AI Platform Market."

The report points out that "in a market where no one CAIP vendor is vastly ahead of the pack, companies will need to select the provider best fit for their current short- and midterm needs."

That is Zowie's secret sauce: specialization. Specifically, the company focuses on the needs of ecommerce providers, Maya Schaefer, Zowie's chief executive officer and cofounder, told VentureBeat. The platform enables brands to improve their customer relationships and start generating revenue from customer service.

Plenty of other CAIPs provide services for companies that sell products, but their solutions are also targeted to other verticals, such as banking, telecom and insurance. Examples include Boost AI, Solvvy and Ada. Other chatbots, Ada among them, can also be geared for use in the financial technology and software-as-a-service industries to answer questions, for instance, about a non-functioning system.

Zowie is built on the company's automation technology, Zowie X1, which analyzes meaning and sentiment to find repetitive questions and trends.

Zowie claims to automate 70% of the inquiries ecommerce brands typically receive, such as "where's my package?" or "how can I change my shipping address?" The solution also includes a suite of tools that allows agents to provide personalized care and product recommendations, Schaefer says.

For example, if a customer says, "I would like help choosing new shoes," the system hands the request to a live product expert.

Before implementation, the platform analyzes a customers historical chats, frequently asked questions and knowledge base to automatically learn which questions to automate. It uses AI capabilities to analyze new questions and conversations, delivering more automation.
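The core of that pre-implementation analysis, finding which questions repeat often enough to automate, can be sketched simply: normalize each historical question and count near-duplicates. Zowie's actual pipeline is proprietary; the normalization rules below are illustrative assumptions.

```python
# Toy sketch of mining historical chats for automation candidates:
# normalize questions, then surface the ones asked repeatedly.

from collections import Counter
import re

def normalize(question):
    # Lowercase, strip punctuation, collapse numbers (order IDs, etc.)
    # into a single token so variants count as the same question.
    q = re.sub(r"[^\w\s]", "", question.lower())
    return re.sub(r"\b\d+\b", "<num>", q).strip()

def automation_candidates(questions, min_count=2):
    """Return normalized questions seen at least min_count times,
    most frequent first."""
    counts = Counter(normalize(q) for q in questions)
    return [q for q, c in counts.most_common() if c >= min_count]

candidates = automation_candidates([
    "Where's my package?",
    "where is my package",
    "Where's my package??",
    "Can I change my shipping address?",
])
```

The repeated package question surfaces as the first candidate; a real system would also cluster paraphrases ("where is my package" vs. "where's my package"), which this literal-match sketch deliberately skips.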

By analyzing patterns, the AI chatbot can tell when something new or unusual is happening and alerts the customer service team, Schaefer said.

Live agents may also have difficulty upselling customers, so Zowie gives agents unique insights about each customer, such as what product they are looking at, whether they have ordered before and, if so, what they ordered. It also has a direct product catalog integration that enables agents to send product suggestions in the form of a carousel, she added.

In 2019, Schaefer and cofounder Matt Ciolek developed the all-in-one, modular CAIP designed for building highly customizable chatbots. Schaefer estimates that within six weeks of implementation, Zowie answers about 92% of customer inquiries such as "where's my package?" and "what are the store hours?"

"Managers and agents don't have to think about how to improve customer experience; our system will detect new trends and propose how to optimize the system automatically," she said. "In a way, we automate the automation."


This New AI Can Detect The Calls of Animals Swimming in an Ocean of Noise – ScienceAlert


The ocean is swimming in sound, and a new artificial intelligence tool could help scientists sift through all that noise to track and study marine mammals.

The tool is called DeepSqueak, not because it measures dolphin calls in the ocean underworld, but because it is based on a deep learning algorithm that was first used to categorize the different ultrasonic squeals of mice.

Now, researchers are applying the technology to vast datasets of marine bioacoustics.

Given that much of the ocean is out of our physical reach, underwater sound could help us understand where marine mammals swim, their density and abundance, and how they interact with one another.

Already, recordings of whale songs have helped identify an unknown population of blue whales in the Indian Ocean and a never-before-heard species of beaked whale.

But listening to recordings of the ocean, and trying to pick out animal noise from hours of waves, wind, and boat engines is slow and painstaking work.

That's where DeepSqueak comes in. The technology was recently presented at the 182nd Meeting of the Acoustical Society of America and is designed to classify underwater acoustic signals faster and more accurately than any other method to date.

DeepSqueak combs through sound data in the ocean and creates what look like heat maps, based on where certain acoustic signals are heard and at what frequency.

Those signals are then sourced to a specific animal.
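The heat-map idea, energy per frequency band over time, can be illustrated with a basic spectrogram-style detector: slice the audio into windows, take an FFT per window, and flag windows with strong energy in a target band. The band, window size, and threshold below are illustrative assumptions, not DeepSqueak's actual parameters (which come from a trained neural network).

```python
# Rough sketch of band-energy call detection over a spectrogram grid.
# All parameters are hypothetical.

import numpy as np

def band_detections(signal, sr, lo_hz, hi_hz, win=256, threshold=10.0):
    """Return indices of windows with strong energy in [lo_hz, hi_hz]."""
    freqs = np.fft.rfftfreq(win, d=1.0 / sr)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    hits = []
    for i in range(0, len(signal) - win + 1, win):
        spectrum = np.abs(np.fft.rfft(signal[i:i + win]))
        if spectrum[band].sum() > threshold:
            hits.append(i // win)
    return hits

# One second of synthetic "ocean": background noise plus a 1 kHz tone
# standing in for an animal call between 0.25 s and 0.5 s.
sr = 8000
rng = np.random.default_rng(0)
t = np.arange(sr) / sr
signal = 0.01 * rng.standard_normal(sr)
signal[2000:4000] += np.sin(2 * np.pi * 1000 * t[2000:4000])

detected = band_detections(signal, sr, 900, 1100)
```

Plotting the per-window band energies as a grid of time versus frequency would give exactly the kind of heat map described above; a learned model then replaces the fixed threshold.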

"Although we used DeepSqueak to detect underwater sounds, this user-friendly, open source tool would be useful for a variety of terrestrial species," says Elizabeth Ferguson, the CEO and founder of Ocean Science Analytics, who presented the research.

"The capabilities of call detection extend to frequencies below the ultrasonic sounds it was originally intended for. Due to this and the capability of DeepSqueak to detect variable call types, development of neural networks is possible for many species of interest."

Marine acoustic noise has never been easier to collect, but as hours of ocean soundscapes pile up in databases around the world, scientists need to figure out how to use that information most effectively.

DeepSqueak could be a possible alternative to the human ear, allowing researchers to classify sounds and study them right around the world with incredible efficiency.

The fully automated tool has consistently been able to detect the calls of specific marine mammals, like humpback whales, delphinids and fin whales, during tests.

It can also pick out these animals' calls amongst background noise, which is important given that anthropogenic sound is turning up the volume in the ocean.

DeepSqueak was first introduced in 2019 as a way to analyze the rich repertoire of ultrasonic vocalizations employed by rats and mice.

Sifting through a series of squeaky recordings, the tool was able to identify a wide range of syllabic sounds, and these short mouse calls appear to be arranged in different ways depending on the context in which they are used.

The results could help scientists study how certain syllables and syntax may communicate unique information in the mouse world. For instance, the sounds a mouse makes in some situations could be used to convey fear, anxiety, or depression.

By reliably linking contextual information to certain vocal signals, DeepSqueak could allow scientists to better study the nuances between animal vocalizations and behavior even in remote ocean underworlds where some of the planet's most elusive animals swim.



Copilot, GitHub's AI-powered coding tool, will be free for students – TechCrunch


Last June, Microsoft-owned GitHub and OpenAI launched Copilot, a service that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Available as a downloadable extension, Copilot is powered by an AI model called Codex that's trained on billions of lines of public code to suggest additional lines of code and functions given the context of existing code. Copilot can also surface an approach or solution in response to a description of what a developer wants to accomplish (e.g., "Say hello world"), drawing on its knowledge base and current context.

While Copilot was previously available in technical preview, it'll become generally available starting sometime this summer, Microsoft announced at Build 2022. Copilot will also be free for students as well as verified open source contributors; on the latter point, GitHub said it'll share more at a later date.

The Copilot experience won't change much with general availability. As before, developers will be able to cycle through suggestions for Python, JavaScript, TypeScript, Ruby, Go and dozens of other programming languages and accept, reject or manually edit them. Copilot will adapt to the edits developers make, matching particular coding styles to autofill boilerplate or repetitive code patterns and recommending unit tests that match implementation code.

Copilot extensions will be available for Neovim and JetBrains IDEs in addition to Visual Studio Code, or in the cloud on GitHub Codespaces.

One new feature coinciding with the general release of Copilot is Copilot Explain, which translates code into natural-language descriptions. Described as a research project, it aims to help novice developers or those working with an unfamiliar codebase.

"Earlier this year we launched Copilot Labs, a separate Copilot extension developed as a proving ground for experimental applications of machine learning that improve the developer experience," Ryan J. Salva, VP of product at GitHub, told TechCrunch in an email interview. "As a part of Copilot Labs, we launched 'explain this code' and 'translate this code.' This work fits into a category of experimental capabilities that we are testing out that give you a peek into the possibilities and lets us explore use cases. Perhaps with 'explain this code,' a developer is weighing into an unfamiliar codebase and wants to quickly understand what's happening. This feature lets you highlight a block of code and ask Copilot to explain it in plain language. Again, Copilot Labs is intended to be experimental in nature, so things might break. Labs experiments may or may not progress into permanent features of Copilot."

Copilot's new feature, Copilot Explain, translates code into natural language explanations. Image Credits: Copilot

Owing to the complicated nature of AI models, Copilot remains an imperfect system. GitHub warns that it can produce insecure coding patterns, bugs and references to outdated APIs, or idioms reflecting the less-than-perfect code in its training data. The code Copilot suggests might not always compile, run or even make sense, because it doesn't actually test the suggestions. Moreover, in rare instances, Copilot suggestions can include personal data like names and emails verbatim from its training set and, worse still, "biased, discriminatory, abusive, or offensive text."

GitHub said that it's implemented filters to block emails when shown in standard formats, as well as offensive words, and that it's in the process of building a filter to help detect and suppress code that's repeated from public repositories. "While we are working hard to make Copilot better, code suggested by Copilot should be carefully tested, reviewed, and vetted, like any other code," the disclaimer on the Copilot website reads.
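GitHub hasn't published its filter implementation; as a rough, hypothetical illustration, a pattern-based blocker for emails "shown in standard formats" might look like the sketch below (the function name and logic are invented for this example):

```python
import re

# Hypothetical post-processing filter, sketched from GitHub's description:
# drop any suggestion that contains an email address in a standard format.
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def filter_suggestion(suggestion):
    """Return the suggestion, or None if it leaks an email verbatim."""
    if EMAIL_PATTERN.search(suggestion):
        return None
    return suggestion

print(filter_suggestion('send_to = "alice@example.com"'))  # None: blocked
print(filter_suggestion("total = price * qty"))            # passes through
```

A production filter would likely operate on the model's output tokens rather than final strings, but the principle is the same.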

While Copilot has presumably improved since its launch in technical preview last year, it's unclear by how much. The capabilities of the underpinning model, Codex, a descendant of OpenAI's GPT-3, have since been matched (or even exceeded) by systems like DeepMind's AlphaCode and the open source PolyCoder.

"We are seeing progress in Copilot generating better code ... We're using our experience with [other] tools to improve the quality of Copilot suggestions, e.g., by giving extra weight to training data scanned by CodeQL, or analyzing suggestions at runtime," Salva said, CodeQL being GitHub's code analysis engine for automating security checks. "We're committed to helping developers be more productive while also improving code quality and security. In the long term, we believe Copilot will write code that's more secure than the average programmer."

The lack of transparency doesn't appear to have dampened enthusiasm for Copilot, which, Microsoft said today, now suggests about 35% of the code written in languages like Java and Python by developers in the technical preview. Tens of thousands have regularly used the tool throughout the preview, the company claims.

See the original post here:

Copilot, GitHub's AI-powered coding tool, will be free for students - TechCrunch


The case for placing AI at the heart of digitally robust financial regulation – Brookings Institution

Posted: at 3:51 am

"Data is the new oil." Originally coined in 2006 by the British mathematician Clive Humby, this phrase is arguably more apt today than it was then, as smartphones rival automobiles for relevance and the technology giants know more about us than we would like to admit.

Just as it does for the financial services industry, the hyper-digitization of the economy presents both opportunity and potential peril for financial regulators. On the upside, reams of information are newly within their reach, filled with signals about financial system risks that regulators spend their days trying to understand. The explosion of data sheds light on global money movement, economic trends, customer onboarding decisions, quality of loan underwriting, noncompliance with regulations, financial institutions' efforts to reach the underserved, and much more. Importantly, it also contains the answers to regulators' questions about the risks of new technology itself. Digitization of finance generates novel kinds of hazards and accelerates their development. Problems can flare up between scheduled regulatory examinations and can accumulate imperceptibly beneath the surface of information reflected in traditional reports. Thanks to digitization, regulators today have a chance to gather and analyze much more data and to see much of it in something close to real time.

The potential for peril arises from the concern that the regulators' current technology framework lacks the capacity to synthesize the data. The irony is that this flood of information is too much for them to handle. Without digital improvements, the data fuel that financial regulators need to supervise the system will merely make them overheat.

Enter artificial intelligence.

In 2019, then-Bank of England Gov. Mark Carney argued that financial regulators will have to adopt AI techniques in order to keep up with the rising volumes of data flowing into their systems. To dramatize the point, he said the bank receives 65 billion pieces of data annually from companies it oversees and that reviewing it all would be like each supervisor reading the complete works of Shakespeare twice a week, every week of the year.

That was three years ago. The number is almost certainly higher today. Furthermore, the numbers he cited only covered information reported by regulated firms. They omitted the massive volumes of external Big Data generated from other sources like public records, news media, and social media that regulators should also be mining for insight about risks and other trends.

AI was developed over 70 years ago. For decades, enthusiasts predicted that it would change our lives profoundly, but it took a while before AI had much impact on everyday life.1 AI occasionally made news by performing clever feats, like IBM's Watson besting human champions at Jeopardy in 2011, or AI systems beating masters of complex games like chess (in 1996) and Go (in 2017). However, it was only recently that such machines showed signs of being able to solve real-world problems. Why is that?

A key answer is that, until only recently, there wasn't enough data in digitized form (formatted as computer-readable code) to justify using AI.2 Today, there is so much data that not only can we use AI, but in many fields like financial regulation we have to use AI simply to keep up.

As discussed further below, financial regulators around the world are in the early stages of exploring how AI and its sub-branches of Machine Learning (ML), Natural Language Processing (NLP), and neural networks, can enhance their work. They are increasingly weighing the adoption of supervisory technology (or suptech) to monitor companies more efficiently than they can with analog tools. This shift is being mirrored in the financial industry by a move to improve compliance systems with similar regulatory technology (regtech) techniques. Both processes are running on a dual track, with one goal being to convert data into a digitized form and the other to analyze it algorithmically. Meeting either of these objectives without the other has little value. Together, they will transform both financial regulation and compliance. They offer the promise that regulation, like everything else that gets digitized, can become better, cheaper, and faster, all at once.

Financial regulators around the world have generally been more active in regulating the industry's use of AI than adopting it for their own benefit. Opportunities abound, however, for AI-powered regulatory and law enforcement tactics to combat real-world problems in the financial system. In a later section, this paper will look at the primary emerging use cases. Before doing so, it is worth taking a look at some areas of poor regulatory performance, both past and present, and ask whether AI could have done better.

One example is the $800 billion Paycheck Protection Program that Congress established in 2020 to provide government-backed loans for small businesses reeling from the pandemic. More than 15% of PPP loans, representing $76 billion, contained evidence of fraud, according to a study released last year. Many cases involved loan applicants using fake identities. Imagine if the lenders submitting loan guarantee applications or the Small Business Administration systems that were reviewing them had had mature AI-based systems that could have flagged suspicious behavior. They could have spotted false statements and prevented fraudulent loans, thereby protecting taxpayer money and ensuring that their precious funds helped small businesses in need instead of financing thieves.

Two examples can be found from the war in Ukraine. The Russian invasion has sparked a whole new array of sanctions against Russian oligarchs who hide riches in shell companies and are scrambling to move their money undetected. Financial institutions are required to screen accounts and transactions to identify transactions by sanctioned entities. What if they and law enforcement agencies like the Financial Crimes Enforcement Network (FinCEN) had AI-powered analytics to pull and pool data from across the spectrum of global transactions and find the patterns revealing activity by sanctioned parties? Unfortunately, most financial institutions and government agencies do not have these tools in hand today.
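As a toy illustration of one piece of sanctions screening, the sketch below fuzzy-matches counterparty names against a made-up sanctions list, catching spelling variants that exact string matching would miss. The names and the similarity threshold are invented; production systems use far richer entity-resolution and transliteration techniques.

```python
from difflib import SequenceMatcher

# Hypothetical sanctions list for illustration only.
SANCTIONS_LIST = ["Ivan Petrov", "Olga Sidorova"]

def similarity(a, b):
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(counterparty, threshold=0.85):
    """Return sanctioned names whose similarity exceeds the threshold."""
    return [name for name in SANCTIONS_LIST
            if similarity(counterparty, name) >= threshold]

print(screen("Ivan Petrov"))    # exact hit
print(screen("Ivan Petrovv"))   # misspelled variant is still caught
print(screen("John Smith"))     # no match
```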

The second example comes from the rapid flight of millions of refugees, which has attracted human traffickers to the country's borders seeking to ensnare desperate women and children and sell them into slavery for work and sex. Banks are required by law to maintain anti-money laundering (AML) systems to detect and report money movement that may indicate human trafficking and other crimes, but these systems are mostly analog and notoriously ineffective. The United Nations Office on Drugs and Crime estimates that less than 1% of financial crime is caught. AI-powered compliance systems would have a far better chance of flagging the criminal rings targeting Ukraine. If such systems had been in effect in recent years, moreover, the human trafficking trade might not be flourishing. As it stands today, an estimated 40 million people are being held captive in modern human slavery, and one in four of them is a child.

In another thought experiment, what if bank regulators in 2007 had been able to see the full extent of interrelationships between subprime mortgage lenders and Wall Street firms like Bear Stearns, Lehman Brothers, and AIG? If regulators had been armed with real-time digital data and AI analytics, they would have been monitoring risk contagion in real time. They might have been able to avert the financial crisis and with it, the Great Recession.

Finally, what about fair lending? In 1968, the United States outlawed discrimination on the basis of race, religion and other factors in mortgage lending through the passage of the Fair Housing Act.3 With the later passage of the Equal Credit Opportunity Act and Housing and Community Development Act, both in 1974, Congress added sex discrimination to that list and expanded fair-lending enforcement to all types of credit, not just mortgages.4 That was nearly 50 years ago.

These laws have gone a long way toward combating straightforward, overt discrimination but have been much less effective in rooting out other forms of bias. Lending decisions still produce disparate impacts on different groups of borrowers, usually in ways that disproportionately harm protected classes like people of color. Some of this arises from the fact that high volume credit decisioning must rely on efficient measures of creditworthiness, like credit scores, that in turn rely on narrow sources of data.5 What if, 40 years ago, both regulators and industry had been able to gather much more risk data and analyze it with AI? How many more people would have been deemed creditworthy instead of having their loan denied? Over four decades, could AI tools have changed the trajectory of racial opportunity in the United States, which currently includes a $10 trillion racial wealth gap and the African-American homeownership rate lagging that of whites by 30 percentage points?

In his 2018 book titled Unscaled, venture capitalist Hemant Taneja argued that exploding amounts of data and AI will continue to produce unprecedented acceleration of our digital reality. "In another ten years anything that AI doesn't power will seem lifeless and outmoded. It will be like an icebox after electric-powered refrigerators were invented," he wrote.

Taneja's estimated time horizon is now only six years away. In the financial sector, this sets up a daunting challenge for regulators to design and construct sufficiently powerful suptech before the industry's changing technology could overwhelm their supervisory capacity. Fortunately, regulators in the U.S. and around the world are taking steps to narrow the gap.

Arguably the global leader in regulatory innovation is the United Kingdom's Financial Conduct Authority (FCA). In 2015, the FCA established the Project Innovate initiative, which included the creation of a regulatory sandbox for private sector firms to test new products for their regulatory impact. A year later, the FCA launched a regtech unit that developed what the agency called "techsprints": open competitions resembling tech hackathons in which regulatory, industry, and issue experts work side-by-side with software engineers and designers to develop and present tech prototypes for solving a particular regulatory problem. The innovation program has since been expanded into a major division within the FCA.6

The FCA has been able to translate this relatively early focus on digital innovation into real-world problem solving. In 2020, a senior agency official gave a speech about how the FCA uses machine learning and natural language processing to monitor company behaviors and spot outlier firms as part of a holistic approach to data analysis. Similar strides have been made in other countries, including Singapore and Australia.

U.S. regulators for the most part have made slower progress incorporating AI technologies in their monitoring of financial firms. All of the federal financial regulatory bodies have innovation programs in some form. Most of them, however, have focused more on industry innovation than their own. The U.S. banking agencies (the Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, Federal Reserve Board, and Office of the Comptroller of the Currency) all have innovation initiatives that are largely outward-facing, aimed at understanding new bank technologies and offering a point of contact on novel regulatory questions. They all also expanded their technology activities during the COVID-19 pandemic, spurred by the sudden digital shifts underway in the industry and their own need to expand offsite monitoring. Several agencies also have suptech projects underway. These, however, generally have limited reach and do not address the need for agencies to revisit their foundational, analog-era information architecture.

This is beginning to change. The Federal Reserve in 2021 created the new position of Chief Innovation Officer and hired Sunayna Tuteja from the private sector, charging her to undertake a sweeping modernization of the Fed's data infrastructure. The FDIC, too, has closely examined its own data structures, and the OCC has worked on consolidating its examination platforms. These are productive steps, but they still lag the advanced thinking underway in other parts of the world. U.S. regulators have yet to narrow the gap between the accelerating innovation in the private sector and their own monitoring systems.

Other U.S. regulatory agencies have embraced AI technologies more quickly. In 2017, Scott Bauguess, the former deputy chief economist at the Securities and Exchange Commission (SEC), described his agency's use of AI to monitor securities markets. Soon after the financial crisis, he said, the SEC began applying simple text analytic methods to determine whether the agency could have predicted risks stemming from credit default swaps before the crisis. SEC staff also applies machine-learning algorithms to identify reporting outliers in regulatory filings.

Similarly, the Financial Industry Regulatory Authority (FINRA), the self-regulatory body overseeing broker-dealers in the U.S., uses robust AI to detect possible misconduct.7 The Commodity Futures Trading Commission (CFTC), meanwhile, has been a leader through its LabCFTC program, which addresses both fintech and regtech solutions. Former CFTC Chairman Christopher Giancarlo has said that the top priority of every regulatory body should be to digitize the rulebook.8 Lastly, the Treasury Department's Financial Crimes Enforcement Network (FinCEN) launched an innovation program in 2019 to explore regtech methods for improving money-laundering detection.9 The agency is now in the process of implementing sweeping technology mandates it received under the Anti-Money Laundering Act of 2020, a great opportunity to implement AI to better detect some of the financial crimes discussed above.

If government agencies supplanted their analog systems with a digitally native design, it would optimize the analysis of data that is now being under-utilized. The needles could be found in the haystack, fraudsters and money launderers would have a harder time hiding their activity, and regulators would more completely fulfill their mission of maintaining a safer and fairer financial system.

Below are specific use cases for incorporating AI in the regulatory process:

Arguably the most advanced regtech use case globally is anti-money laundering (AML). AML compliance costs the industry upwards of $50 billion per year in the U.S., as most banks rely on rules-based transaction monitoring systems.10 These methods help them determine which activity to report to FinCEN as suspicious but currently produce a false-positive rate of over 90%. This suggests banks, regulators, and law enforcement authorities are spending time and money chasing down potential leads but not really curbing illicit financial crimes. The AML data that law enforcement agencies currently receive contains too much unimportant information and is not stored in formats to help identify patterns of crime.11
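A minimal sketch of the rules-based monitoring described above, using a single hypothetical "structuring" rule (repeated cash deposits just under the $10,000 reporting threshold). The bluntness of rules like this, applied in bulk, is one source of the high false-positive rate; the thresholds and data below are invented.

```python
def flag_structuring(transactions, low=9000, high=10000, min_hits=3):
    """Flag accounts with repeated deposits just under the reporting
    threshold -- a classic, and famously noisy, AML rule."""
    hits = {}
    for account, amount in transactions:
        if low <= amount < high:
            hits[account] = hits.get(account, 0) + 1
    return [acct for acct, n in hits.items() if n >= min_hits]

txns = [("A", 9500), ("A", 9700), ("A", 9800),   # pattern the rule targets
        ("B", 9600), ("B", 120), ("C", 15000)]
print(flag_structuring(txns))  # ['A']
```

A machine-learning system would instead score each account across many such signals, which is where the potential for cutting false positives comes from.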


In addition to the challenges associated with locating financial crimes among the massively complex web of global transactions, banks also must perform identity verification checks on new customers and submit beneficial owner data to FinCEN to prevent launderers from hiding behind fake shell companies. The war in Ukraine and toughening of sanctions on Russian oligarchs has highlighted the need for better screening mechanisms to restrict the financial activity of individuals that appear on sanctions lists. While a growing industry of regtech firms are attempting to help financial institutions more efficiently comply with Know-Your-Customer (KYC) rules, FinCEN is in the midst of implementing legislative reforms requiring corporations to submit data to a new beneficial owner database.

In 2018 and 2019, the FCA held two international tech sprints aimed at addressing AML challenges. The first sprint dealt with enabling regulators and law enforcement to share threat information more safely and effectively. The second focused on Privacy-Enhancing Technologies, or PETs, of various kinds. For example, homomorphic encryption is a technique that shows promise for enabling data shared through AML processes to be encrypted throughout the analytical process, so that the underlying information is concealed from other parties and privacy is preserved. Another PET technique known as zero-knowledge proof enables one party to ask another essentially a yes-or-no question without the need to share the underlying details that spurred the inquiry. For example, one bank could ask another if a certain person is a customer, or if that person engaged in a certain transaction. Techniques like this can be used to enable machine-learning analysis of laundering patterns without compromising privacy or potentially undermining the secrecy of an ongoing investigation.
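To make the homomorphic-encryption idea concrete, here is a toy Paillier cryptosystem with deliberately tiny, insecure parameters. It demonstrates only the additive property that makes such schemes useful for shared AML analysis (multiplying ciphertexts adds the hidden plaintexts); it is not a production PET.

```python
import math
import random

# Toy Paillier setup: real deployments use primes hundreds of digits long.
p, q = 17, 19
n = p * q                      # public modulus (323)
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # modular inverse of lambda mod n

def encrypt(m):
    """Encrypt an integer 0 <= m < n with fresh randomness r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

a, b = encrypt(12), encrypt(30)
# Multiplying ciphertexts adds the underlying plaintexts:
print(decrypt((a * b) % n2))  # 42
```

In the AML setting, this property is what lets an analyst aggregate encrypted figures from several banks without any bank revealing its raw numbers.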

The SBA did make efforts to evaluate AI tools to detect fraud in PPP loans, looking to certain AI-powered fintech lenders. Nevertheless, the small business loan program was still rife with fraud. (In fact, some of the attention regarding fraud concerns has centered on loans processed by fintech firms.12) Several studies show that effective use of machine learning in credit decisioning can more easily detect when, for example, loan applications are submitted by fake entities.

One of the biggest fraud threats facing financial institutions is the use of synthetic identities by bad actors. These are created by combining real customer information with fake data in a series of steps that can fool normal detection systems but can often be caught by regtech analysis using more data and machine learning.

Many regtech solutions for fighting money laundering grew out of technology for identifying fraud, which has generally been more advanced. This may be because the industry has an enormous financial interest in preventing fraud losses. It may also reflect the fact that, in fraud, firms are usually dealing with the certainty of a problem, whereas in AML, they usually never know whether the Suspicious Activity Reports they file with FinCEN lead to something useful. These factors make it all the more important to equip banks and their regulators with tools that can more easily, and less expensively, detect patterns of crime.

U.S. consumer protection law bans Unfair and Deceptive Acts and Practices (UDAP), both in the financial sector and overall, and adds the criterion of abusive activity for purposes of enforcement by the Consumer Financial Protection Bureau (UDAAP). However, enforcement of subjective standards like unfairness and deception is challenging, often hampered by the difficulty of detecting and analyzing patterns of potentially illegal behavior. As with discrimination, UDAAP enforcement relies on considerable subjective judgment in distinguishing activities that are against the law from more benign patterns. This also makes compliance difficult. AI-based regtech can bring to bear the power of more data and AI analytical tools to solve these challenges, allowing regulators to detect and prove violations more easily. It might also enable them to issue more clear and concrete guidanceincluding more sophisticated standards on statistical modelingto help industry avoid discrimination and being responsible for UDAAPs.

There is a growing recognition among advocates that full financial inclusion, especially for emerging markets, requires greatly expanded use of digital technology. Access to cell phones has, in effect, put a bank branch in the hands of two-thirds of the world's adults. This unprecedented progress has, in turn, highlighted barriers to further success, most of which could be solved or ameliorated with better data and AI.

One is the problem of AML de-risking. As noted above, banks must follow Know-Your-Customer (KYC) rules before accepting new customers, a process that includes verifying the person's identity. In many developing countries, poor people, and particularly women, lack formal identity papers like birth certificates and driver's licenses, effectively excluding them from access to the formal financial system.13 In some parts of the world, the regulatory pressure on banks to manage risk associated with taking on new customers has resulted in whole sectors, and in some countries the entire population, being cut off from banking services.14 In reality, these markets include millions of consumers who would be well-suited to opening an account and do not present much risk at all. Banks and regulators struggle with how to distinguish high-risk individuals from those who are low risk. A great deal of work is underway in various countries to solve this problem more fully with AI, through the use of digital identity mechanisms that can authenticate a person's identity via their digital footprints.

A related challenge is that expanded financial inclusion has produced increased need for better consumer protection. This is especially important for people who are brought into the financial system by inclusion strategies and who may lack prior financial background and literacy, making them vulnerable to predatory practices, cyber scams, and other risks. Regulators are using AI chatbots equipped with NLP to intake and analyze consumer complaints at scale and to crawl the web for signs of fraudulent activity.
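As a hypothetical stand-in for the NLP-driven complaint intake described above, a crude keyword triage illustrates the idea; the categories and keywords below are invented, and a real system would use a trained language model rather than string counts.

```python
# Invented complaint categories and trigger words, for illustration only.
CATEGORIES = {
    "fraud": ["unauthorized", "scam", "stolen", "phishing"],
    "fees": ["fee", "charge", "overdraft"],
    "access": ["locked", "blocked", "cannot log in"],
}

def triage(complaint):
    """Route a complaint to the category with the most keyword hits."""
    text = complaint.lower()
    scores = {cat: sum(text.count(w) for w in words)
              for cat, words in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(triage("There is an unauthorized charge on my account, I think a scam"))
```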

One example is the RegTech for Regulators Accelerator (R2A) launched in 2016 with backing from the Bill & Melinda Gates Foundation, the Omidyar Network, and USAID.15 It focuses on designing regulatory infrastructure in two countries, the Philippines and Mexico. Emphasizing the need for consumers to access services through their cell phone, the project introduced AML reporting procedures and chatbots through which consumers could report complaints about digital financial products directly to regulators.

Importantly, regtech innovation in the developing world often exceeds that in the major advanced economies. One reason is that many emerging countries never built the complex regulatory infrastructure that is commonplace today in regions like the U.S., Canada, and Europe. This creates an opportunity to start with a clean slate, using today's best technology rather than layering new requirements on top of yesterday's systems.

Perhaps AI's greatest financial inclusion promise lies in the emergence of data-centered credit underwriting techniques that evaluate loan applications. Traditional credit underwriting has relied heavily on a narrow set of data, especially the individual's income and credit history as reported to the major credit reporting agencies, because this information is easily available to lenders.5 Credit scores are accurate in predicting default risk among people with good FICO scores (and low risks of default). However, those traditional underwriting techniques skew toward excluding some people who could repay a loan but have a thin credit file (and hence a lower or no credit score) or a complicated financial situation that is harder to underwrite.

AI underwriting is beginning to be used by lenders, especially fintechs. AI is also increasingly being used by financial firms as a regtech tool to check that the main underwriting process complies with fair-lending requirements. A third process, much less developed, is the potential for the same technologies to be used by regulators to check for discrimination by lenders, including structural bias and unintentional exclusion of people who could actually repay a loan. Structural biases often lead to disparate impact outcomes. In these cases, regulators assert that a lending policy was discriminatory on the basis of race, gender, or other prohibited factors, not because of intent but because a specific class of consumers endured negative outcomes. Because disparate impact is a legal standard16 and violations of these laws create liability for lenders, these claims may also be made by plaintiffs representing people who argue they have been wronged.
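One disparate-impact screen that regulators could readily automate is the adverse impact ratio, conventionally assessed against the "four-fifths rule": a protected group's approval rate below 80% of the reference group's is a red flag. The decisions below are fabricated for illustration.

```python
def approval_rate(decisions):
    """Share of 1s (approvals) in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Protected-group approval rate over reference-group rate;
    values below 0.8 conventionally trigger scrutiny."""
    return approval_rate(protected) / approval_rate(reference)

reference_group = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 80% approved
protected_group = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = adverse_impact_ratio(protected_group, reference_group)
print(round(ratio, 2))  # 0.5 -> well below the 0.8 threshold
```

A ratio this low does not prove discrimination by itself, but it identifies where regulators would dig into the underlying model and data.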

Research conducted by FinRegLab and others is exploring the potential for AI-based underwriting to make credit decisions more inclusive with little or no loss of credit quality, and possibly even with gains in loan performance. At the same time, there is clearly risk that new technologies could exacerbate bias and unfair practices if not properly designed, which will be discussed below.

In March 2022, the Securities and Exchange Commission proposed rules for requiring public companies to disclose risks relating to climate change.17 The effectiveness of such a mandate will inevitably be limited by the fact that climate impacts are notoriously difficult to track and measure. The only feasible way to solve this will be by gathering more information and analyzing it with AI techniques that can combine vast sets of data about carbon emissions and metrics, interrelationships between business entities, and much more.

The potential benefits of AI are enormous, but so are the risks. If regulators mis-design their own AI tools, and/or if they allow industry to do so, these technologies will make the world worse rather than better. Some of the key challenges are:

Explainability: Regulators exist to fulfill mandates that they oversee risk and compliance in the financial sector. They cannot, will not, and should not hand their role over to machines without having certainty that the technology tools are doing it right. They will need methods either for making AI's decisions understandable to humans or for having complete confidence in the design of tech-based systems. These systems will need to be fully auditable.

Bias: There are very good reasons to fear that machines will increase rather than decrease bias. Technology is amoral. AI learns without the constraints of ethical or legal considerations, unless such constraints are programmed into it with great sophistication. In 2016, Microsoft introduced an AI-driven chatbot called Tay on social media. The company withdrew the initiative in less than 24 hours because interacting with Twitter users had turned the bot into a racist jerk. People sometimes point to the analogy of a self-driving vehicle. If its AI is designed to minimize the time elapsed to travel from point A to point B, the car or truck will go to its destination as fast as possible. However, it could also run traffic lights, travel the wrong way on one-way streets, and hit vehicles or mow down pedestrians without compunction. Therefore, it must be programmed to achieve its goal within the rules of the road.

In credit, there is a high likelihood that poorly designed AIs, with their massive search and learning power, could seize upon proxies for factors such as race and gender, even when those criteria are explicitly banned from consideration. There is also great concern that AIs will teach themselves to penalize applicants for factors that policymakers do not want considered. Some examples point to AIs calculating a loan applicant's financial resilience using factors that exist because the applicant was subjected to bias in other aspects of her or his life. Such treatment can compound rather than reduce bias on the basis of race, gender, and other protected factors. Policymakers will need to decide what kinds of data or analytics are off-limits.

One solution to the bias problem may be use of adversarial AIs. With this concept, the firm or regulator would use one AI optimized for an underlying goal or functionsuch as combatting credit risk, fraud, or money launderingand would use another separate AI optimized to detect bias in the decisions in the first one. Humans could resolve the conflicts and might, over time, gain the knowledge and confidence to develop a tie-breaking AI.
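The adversarial idea can be sketched in miniature: if even a trivial "adversary" can recover a protected attribute from the primary model's decisions, those decisions leak the attribute. The data below is fabricated, and the adversary here is just the best decision-to-attribute mapping, a minimal stand-in for a trained second model.

```python
def adversary_accuracy(decisions, protected):
    """Best accuracy achievable by mapping each decision value to a
    protected-attribute guess. 0.5 on balanced data means no leakage;
    values near 1.0 mean decisions effectively encode the attribute."""
    groups = {}
    for d, p in zip(decisions, protected):
        groups.setdefault(d, []).append(p)
    correct = sum(max(g.count(0), g.count(1)) for g in groups.values())
    return correct / len(decisions)

decisions = [1, 1, 1, 0, 0, 0, 1, 0]
protected = [0, 0, 0, 1, 1, 1, 0, 1]  # perfectly aligned: full leakage

print(adversary_accuracy(decisions, protected))  # 1.0
```

In the scheme described above, a high adversary score would be the trigger for humans to intervene and adjust the primary model.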

Data quality: As noted earlier, AI and data management are inextricably intertwined, so that acceptable AI usage will not emerge unless regulators and others solve the many related challenges regarding data use. As with any kind of decision making, AI-based choices are only as good as the information on which they rely.

Integrating AI into regulation is a big challenge that brings substantial risks, but the cost of sticking with largely analog systems is greater.

Accordingly, regulators face tremendous challenges regarding how to receive and clean data. AI can deal most easily with structured data, which arrives in organized formats and fields that the algorithm easily recognizes and puts to use. With NLP tools, AI can also make sense of unstructured data. Being sure, however, that the AI is using accurate data and understanding it requires a great deal of work. Uses of AI in finance will require ironclad methods for ensuring that data is collected and cleaned properly before it undergoes algorithmic analysis. The old statistics maxim "garbage in, garbage out" becomes even more urgent when the statistical analysis will be done by machines using methods that their human minders cannot fully grasp.

It is critical that policymakers focus on what is at stake. AI that might be good at, say, recommending a movie to watch on Netflix will not suffice for deciding whether to approve someone for a mortgage or a small-business loan or let them open a bank account.

Data protection and privacy: Widespread use of AI will also necessitate deep policy work on the ethics and practicalities of using data. What kinds of information should be used and what should be off-limits? How will it be protected from security risks and government misuse? Should people have the right to force-remove past online data, and should companies' encryption techniques be impenetrable even by the government?

Privacy-enhancing technologies may be able to mitigate these risks, but the dangers will require permanent vigilance. The challenge will spike even higher with the approach of quantum computing, which has the power to break the encryption techniques used to keep data safe.

Model Risk Management (MRM): Mathematical models are already widely used in financial services and financial regulation. They raise challenges that will only grow as AI becomes more widely employed, particularly as AI is placed in the hands of people who do not understand how it makes decisions. Regulators and industry alike will need clear governance protocols to ensure that these AI tools are frequently retested, built on sufficiently robust and accurate data, and kept up to date in both their data and technical foundations.
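
One concrete piece of the retesting such governance calls for is checking whether the data a model sees in production still resembles the data it was validated on. The sketch below computes the population stability index (PSI), a drift measure commonly used in model risk management; the simulated score distributions and the alert thresholds quoted in the docstring are illustrative.

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population stability index between a baseline sample and current data.
    Common rules of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the baseline range
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    cur_pct = np.histogram(current, edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) on empty bins
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - base_pct) * np.log(cur_pct / base_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores the model was validated on
shifted = rng.normal(0.5, 1.0, 10_000)   # production scores after a shift

stable_psi = psi(baseline, baseline)
drift_psi = psi(baseline, shifted)
print(f"no drift: {stable_psi:.3f}, shifted: {drift_psi:.3f}")
```

A monitoring job would run a check like this on a schedule and escalate any model whose inputs or scores cross the drift threshold for human review and revalidation.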

Redesigning financial regulation to catch up to the acceleration of AI and other industry innovation is somewhat analogous to the shift in cameras from analog to digital at the turn of the millennium. An analog camera produces an image in a form that is cumbersome, requiring expert (and expensive) manipulation to edit photos. Improving the process of taking pictures with 35-millimeter film hits a ceiling at a certain point. By comparison, the digital or smartphone camera was a whole new paradigm, converting images into digital information that could be copied, printed, subjected to artificial intelligence for archiving and other purposes, and incorporated into other media. The digital camera was not an evolution of the analog version that preceded it. It was entirely different technology.

Similarly, current regulatory technologies are built on top of an underlying system of information and processes that were all originally designed on paper. As a result, they are built around the constraining assumptions of the analog era, namely that information is scarce and expensive to obtain, and so is computing power.

To undertake a more dramatic shift to a digitally native design, regulators should create new taxonomies of their requirements (which some agencies are already developing) that can be mapped to AI-powered machines. They should also develop comprehensive education programs to train their personnel in technology knowledge and skills, including baseline training on core topics, of which AI is a single, integral part. Other key big data issues include the Internet of Things, cloud computing, open source code, blockchains and distributed ledger technology, cryptography, quantum computing, Application Programming Interfaces (APIs), robotic process automation (RPA), privacy-enhancing technologies (PETs), Software as a Service (SaaS), agile workflow, and human-centered design.
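
To make the "taxonomies mapped to AI-powered machines" idea concrete, here is a toy sketch of one regulatory requirement expressed as structured data plus an executable check, instead of prose on paper. The rule ID, field names, and threshold are invented for illustration.

```python
import operator

# Hypothetical machine-readable requirement: data, not prose.
REQUIREMENT = {
    "id": "CAP-4.1",
    "description": "Tier 1 capital must be at least 6% of risk-weighted assets",
    "metric": "tier1_ratio",
    "operator": ">=",
    "threshold": 0.06,
}

OPS = {">=": operator.ge, "<=": operator.le}

def check(filing, requirement):
    """Evaluate one structured filing against one machine-readable requirement."""
    value = filing[requirement["metric"]]
    return OPS[requirement["operator"]](value, requirement["threshold"])

print(check({"tier1_ratio": 0.081}, REQUIREMENT))  # True: compliant
print(check({"tier1_ratio": 0.052}, REQUIREMENT))  # False: breach
```

Once requirements live in this form, a supervisor's systems can evaluate every filing automatically and route only the breaches to human examiners.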

These are big challenges that bring substantial risks, but the cost of sticking with largely analog systems is greater. Personnel may fear that such an overhaul could result in machines taking their jobs, or that machines will make catastrophic errors, resulting in financial mishaps. On the first fear, robotics and AI can in fact empower human beings to do their jobs better, by reducing vast amounts of routine work and freeing up people to use their uniquely human skills on high-value objectives. On the second, agencies should build cultures grounded in an understanding that humans should not cede significant decision-making to machines. Rather, experts should use technology to help prioritize their own efforts and enhance their work.

Data is the new oil, not only in its value but in its impact: like oil, the digitization of data can solve some problems and cause others. The key to achieving optimal outcomes is to use both data and AI in thoughtful ways: carefully designing new systems to prevent harm, while seizing on AI's ability to analyze volumes of information that would overwhelm traditional methods of analysis. A digitally robust regulatory system with AI at its core can equip regulators to solve real-world problems, while showcasing how technology can be used for good in the financial system and beyond.

The author serves on the board of directors of FinRegLab, a nonprofit organization whose research includes a focus on use of AI in financial regulatory matters. She did not receive financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. Other than the aforementioned, the author is not currently an officer, director, or board member of any organization with a financial or political interest in this article.

Read more from the original source:

The case for placing AI at the heart of digitally robust financial regulation - Brookings Institution


Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action – Gibson Dunn

Posted: at 3:51 am

May 23, 2022

Click for PDF

On May 12, 2022, more than six months after the Equal Employment Opportunity Commission (EEOC) announced its Initiative on Artificial Intelligence and Algorithmic Fairness,[1] the agency issued its first guidance regarding employers' use of Artificial Intelligence (AI).[2]

The EEOC's guidance outlines best practices and key considerations that, in the EEOC's view, help ensure that employment tools do not disadvantage applicants or employees with disabilities in violation of the Americans with Disabilities Act (ADA). Notably, the guidance came just one week after the EEOC filed a complaint against a software company alleging intentional discrimination through applicant software under the Age Discrimination in Employment Act (ADEA), potentially signaling more AI and algorithmic-based enforcement actions to come.

The EEOC's AI Guidance

The EEOC's non-binding, technical guidance provides suggested guardrails for employers on the use of AI technologies in their hiring and workforce management systems.

Broad Scope. The EEOC's guidance encompasses a broad range of technology that incorporates algorithmic decision-making, including automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.[3] As an example of such software frequently used by employers, the EEOC identifies testing software that provides algorithmically generated, personality-based "job fit" or "cultural fit" scores for applicants or employees.

Responsibility for Vendor Technology. Even if an outside vendor designs or administers the AI technology, the EEOC's guidance suggests that employers will be held responsible under the ADA if the use of the tool results in discrimination against individuals with disabilities. Specifically, the guidance states that employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer's behalf.[4] The guidance further states that an employer may also be liable if a vendor administering the tool on the employer's behalf fails to provide a required accommodation.

Common Ways AI Might Violate the ADA. The EEOC's guidance outlines the following three ways in which an employer's tools may, in the EEOC's view, be found to violate the ADA, although the list is non-exhaustive and intended to be illustrative:

Tips for Avoiding Pitfalls. In addition to illustrating the agency's view of how employers may run afoul of the ADA through their use of AI and algorithmic decision-making technology, the EEOC's guidance provides several practical tips for how employers may reduce the risk of liability. For example:

Enforcement Action

As previewed above, on May 5, 2022, just one week before releasing its guidance, the EEOC filed a complaint in the Eastern District of New York alleging that iTutorGroup, Inc., a software company providing online English-language tutoring to adults and children in China, violated the ADEA.[11]

The complaint alleges that a class of plaintiffs were denied employment as tutors because of their age. Specifically, the EEOC asserts that the company's application software automatically denied hundreds of older, qualified applicants by soliciting applicant birthdates and automatically rejecting female applicants age 55 or older and male applicants age 60 or older. The complaint alleges that the charging party was rejected when she used her real birthdate because she was over the age of 55, but was offered an interview when she used a more recent date of birth with an otherwise identical application. The EEOC seeks a range of damages including back wages, liquidated damages, a permanent injunction enjoining the challenged hiring practice, and the implementation of policies, practices, and programs providing equal employment opportunities for individuals 40 years of age and older. iTutorGroup has not yet filed a response to the complaint.

Takeaways

Given the EEOC's enforcement action and recent guidance, employers should evaluate their current and contemplated AI tools for potential risk. In addition to consulting with vendors who design or administer these tools to understand the traits being measured and types of information gathered, employers might also consider reviewing their accommodations processes for both applicants and employees.

___________________________

[1] EEOC, EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness (Oct. 28, 2021), available at https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness.

[2] EEOC, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), available at https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence?utm_content=&utm_medium=email&utm_name=&utm_source=govdelivery&utm_term [hereinafter EEOC AI Guidance].

[3] Id.

[4] Id. at 3, 7.

[5] Id. at 11.

[6] Id. at 13.

[7] Id. at 14.

[8] For more information, please see Gibson Dunns Client Alert, New York City Enacts Law Restricting Use of Artificial Intelligence in Employment Decisions.

[9] EEOC AI Guidance at 14.

[10] Id.

[11] EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565 (E.D.N.Y. May 5, 2022).

The following Gibson Dunn attorneys assisted in preparing this client update: Harris Mufson, Danielle Moss, Megan Cooney, and Emily Maxim Lamm.

Gibson Dunn's lawyers are available to assist in addressing any questions you may have regarding these developments. To learn more about these issues, please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm's Labor and Employment practice group, or the following:

Harris M. Mufson New York (+1 212-351-3805, hmufson@gibsondunn.com)

Danielle J. Moss New York (+1 212-351-6338, dmoss@gibsondunn.com)

Megan Cooney Orange County (+1 949-451-4087, mcooney@gibsondunn.com)

Jason C. Schwartz Co-Chair, Labor & Employment Group, Washington, D.C. (+1 202-955-8242, jschwartz@gibsondunn.com)

Katherine V.A. Smith Co-Chair, Labor & Employment Group, Los Angeles (+1 213-229-7107, ksmith@gibsondunn.com)

© 2022 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

Go here to see the original:

Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action - Gibson Dunn

