
Category Archives: Ai

AI Weekly: Recognition of bias in AI continues to grow – VentureBeat

Posted: December 5, 2021 at 12:02 pm

Hear from CIOs, CTOs, and other C-level and senior execs on data and AI strategies at the Future of Work Summit this January 12, 2022. Learn more

This week, the Partnership on AI (PAI), a nonprofit committed to responsible AI use, released a paper addressing how technology, particularly AI, can accentuate various forms of bias. While most proposals to mitigate algorithmic discrimination require the collection of data on so-called sensitive attributes (which usually include things like race, gender, sexuality, and nationality), the coauthors of the PAI report argue that these efforts can actually cause harm to marginalized people and groups. Rather than trying to overcome historical patterns of discrimination and social inequity with more data and clever algorithms, they say, the value assumptions and trade-offs associated with the use of demographic data must be acknowledged.

"Harmful biases have been found in algorithmic decision-making systems in contexts such as health care, hiring, criminal justice, and education, prompting increasing social concern regarding the impact these systems are having on the wellbeing and livelihood of individuals and groups across society," the coauthors of the report write. "Many current algorithmic fairness techniques [propose] access to data on a sensitive attribute or protected category (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. [But] these demographic-based algorithmic fairness techniques [remove] broader questions of governance and politics from the equation."
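To make the group comparison concrete, here is a minimal sketch of the kind of measurement such fairness techniques perform: per-group selection rates and a disparate-impact ratio. The data and function names are illustrative only, not from the PAI paper.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group positive-decision rates from (group, decision) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Synthetic example: a hiring model's decisions tagged by a sensitive attribute.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)   # group A is selected at twice group B's rate
print(ratio)
```

Note that computing these rates at all requires labeled demographic data, which is exactly the trade-off the PAI report scrutinizes.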

The PAI paper's publication comes as organizations take a broader and more critical view of AI technologies, in light of wrongful arrests, racist recidivism, sexist recruitment, and erroneous grades perpetuated by AI. Yesterday, AI ethicist Timnit Gebru, who was controversially ejected from Google over a study examining the impacts of large language models, launched the Distributed Artificial Intelligence Research Institute (DAIR), which aims to ask questions about responsible use of AI and recruit researchers from parts of the world rarely represented in the tech industry. Last week, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) approved a series of recommendations for AI ethics, including regular impact assessments and enforcement mechanisms to protect human rights. Meanwhile, New York University's AI Now Institute, the Algorithmic Justice League, and Data for Black Lives are studying the impacts and applications of AI algorithms, as are Khipu, Black in AI, Data Science Africa, Masakhane, and Deep Learning Indaba.

Legislators, too, are taking a harder look at AI systems and their potential to harm. The U.K.'s Centre for Data Ethics and Innovation (CDEI) recently recommended that public sector organizations using algorithms be mandated to publish information about how the algorithms are being applied, including the level of human oversight. The European Union has proposed regulations that would ban the use of biometric identification systems in public and prohibit AI in social credit scoring across the bloc's 27 member states. Even China, which is engaged in several widespread, AI-powered surveillance initiatives, has tightened its oversight of the algorithms that companies use to drive their business.

PAI's work cautions that efforts to mitigate bias in AI algorithms will inevitably encounter roadblocks, however, due to the nature of algorithmic decision-making. If a system optimizes for a poorly defined goal, it is likely to reproduce historical inequity, possibly under the guise of objectivity. Attempting to ignore societal differences across demographic groups can reinforce systems of oppression, because demographic data coded in datasets has an enormous impact on the representation of marginalized peoples. But deciding how to classify demographic data is an ongoing challenge, as demographic categories continue to shift and change over time.

"Collecting sensitive data consensually requires clear, specific, and limited use as well as strong security and protection following collection. Current consent practices are not meeting this standard," the PAI report coauthors wrote. "Demographic data collection efforts can reinforce oppressive norms and the delegitimization of disenfranchised groups ... Attempts to be neutral or objective often have the effect of reinforcing the status quo."

At a time when relatively few major research papers consider the negative impacts of AI, leading ethicists are calling on practitioners to pinpoint biases early in the development process. For example, a program at Stanford, the Ethics and Society Review (ESR), requires AI researchers to evaluate their grant proposals for any negative impacts. NeurIPS, one of the largest machine learning conferences in the world, mandates that coauthors who submit papers state the potential broader impact of their work on society. And in a whitepaper published by the U.S. National Institute of Standards and Technology (NIST), the coauthors advocate for "cultural effective challenge," a practice that seeks to create an environment where developers can question steps in engineering to help identify problems.

"Requiring AI practitioners to defend their techniques can incentivize new ways of thinking and help create change in approaches by organizations and industries," the NIST coauthors posit.

"An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all, in the context for which they are intended," NIST scientist Reva Schwartz, a coauthor of the NIST paper, wrote. "All these factors can allow bias to go undetected ... [Because] we know that bias is prevalent throughout the AI lifecycle, [not] knowing where [a] model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital step."

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Read more:

AI Weekly: Recognition of bias in AI continues to grow - VentureBeat

Posted in Ai | Comments Off on AI Weekly: Recognition of bias in AI continues to grow – VentureBeat

AI proves a dab hand at pure mathematics and protein hallucination – TechCrunch

Posted: at 12:02 pm

One of the reasons artificial intelligence is such an interesting field is that pretty much no one knows what it might turn out to be good at. Two papers by leading labs published in the journal Nature today show that machine learning can be applied to tasks as technically demanding as protein generation and as abstract as pure mathematics.

The protein thing may not sound like much of a surprise, given the recent commotion around AI's facility in protein folding as demonstrated by Google's DeepMind and the University of Washington's Baker Lab, not coincidentally also the labs behind the papers we're noting today.

The study from the Baker Lab shows that the model the researchers created to understand how protein sequences fold can be repurposed to essentially do the opposite: create a new sequence that meets certain parameters and acts as expected when tested in vitro.

This wasn't necessarily obvious: you might have an AI that's great at detecting boats in pictures but can't draw one, for instance, or an AI that translates Polish to English but not vice versa. So the discovery that an AI built to interpret the structure of proteins can also make new ones is an important one.

There has already been some work done in this direction by various labs, such as ProGen over at Salesforce Research. But Baker Lab's RoseTTAFold and DeepMind's AlphaFold are way out in front when it comes to accuracy in proteomic predictions, so it's good to know the systems can turn their expertise to creative endeavors.

Meanwhile, DeepMind captured the cover of Nature with a paper showing that AI can aid mathematicians in complex and abstract tasks. The results won't turn the math world on its head, but they are truly novel and truly due to the help of a machine learning model, something that has never happened before.

The idea here relies on the fact that mathematics is largely the study of relationships and patterns: as one thing increases, another decreases, say, or as the faces of a polyhedron increase, so too does the number of its vertices. Because these things happen according to systems, mathematicians can arrive at conjectures about the exact relationship between those things.

Some of these ideas are simple, like the geometry we learned in grade school: it's a fundamental quality of triangles that their internal angles add up to 180 degrees, or that the sum of the squares of the two shorter sides of a right triangle is equal to the square of the hypotenuse. But what about for a 900-sided polyhedron in 8-dimensional space? Could you find the equivalent of a² + b² = c² for that?
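The faces-and-vertices relationship gestured at above is in fact pinned down, for convex polyhedra, by Euler's formula V - E + F = 2, and it is exactly the sort of pattern one can check by brute enumeration. A quick verification over the five Platonic solids:

```python
# Euler's formula for convex polyhedra: V - E + F = 2.
# (vertices, edges, faces) for the five Platonic solids.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in solids.items():
    assert v - e + f == 2, name
print("Euler's formula holds for all five Platonic solids")
```

Checking millions of examples this way is essentially what DeepMind's system automates, at a scale and in dimensions no human could survey by hand.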

An example of the relationship between two complex qualities of knots: their geometry and algebraic signature. Image Credits: DeepMind

Mathematicians do, but there are limits to the amount of such work they can do, simply because one must evaluate many examples before one can be sure that a quality observed is universal and not coincidental. It is here, as a labor-saving method, that DeepMind deployed its AI model.

"Computers have always been good at spewing out data at a scale that humans can't match, but what is different [here] is the ability of AI to pick out patterns in the data that would have been impossible to detect on a human scale," explained Oxford professor of mathematics Marcus du Sautoy in the DeepMind news release.

Now, the actual accomplishments made with the help of this AI system are miles above my head, but the mathematicians among our readers will surely understand the following, quoted from DeepMind:

Defying progress for nearly 40 years, the combinatorial invariance conjecture states that a relationship should exist between certain directed graphs and polynomials. Using ML techniques, we were able to gain confidence that such a relationship does indeed exist and to hypothesize that it might be related to structures known as broken dihedral intervals and extremal reflections. With this knowledge, Professor Williamson was able to conjecture a surprising and beautiful algorithm that would solve the combinatorial invariance conjecture.

Algebra, geometry, and quantum theory all share unique perspectives on [knots] and a long-standing mystery is how these different branches relate: for example, what does the geometry of the knot tell us about the algebra? We trained an ML model to discover such a pattern and surprisingly, this revealed that a particular algebraic quantity, the signature, was directly related to the geometry of the knot, which was not previously known or suggested by existing theory. By using attribution techniques from machine learning, we guided Professor Lackenby to discover a new quantity, which we call the natural slope, that hints at an important aspect of structure overlooked until now.

The conjectures were borne out with millions of examples, another advantage of computation: you can tell it to rigorously test your hypothesis without buying it pizza and coffee.

The DeepMind researchers and the professors mentioned above worked closely together to come up with these specific applications, so we're not looking at a universal pure-math helper or anything like that. But as Ruhr University Bochum's Christian Stump notes in the Nature summary of the article, the fact that it works at all is an important step toward such an idea.

"Neither result is necessarily out of reach for researchers in these areas, but both provide genuine insights that had not previously been found by specialists. The advance is therefore more than the outline of an abstract framework," he wrote. "Whether or not such an approach is widely applicable is yet to be determined, but Davies et al. provide a promising demonstration of how machine-learning tools can be used to support the creative process of mathematical research."

Read the original here:

AI proves a dab hand at pure mathematics and protein hallucination - TechCrunch

Posted in Ai | Comments Off on AI proves a dab hand at pure mathematics and protein hallucination – TechCrunch

Report: 95% of tech leaders say that AI will drive future innovation – VentureBeat

Posted: at 12:02 pm


According to a newly released survey from nonprofit technical organization IEEE, about one in five respondents say AI and machine learning (21%), cloud computing (20%), and 5G (17%) will be the most important technologies next year. The study examines the most important technologies in 2022, the industries expected to be most impacted by technology in the year ahead, and anticipated technology trends through the next decade.

What industries are expected to be most impacted by technology in the year ahead? Technology leaders surveyed cited manufacturing (25%), financial services (19%), health care (16%), and energy (13%) as industries poised for major disruption.

Regarding the key technology trends to expect through the next decade, an overwhelming majority (95%) agree, including 66% who strongly agree, that AI will drive the majority of innovation across nearly every industry sector in the next one to five years. Furthermore, 81% agree that, in the next five years, one-quarter of what they do will be enhanced by robots, and 77% agree that, in the same timeframe, robots will be deployed across their organization to enhance nearly every business function, from sales and human resources to marketing and IT. A majority of respondents (78%) agree that in the next 10 years, half or more of what they do will be enhanced by robots.

Above: Which technologies will be the most important in 2022? Among total respondents, top answers include AI and machine learning (21%), cloud computing (20%), and 5G (17%).

Image Credit: IEEE

IEEE surveyed 350 CIOs, CTOs, IT directors, and other technology leaders in the U.S., China, U.K., India, and Brazil at organizations with more than 1,000 employees across multiple industry sectors, including banking and financial services, consumer goods, education, electronics, engineering, energy, government, health care, insurance, retail, technology, and telecommunications. The surveys were conducted October 8 to 20, 2021.

Read the full report by IEEE.

Read the original post:

Report: 95% of tech leaders say that AI will drive future innovation - VentureBeat

Posted in Ai | Comments Off on Report: 95% of tech leaders say that AI will drive future innovation – VentureBeat

The movement to hold AI accountable gains more steam – Ars Technica

Posted: at 12:02 pm

MirageC | Getty Images

Algorithms play a growing role in our lives, even as their flaws are becoming more apparent: a Michigan man wrongly accused of fraud had to file for bankruptcy; automated screening tools disproportionately harm people of color who want to buy a home or rent an apartment; Black Facebook users were subjected to more abuse than white users. Other automated systems have improperly rated teachers, graded students, and flagged people with dark skin more often for cheating on tests.

Now, efforts are underway to better understand how AI works and hold users accountable. New York's City Council last month adopted a law requiring audits of algorithms used by employers in hiring or promotion. The law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity. Employers also must tell job applicants who live in New York when artificial intelligence plays a role in deciding who gets hired or promoted.

In Washington, DC, members of Congress are drafting a bill that would require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, or education, and report the findings to the Federal Trade Commission; three of the FTC's five members support stronger regulation of algorithms. An AI Bill of Rights proposed last month by the White House calls for disclosing when AI makes decisions that impact a person's civil rights, and it says AI systems should be carefully audited for accuracy and bias, among other things.

Elsewhere, European Union lawmakers are considering legislation requiring inspection of AI deemed high-risk and creating a public registry of high-risk systems. Countries including China, Canada, Germany, and the UK have also taken steps to regulate AI in recent years.

Julia Stoyanovich, an associate professor at New York University who served on the New York City Automated Decision Systems Task Force, says she and students recently examined a hiring tool and found it assigned people different personality scores based on the software program with which they created their résumé. Other studies have found that hiring algorithms favor applicants based on where they went to school, their accent, whether they wear glasses, or whether there's a bookshelf in the background.

Stoyanovich supports the disclosure requirement in the New York City law, but she says the auditing requirement is flawed because it only applies to discrimination based on gender or race. She says the algorithm that rated people based on the font in their résumé would pass muster under the law because it didn't discriminate on those grounds.

"Some of these tools are truly nonsensical," she says. "These are things we really should know as members of the public and just as people. All of us are going to apply for jobs at some point."

Some proponents of greater scrutiny favor mandatory audits of algorithms similar to the audits of companies' financials. Others prefer impact assessments akin to environmental impact reports. Both groups agree that the field desperately needs standards for how such reviews should be conducted and what they should include. Without standards, businesses could engage in ethics washing by arranging for favorable audits. Proponents say the reviews won't solve all problems associated with algorithms, but they would help hold the makers and users of AI legally accountable.

A forthcoming report by the Algorithmic Justice League (AJL), a private nonprofit, recommends requiring disclosure when an AI model is used and creating a public repository of incidents where AI caused harm. The repository could help auditors spot potential problems with algorithms and help regulators investigate or fine repeat offenders. AJL cofounder Joy Buolamwini coauthored an influential 2018 audit that found facial-recognition algorithms work best on white men and worst on women with dark skin.

More here:

The movement to hold AI accountable gains more steam - Ars Technica

Posted in Ai | Comments Off on The movement to hold AI accountable gains more steam – Ars Technica

How AI and ML can thwart a cybersecurity threat no one talks about – VentureBeat

Posted: at 12:02 pm


Ransomware attackers rely on USB drives to deliver malware, jumping the air gap that industrial distribution, manufacturing, and utility operations rely on as their first line of defense against cyberattacks. Seventy-nine percent of USB attacks can potentially disrupt the operational technologies (OT) that power industrial processing plants, according to Honeywell's Industrial Cybersecurity USB Threat Report 2021.

The study finds the incidence of malware-based USB attacks is one of the fastest-growing and least detectable threat vectors that process-based industries such as public utilities face today, as the Colonial Pipeline and JBS Foods attacks illustrate. Utilities are also being targeted by ransomware attackers, as the thwarted attacks on water processing plants in Florida and Northern California, aimed at contaminating water supplies, illustrate. According to Check Point Software Technologies' ThreatCloud database, U.S. utilities have been attacked 300 times every week, a 50% increase in just two months.

Ransomware attackers have accelerated their process of identifying the weakest targets and quickly capitalizing on them by exfiltrating data, then threatening to release it to the public unless the ransom is paid. Process manufacturing plants and utilities globally run on Industrial Control Systems (ICS), among the most porous and least secure enterprise systems. Because ICS are easily compromised, they are a prime target for ransomware.

A third of ICS computers were attacked in the first half of 2021, according to Kaspersky's ICS CERT Report. Kaspersky states that the number of ICS vulnerabilities reported in the first half of 2021 surged 41%, with most (71%) classified as high severity or critical. Attacks on the manufacturing industry increased nearly 300% in 2020 over the volume from the previous year, accounting for 22% of all attacks, according to the NTT 2021 Global Threat Intelligence Report (GTIR). The first half of 2021 was the biggest test of industrial cybersecurity in history. Sixty-three percent of all ICS-related vulnerabilities cause processing plants to lose control of operations, and 71% can obfuscate or block the view of operations immediately.

A SANS 2021 Survey: OT/ICS Cybersecurity finds that 59% of organizations say their greatest security challenge is integrating legacy OT systems and technologies with modern IT systems. The gap is growing as modern IT systems become more cloud- and API-based, making them more challenging to integrate with legacy OT technologies.

The SolarWinds attack showed how Advanced Persistent Threat (APT)-based breaches could modify legitimate executable files and have them propagate across software supply chains undetected. That's the same goal ransomware attackers are trying to accomplish by using USB drives to deliver modified executable files throughout an ICS and infect the entire plant, so the victim has no choice but to pay the ransom.

USB-based threats rose from 19% of all ICS cyberattacks in 2019 to just over 37% in 2020, the second consecutive year of significant growth, according to Honeywell's report.

Ransomware attackers prioritize USBs as the primary attack vector and delivery mechanism for process manufacturing and utility targets. Over one in three malware attacks (37%) are purpose-built to be delivered using a USB device.

It's troubling how advanced ransomware code delivered via USB has become. Executable code is designed to impersonate legitimate executables while also having the capability to provide illegal remote access. Honeywell found that 51% can successfully establish remote access from a production facility to a remote location. Over half of breach attempts (52%) in 2020 were also wormable. Ransomware attackers are using SolarWinds as a model to penetrate deep into ICS systems and capture privileged access credentials, exfiltrate data, and, in some cases, establish command and control.

Honeywell's data shows that process manufacturers and utilities face a major challenge staying at parity with ransomware attackers, APTs, and state-sponsored cybercriminal organizations intent on taking control of an entire plant. The flex point in the balance of power is how USB-based ransomware attackers cross the air gaps in process manufacturing and utility companies. Utilities have relied on air gaps for decades, and they are a common design attribute in legacy ICS configurations. Infected USB drives used throughout a plant will cross air gaps without plant operators knowing infected code is on the drives they're using. In the plants and utilities that integrate OT and IT systems on a single platform, USB-delivered ransomware traverses these systems faster and leads to more devices, files, and ancillary systems being infected.

One of legacy ICS's greatest weaknesses when it comes to cybersecurity is that these systems aren't self-learning and weren't designed to capture threat data. Instead, they're real-time process and production monitoring systems that provide closed-loop visibility and control for manufacturing and process engineering.

Given their system limitations, it's not surprising that 46% of known OT cyberthreats are poorly detected or not detected at all. In addition, Honeywell finds that 11% are never detected, and most detection engines and techniques catch just 35% of all attempted breaches.

Of the process manufacturers and utilities taking a zero-trust approach to solving their security challenges, the most effective ones share several common characteristics. They're using AI and machine learning (ML) technologies to create and continuously fine-tune anomaly detection rules and analytics of events, so they can identify and respond to incidents and avert attacks. They're also using ML to distinguish true incidents from false alarms, creating more precise anomaly detection rules and event analytics to respond to and mitigate incidents. AI- and ML-based techniques are also powering contribution analytics that improve detection efficacy by prioritizing noise reduction over signal amplification. The goal is to reduce noise while improving signal detection through contextual data workflows.
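As a rough illustration of the anomaly detection idea (a toy statistical stand-in, not Honeywell's or any vendor's actual method), a detector can flag event counts that deviate sharply from the baseline. The event data here is synthetic:

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean -- a toy stand-in for the ML-based anomaly
    detection described above."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical hourly counts of USB-device events on a plant network:
events = [4, 5, 3, 6, 4, 5, 4, 90, 5, 4]
print(zscore_anomalies(events))  # the spike at index 7 is flagged
```

Real deployments replace the z-score with learned models and feed the flagged events into contextual workflows, which is where the noise-reduction-over-amplification trade-off mentioned above comes in.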

Cybersecurity vendors with deep AI and ML expertise need to step up the pace of innovation and take on the challenge of identifying potential threats, then shutting them down. Improving detection efficacy by interpreting data patterns and insights is key. Honeywell's study shows just how porous ICS systems are, and how the gap between legacy OT technologies and modern IT systems adds to the risk of a cyberattack. ICS systems are designed for process and production monitoring with closed-loop visibility and control. That's why a zero trust-based approach that treats every endpoint, threat surface, and identity as the security perimeter needs to advance faster than ransomware attackers' ability to impersonate legitimate files and launch attacks.

See original here:

How AI and ML can thwart a cybersecurity threat no one talks about - VentureBeat

Posted in Ai | Comments Off on How AI and ML can thwart a cybersecurity threat no one talks about – VentureBeat

The quest to make an AI that can play competitive Pokémon – The Verge

Posted: at 12:02 pm

An AI can beat a chess grandmaster. An AI can become the StarCraft esports champion. But creating an AI that could play Pokémon at the competitive level has been a more elusive problem.

Thanks to the variety of monsters, stats, moves, and items, a Pokémon battle has hundreds of thousands of factors for any player or machine to consider. But that hasn't stopped some people from trying. Most recently, Future Sight AI, created by computer scientist Albert III, successfully made it into the top 5 percent of the competitive ladder.

Albert posted a video explaining how it all works, but to summarize, the bot takes in all the information it can about the current state of the game, extrapolates the possibilities for all the turns it could take, looks a couple of turns ahead to how these would play out, and then chooses the option that can lead to the highest number of best outcomes. By doing all of that within 15 seconds, turn after turn, it can beat all but the very best human players.
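In outline, that turn-by-turn lookahead is a depth-limited game-tree search. Here is a generic sketch of the pattern; the callbacks and the toy game are hypothetical placeholders, not Future Sight's actual code, and the real bot's state evaluation is far richer:

```python
def best_move(state, depth, moves, apply_move, evaluate, maximizing=True):
    """Depth-limited minimax: extrapolate a few turns ahead and pick the move
    whose worst-case outcome is best. `moves`, `apply_move`, and `evaluate`
    are game-specific callbacks supplied by the caller."""
    def search(s, d, maxing):
        opts = moves(s, maxing)
        if d == 0 or not opts:
            return evaluate(s)
        scores = (search(apply_move(s, m, maxing), d - 1, not maxing) for m in opts)
        return max(scores) if maxing else min(scores)
    return max(moves(state, maximizing),
               key=lambda m: search(apply_move(state, m, maximizing),
                                    depth - 1, not maximizing))

# Toy game: the maximizer adds 1-3 to a running total, the minimizer
# subtracts 1-3; with two turns of lookahead, adding 3 is optimal.
moves = lambda s, maxing: [1, 2, 3]
apply_move = lambda s, m, maxing: s + m if maxing else s - m
evaluate = lambda s: s
choice = best_move(0, 2, moves, apply_move, evaluate)
print(choice)  # 3
```

Pokémon adds hidden information and simultaneous move selection on top of this, which is part of why the problem has stayed hard.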

That's pretty impressive, especially when you consider that Albert had almost no experience with artificial intelligence or other major aspects of the program before he started working on it. "I took classes in college about machine learning, [but] the real question is: was I paying attention?" he laughs. "The main software that it runs on is called Node.js. I hadn't touched that at all before I started this project."

"Even though computer science is my day job, it's something that I love so much that I can't help but do it in my free time, too," he says. That passion, combined with pandemic boredom, propelled him to look into an idea that was first inspired by his interest in basketball. "[Some websites] would do this thing where you'd be able to watch a game and see the team's current chance of winning, and I thought about doing that for Pokémon," he says. "Then just kind of one thing led to another and then I ended up with an AI on my hands."

One thing leading to another is a pretty good summary of Albert's work on Future Sight AI. He says he wanted to learn new skills and simply broke them down into small enough tasks until he was able to create his vision. "This is such a bad reference, but there's that song in Frozen 2, called 'The Next Right [Thing]'. It's just that. Just keep doing that until you get somewhere," he says. Nowadays, for example, he knows Node.js so well that he can use it in projects at his day job, too.

His step-by-step approach means that he actually wasn't aware of previous attempts to make similar AIs. Earlier projects are not as well documented as Albert's, though a few, with varying levels of success, gained some attention within the community.

An early example was Technical Machine, first created in 2010. Though it was updated through 2019, Technical Machine only ever fully supported Pokémon up to Generation 4 and did not create its own teams, one of the key features of Future Sight. Additionally, at the time of its release, the competitive ladder base was not established in the same way, so it's difficult to tell how successful Technical Machine was overall. One Reddit comment, however, stated that Technical Machine at its smartest was still leagues worse than a normal player.

Another example was posted on Reddit in 2015 by a user who went by onmabd. Comments indicated that it was one of the stronger bots to date. The competitive ladder gives players a ranking of 1,000 to begin with, which then goes up or down depending on wins and losses. There's no public way to view the data, and it changes over time, so it's tricky to evaluate what a good rank is. However, during his creation process, Albert found that the average player's ranking settles at around 1,170. Onmabd's AI managed to reach 1,300, which would put it in the top 30 percent.
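The climb-from-1,000 behavior of such ladders can be sketched with a standard Elo-style update: a generic formula for how ratings move with wins and losses, not necessarily the exact system or parameters the ladder uses.

```python
def elo_update(rating, opponent, won, k=32):
    """One Elo-style rating step: compute the expected score from the rating
    gap, then nudge the rating toward the actual result."""
    expected = 1 / (1 + 10 ** ((opponent - rating) / 400))
    return rating + k * ((1.0 if won else 0.0) - expected)

# A new account at the 1,000 baseline plays four games against
# average (~1,170-rated) opponents, winning three:
r = 1000.0
for won in [True, True, False, True]:
    r = elo_update(r, 1170, won)
print(round(r))  # rating has climbed above the baseline
```

Beating mostly stronger opponents moves the rating up quickly, which is why a sustained 1,300 or 1,550 rating implies consistently outplaying the average-player level mentioned above.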

More recently, a user on the Pokémon community forum Smogon going by pmariglia shared another attempt. Their AI beat Technical Machine in a best of three and was able to reach a rating of between 1,250 and 1,350, again around the top 30 percent.

Future Sight AI ranked at 1,550 on average during testing. Though Albert apologized on Smogon for making it seem "in my video that [Future Sight] is the first bot of its kind or the first to get as far as it did" (as well as detailing where the two projects take different approaches), he says that ultimately he's glad he didn't know that other people had already attempted his project. "I don't know why I never thought to look into it, [but] if I'd gone down their path I might have ended up with the same results," he says.

He also was never expecting the video to gain as much attention as it did. For starters, when I ask about its creation, he laughs. "I have to reveal something," he says. "That entire video was animated in PowerPoint. I have to say I don't have much video production experience, [so] I had an idea for what I wanted the video to look like and I just kind of kept working on it until I could get the tools that I knew how to use to do it."

Then, there was the delayed reaction. Posted in July, it was only viewed about 100 times in its first three weeks. The next week, it jumped up to 300,000. (As of late November, it's almost at 600,000 views.) Albert thinks that it was picked up by somebody in the Pokémon community who posted it to Twitter, causing it to blow up, but he never found out who.

He says that it was difficult to process the sudden influx of viewers, but that he was appreciative of how supportive the Pokémon community was. "I kind of just had to take a step back a bit because the whole point of what I'm doing is that I want to teach people about computer science," he says.

In particular, as a Black man, Albert wants to be the kind of representation he never had in the field. "I figured I have experience in public speaking, I like doing projects that people might find interesting, so really I wanted to put out a channel that said, 'This can be an example of someone like you doing fun things in computer science.' That's genuinely the core of why I'm doing all of this."

For now, his focus is on getting Future Sight playable in actual Pokémon games. Thus far, it has used Pokémon Showdown, a community-created simulator that allows online battling and functionally forms the center of the competitive scene. But early on, Albert was hinting that he wanted to make something that could tie in with the releases of Brilliant Diamond and Shining Pearl. Most recently, he's managed to get it to beat the final boss of Sword and Shield, despite not having any code to deal with Dynamaxing, which is banned in common competitive settings.

Beyond that, he doesn't have too many concrete goals. "I mean this is such a corny thing, but I want it to be the very best like no one ever was," he says, echoing the old Pokémon anime theme tune. "But seriously, I don't know. I just started this for fun and I want to take it as far as I still find joy out of making it."

See the rest here:

The quest to make an AI that can play competitive Pokémon - The Verge


Sense raises $50M to bolster recruitment efforts with AI – VentureBeat

Posted: at 12:02 pm


Recruiting is a top concern for enterprises in 2021. In a survey by XpertHR, roughly half of responding employers plan to increase their workforce in 2021, but expect that hurdles will stand in the way. A high volume of low-quality applicants is stymying the search for ideal candidates, with one source pegging the average share of unqualified applicants at 75%. Even among those that do make it through the recruiting funnel, a significant portion ultimately change their minds, exacerbating the recruiting challenge.

Against this backdrop, Sense, an AI-driven talent engagement and communications platform, today announced that it raised $50 million in Series D funding led by SoftBank. CEO Anil Dharni says that the proceeds, which bring Sense's total capital raised to $90 million, will be put toward hiring and recruitment as well as product development.

San Francisco, California-based Sense was launched in 2016 by Dharni, Alex Rosen, Pankaj Jindal, and Ram Gudavalli. Dharni is an active entrepreneur, having cofounded AnswerU and social gaming networks Storm8 and Funzio prior to starting Sense. Jindal was previously the CEO of Akraya, an IT and marketing consulting agency headquartered in Bengaluru. As for Gudavalli, he cofounded Funzio with Dharni and worked alongside him at Hi5, a social network whose parent company was acquired by MeetMe in 2017 for $60 million.

"In the past decade, we've seen a shift in the workforce dynamic. Candidates have more options and subsequently more power than ever before. And yet, recruiters and hiring teams are still stuck in the dark ages when it comes to creating the ultimate candidate experience," Dharni told VentureBeat via email. "I want to create a world where recruiters and talent acquisition leaders are loved by the candidates they serve. The bottom line is, companies are struggling in a massive way to retain and engage with employees."

Sense offers a slate of job candidate matching and screening services in addition to a drag-and-drop recruitment campaign creation tool. With the platforms services, which sync with Workday, Greenhouse, and other existing applicant tracking systems, companies can deduplicate pools of new applicants against existing databases and send targeted follow-up messages, among other tasks.

Sense also provides a shared inbox through which HR teams can manage and prioritize candidate conversations, sort and view chats, and perform searches by name and tags. For companies offering referral programs, Sense can host a dashboard that shows open positions and allows employees to submit referrals and track where they are in the hiring process. Managers can use the dashboard on the backend to manage submissions, approvals, rewards, and payouts.

Sense says that it has developed algorithms that can automatically trigger messages and workflows based on a candidate's profile data or when they complete a certain action in a journey. The platform's candidate email and text messaging feature, which can send messages to up to hundreds of candidates at once, can optionally automate messages with personalized texts that answer questions, explain benefits, and provide status updates on hiring.
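The event-triggered messaging described above can be sketched as a simple rules engine. Everything below is a hypothetical illustration: the class names, event names, and message templates are invented for this example and are not Sense's actual API or data model.

```python
# Hypothetical sketch of trigger-based candidate messaging: when a candidate
# completes an action in a journey, a matching templated message fires.
# All names and templates here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    stage: str
    events: list = field(default_factory=list)

def on_event(candidate: Candidate, event: str):
    """Record the event and return a personalized message if a rule matches."""
    rules = {
        "application_submitted": "Thanks {name}, we received your application.",
        "interview_scheduled": "Hi {name}, your interview is confirmed.",
    }
    candidate.events.append(event)
    template = rules.get(event)
    if template is None:
        return None  # no workflow configured for this event
    return template.format(name=candidate.name)

msg = on_event(Candidate(name="Ada", stage="applied"), "interview_scheduled")
```

A production system would layer scheduling, throttling, and profile-based audience filters on top of a rule lookup like this, but the trigger-then-template shape is the core idea.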

In September, Sense launched a chatbot that sources and screens candidates by responding to questions when recruiters are offline. A part of the company's larger platform, the chatbot can schedule interviews and support database reactivation through proactive candidate outreach.

Sense says that the chatbot's design was informed by its own research, including a recent survey it conducted on strategies for recruiting success. The survey, which canvassed 600 leaders at staffing agencies, found that recruiters spend up to 50% of their time on manual and repetitive tasks and take over six hours to respond to new leads, on average.

There's no shortage of platforms claiming to leverage AI to expedite the hiring and recruitment process. For example, Xor is developing a chatbot that handles job candidate recruiting and screening processes, and Celential.ai, which focuses specifically on the software industry, employs models to match candidates with open roles.

Other Sense competitors include Wade & Wendy, Workey, and Phenom People, but Dharni believes there's plenty of business to go around in the $19.38 billion HR solutions market. Sense's customers include teams at Amazon, Volt, PrideStaff, and Sears.

"We have over 350 million candidate profiles across over 600 customers. [T]ens of thousands of recruiters use Sense every day to interact and converse with millions of candidates each month," Dharni said. "The pandemic has accelerated investment in recruitment technology and the adoption of Sense. It has pushed companies to pulse [and] survey employees more frequently to proactively understand employee challenges and solve them. It has [also] powered competitive hiring and contingent hiring. As a result of the pandemic, outbound communication and engagement through our platform grew 500%."

According to Alexander Mann Solutions, 96% of senior HR professionals believe that AI has the potential to greatly enhance talent acquisition, despite assertions by some advocates that these tools can perpetuate bias in hiring processes. A number of reports, including the Hidden Workers: Untapped Talent study, have raised concerns about potential bias arising from the use of AI in hiring.

Via email, Gartner research VP Helen Poitevin told VentureBeat that there's inherently less risk of AI bias in top-of-the-funnel recruitment activity like the kind that Sense orchestrates. That's not to suggest there isn't risk; Poitevin stressed the need for companies like Sense (and their customers) to pay attention to the data being used and the assumptions being built into the candidate-to-job-matching algorithms. But human biases are more likely to come into play with these types of solutions, she asserted, like biases in the language used to describe positions or the tone in outreach.

"Looking at Sense's website, I see that they are being used to engage, message, manage referrals, and engage with candidates through a chatbot. This is akin to other solutions on the market more oriented to candidate relationship management and recruitment marketing, engaging with candidates," Poitevin said. "The risk is not as high as when AI is being used to rank the fit of a candidate to a given job opportunity. For example, a recruiter simply being interested in a profile is not considered a robust assumption in an algorithm determining what makes a good match between a candidate and a job. These types of data and assumptions built into algorithms are much more likely to lead to biased decision-making in the hiring process."

Sense has 185 employees, a number it expects will double by the end of 2022.

Read this article:

Sense raises $50M to bolster recruitment efforts with AI - VentureBeat


1 Dividend King Is Leveraging AI to Stay On Top – Motley Fool

Posted: at 12:01 pm

Dividend stocks can be a great way for retirees to generate meaningful income from their investments while maintaining a lower-risk portfolio.

Among this group of stocks is an elite class that has earned the designation as Dividend Kings -- or companies that have increased their annual dividend payouts for 50 straight years or more. Cincinnati Financial (NASDAQ:CINF) is an insurance company that has earned this designation and managed to increase its dividend payout annually for 61 years straight. Only eight publicly traded companies have a longer streak.

Image source: Getty Images.

What has allowed Cincinnati Financial to continue performing so well for so long? Well, part of the secret is the company's ability to adapt to changes in its business. Insurers are increasingly collecting more data and using artificial intelligence (AI) in making underwriting decisions. Cincinnati Financial faces competition from young insurers leveraging AI to disrupt the industry. To stay ahead of the competition (or at least keep pace with them), Cincinnati Financial has invested heavily in its own data and AI-powered business.

It's an investment that appears to be paying off.

Cincinnati Financial is an insurer that focuses mainly on underwriting property and casualty insurance policies for individuals and businesses. Roughly 62% of its premiums come from commercial lines of insurance, including property and worker's compensation coverage. Another 26% of its premiums come from personal lines of insurance for things like homeowners and auto insurance.

Cincinnati Financial has a history of growing profits at good margins, and it's also a cash-generating machine. From 2012 through 2020, the company grew its cash flow from its property-casualty insurance at a steady 11.5% compound annual growth rate. This consistent cash generation provides Cincinnati Financial with more than enough cash to cover claims, pay out dividends, and pursue investment opportunities for added growth.

The insurance industry is ripe for disruption from newer start-ups leveraging AI to reimagine the underwriting and claims process. Several unicorns, like Root and Lemonade, are utilizing AI to underwrite policies in a fraction of the time and resolve insurance claims quicker. AI will play a key role for insurance companies over the next decade. According to researchers at McKinsey:

"The winners in AI-based insurance will be carriers that use new technologies to create innovative products, harness cognitive learning insights from new data sources, streamline processes and lower costs, and exceed customer expectations for individualization and dynamic adaptation."

While unicorns are using AI to disrupt the industry, companies like Cincinnati Financial look to AI to maintain and build on their decades of experience. The insurer has built up its data and analytics risk models to underwrite better policies and improve the claims process.

The company credits these models for its lower share of property and casualty losses compared to the industry over the past five years. CEO Alan Schnitzer also credits the model for helping the company avoid underwriting plans for areas of the coast when Hurricane Ida made landfall this August in Louisiana. The model also helped it manage risk in those states where the hurricane passed through by using flood risk scoring and location intelligence models to set premiums accordingly.

Cincinnati Financial's AI model was also used to handle claims quickly and efficiently. One aspect of its claims response for Hurricane Ida was AI-assisted claim damage detection. By using high-resolution aerial images to evaluate home damage, the company could remotely identify customers with exterior damage and pay losses -- before customers returned home, in many cases. By leveraging AI, Cincinnati Financial could provide a better customer experience while significantly reducing the time to process claims.


According to Risk Management Solutions Inc., losses from Hurricane Ida are estimated to come in between $31 billion and $44 billion for insurance companies. Despite Hurricane Ida's outsize damage, Cincinnati Financial made out well during the quarter. In the third quarter, the company posted a combined ratio of 92.6%. Combined ratio is a key measure of profitability for insurance companies. This ratio measures losses paid to cover claims plus expenses, divided by total premiums earned. A ratio below 100% means a company is writing profitable policies. In comparison, a ratio above 100% indicates the company is losing money on policies.
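The combined ratio calculation described above is simple enough to verify directly. The dollar figures below are made up purely to illustrate the arithmetic; only the 92.6% result and the 100% profitability threshold come from the article.

```python
# Combined ratio as defined above: (losses paid + expenses) / premiums earned.
# Below 1.0 (i.e., 100%), underwriting is profitable; above it, unprofitable.

def combined_ratio(losses: float, expenses: float, premiums_earned: float) -> float:
    return (losses + expenses) / premiums_earned

# Illustrative (invented) figures: $600M in losses and $326M in expenses
# against $1,000M in premiums reproduce a 92.6% combined ratio.
ratio = combined_ratio(600, 326, 1000)
profitable = ratio < 1.0
```

By the same arithmetic, Progressive's 142.3% property-line ratio cited below means it paid out $1.42 in losses and expenses for every $1.00 of premiums earned on that line.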

By comparison, Progressive saw a combined ratio of 142.3% on its property line of coverage and a 100.4% combined ratio for the overall business.Meanwhile, Allstate saw its homeowners' combined ratio come in at 111%, while the overall combined ratio was 105.3%.


Business-wise, Cincinnati Financial has done a stellar job against its peers. Over the past five years, the insurer has achieved combined ratios below the industry average. It has done so while growing premiums above the industry average in four of the past five years. In 2020, the insurer saw premiums increase 6.3% compared to the industry average of 2.5%.

Cincinnati Financial has a long history of profitability, which is part of why it has been able to increase its dividends for 61 years straight. The stock also yields investors over 2.1% annually, which is higher than the S&P 500 average of about 1.3%.

Leveraging AI to improve underwriting and speed up the claims process is a positive sign it can fend off newer entrants in the space -- and makes Cincinnati Financial a solid dividend stock for any retiree's portfolio.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

Read more here:

1 Dividend King Is Leveraging AI to Stay On Top - Motley Fool


Tesla's Director of AI Shares A New Project That Tesla Is Hiring For – CleanTechnica

Posted: at 12:01 pm

Tesla's Director of Artificial Intelligence (AI), Andrej Karpathy, shared a thread this week about a new project that his team is working on. He also shared some video footage from it. The thread, in a nutshell, is an invitation for those interested in helping Tesla solve this particular problem. They want you to apply for a job.

Karpathy noted that the videos showed panoptic segmentation from the new project and were too raw to run in the car. So, instead, they are being fed into auto labelers.

In AI, panoptic segmentation is the task of clustering parts of an image together that belong to the same object class. Another term for this is pixel-level classification. It partitions images or video frames into multiple segments or objects. An auto labeler simply labels raw, unlabeled data. Labeling helps an AI understand the implications of a pattern. PatternEx has an in-depth article about this term and uses the Pandora app as an example. As you hear a song and click the thumbs up, you are basically telling it that you like the song. This is labeling. Another example is a parent reading a book to a baby, tapping an image of a dog, and saying the word "dog."

So, explaining Andrej's tweet in layman's terms: he is simply saying that the videos are from a new project where they are clustering parts of an image/video that belong to the same object class and then getting it labeled via AI. After that, the aim is to improve the system more and more so that the AI gets better and better at seeing the world in a complete, comprehensive, human (but actually superhuman) way.
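The idea above can be made concrete with a toy version of the grouping step: given a grid of per-pixel class labels, collect connected cells of the same class into numbered segments. Real panoptic segmentation uses neural networks over camera frames; this flood-fill sketch only illustrates how pixel-level classes become distinct object instances.

```python
# Toy sketch of the grouping step behind panoptic segmentation: turn a grid of
# per-pixel class labels into numbered, connected segments. Illustrative only;
# production systems predict both classes and instances with neural networks.

def segments(grid):
    """Assign a segment id to every cell, grouping 4-connected same-class cells."""
    rows, cols = len(grid), len(grid[0])
    seg_id = [[None] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if seg_id[r][c] is not None:
                continue
            cls = grid[r][c]
            stack = [(r, c)]  # iterative flood fill from this seed cell
            seg_id[r][c] = next_id
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and seg_id[ny][nx] is None and grid[ny][nx] == cls:
                        seg_id[ny][nx] = next_id
                        stack.append((ny, nx))
            next_id += 1
    return seg_id, next_id

# Two "car" regions separated by "road" become distinct segments even though
# they share a class -- the instance-level distinction panoptic segmentation adds.
image = [
    ["car", "road", "car"],
    ["car", "road", "road"],
]
ids, count = segments(image)
```

Labeling in this setting means producing grids like `image` for raw frames, which is exactly the work Tesla's auto labelers are meant to automate at scale.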

Karpathy pointed out that it is still early for this task and that Tesla needs help perfecting these panoptic segmentation predictions and realizing the downstream impact.

Who's gonna help Tesla? Anyone who wants to can apply to join the team here.

Link:

Tesla's Director of AI Shares A New Project That Tesla Is Hiring For - CleanTechnica


Prisons transcribe private phone calls with inmates using speech-to-text AI – The Register

Posted: at 12:01 pm

In brief Prisons around the US are installing AI speech-to-text models to automatically transcribe conversations with inmates during their phone calls.

A series of contracts and emails from eight different states revealed how Verus, an AI application developed by LEO Technologies and based on a speech-to-text system offered by Amazon, was used to eavesdrop on prisoners' phone calls.

In a sales pitch, LEO's CEO James Sexton told officials working for a jail in Cook County, Illinois, that one of its customers in Calhoun County, Alabama, uses the software to protect prisons from getting sued, according to an investigation by the Thomson Reuters Foundation.

"(The) sheriff believes (the calls) will help him fend off pending liability via civil action from inmates and activists," Sexton said. Verus transcribes phone calls and finds certain keywords discussing issues like COVID-19 outbreaks or other complaints about jail conditions.
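The keyword search described above can be sketched as a scan over transcribed calls for a watchlist of terms. The transcripts and keyword list below are invented for illustration; Verus's actual matching logic is not public.

```python
# Minimal sketch of keyword spotting over call transcripts, the kind of search
# described above. Transcripts and keywords here are invented examples.

import re

def flag_calls(transcripts, keywords):
    """Return indices of transcripts containing any keyword as a whole word."""
    patterns = [re.compile(r"\b" + re.escape(k) + r"\b", re.IGNORECASE)
                for k in keywords]
    return [i for i, text in enumerate(transcripts)
            if any(p.search(text) for p in patterns)]

calls = [
    "the food here is fine",
    "three guys in my block have covid and nobody checked on us",
    "I want to file a complaint about the conditions here",
]
flagged = flag_calls(calls, ["covid", "complaint", "lawsuit"])
```

At the scale of a jail's entire call volume, even a simple filter like this surfaces every mention of a flagged topic, which is what privacy advocates quoted below find chilling.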

Prisoners, however, said the tool was used to catch crime. In one case, it allegedly found one inmate illegally collecting unemployment benefits. But privacy advocates aren't impressed. "The ability to surveil and listen at scale in this rapid way, it is incredibly scary and chilling," said Julie Mao, deputy director at Just Futures Law, an immigration legal group.

Elsewhere, an AI-designed drug candidate codenamed ISM001-055 was first formulated earlier this year, in February, by Pharma.AI, a software platform described as a drug discovery engine developed by startup Insilico Medicine.

The molecule was designed to treat idiopathic pulmonary fibrosis (IPF), a lung disease that causes scar tissue to build inside the organ making it difficult for people to breathe properly. After initial tests, ISM001-055 seemed so promising that Insilico Medicine decided to launch clinical trials.

Healthy volunteers were selected for the trial to test for any side effects. "We are very pleased to see Insilico Medicine's first antifibrotic drug candidate entering into the clinic," said Feng Ren, chief scientific officer of Insilico Medicine. "We believe this is a significant milestone in the history of AI-powered drug discovery because to our knowledge the drug candidate is the first ever AI-discovered novel molecule based on an AI-discovered novel target."

There is currently no cure for IPF. If the AI-designed drug is capable of treating the disease, it'll show that the technology is capable of developing new drugs at lower costs than traditional methods.

Timnit Gebru, who was controversially fired from her position as co-lead of Google's Ethical AI research team a year ago, has set up her own independent lab.

The Distributed AI Research Institute (DAIR) is focused on studying social harms of AI, and how they can be best mitigated. It has received a total of $3.7m in funding from philanthropic groups and non-profit orgs such as the MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundation and the Rockefeller Foundation, according to the Washington Post.

Gebru said she wanted to remain independent, and has chosen to stay away from big-name investors to avoid being influenced by large tech firms. "Let's say I antagonize a funder, not these, but others," she said. "There's all of these Big Tech billionaires who also are in big philanthropy now." DAIR's research will scrutinize and critique these big firms.

Some early projects with the institute's first research fellow, Raesetja Sefala, will analyze satellite images to study the effects of segregation in South Africa.

Here is the original post:

Prisons transcribe private phone calls with inmates using speech-to-text AI - The Register

