
Category Archives: Artificial Intelligence

How Artificial Intelligence Will Revolutionize the Way Video Games are Developed – insideBIGDATA

Posted: November 29, 2020 at 6:24 am

AI has been a part of the gaming industry for quite some time now. It has been featured in genres like strategy games, shooting games, and even racing games. The whole idea of using AI in gaming is to give the player a realistic experience, even on a virtual platform. However, with the recent advancements in AI, the gaming industry and game developers are coming up with more innovative ways of using AI in games. This article looks at how Artificial Intelligence is making a drastic change in the gaming industry.

What do Experts Have to Say About the Change?

Experts have done a lot of research to see where and how AI can take gaming to a new level. Based on these studies and market research, they say you can expect the gaming industry to change drastically in the next few years.

Moreover, market researchers have seen a drastic change in the way people look at games. Developers now face a significant challenge in keeping up with extreme and fast-paced changes. Every year, new research examines the trends, market value, key players, and more.

What do Market Studies and Research Reveal?

As of 2019, the market worth of the gaming industry was close to 150 billion dollars. With the introduction of technologies like Artificial Intelligence, Augmented Reality, and Virtual Reality, that figure is expected to exceed 250 billion dollars by 2021-2022.

Artificial Intelligence will be a stepping stone and an equally important component in the evolution of the gaming industry. The key players at the top on this front include Tencent, Sony, EA, Google, Playtika, and Nintendo. Moreover, the market will also see the rise of new players that specialize purely in developing games with advanced AI environments.

A Look at How AI was Introduced in the Gaming Industry

The term Artificial Intelligence is broad and is not limited or restricted to just a particular industry. Even in the gaming sector, AI was introduced a long time back, although, at that time, no one knew that it would become so popular.

AI found its way into games almost from its inception. The 1951 game Nim is one such example of the early use of AI. Although artificial intelligence at that time was not as advanced as it is now, it was still a game that was way ahead of its time.

Then, in the 1970s, came the era of arcade gaming, which also featured AI elements in various games. Speed Race, Qwak, and Pursuit were some of the most popular titles. This was also the era when Artificial Intelligence gained popularity. In the 1980s, games like Pac-Man and other maze-based games took things to a different level.

Using Artificial Intelligence in Game Development and Programming

If you are wondering: how does artificial intelligence make a difference in gaming?

The answer is simple: all the data is stored in an AI environment, and each character uses this environment to transform accordingly. You can also create a virtual environment with the information that is stored. This information includes various scenarios, motives, actions, and so on, making the characters more realistic and natural. So, how is Artificial Intelligence changing the gaming industry? Read on to find out.

With the help of AI, game developers are coming up with new techniques like reinforcement learning and pattern recognition. These techniques will help the game characters evolve through self-learning of their actions. A player will notice a vast difference when they play a game in the AI environment.
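The reinforcement learning mentioned above can be sketched in a few lines: a character tries actions, receives rewards from the game, and updates a value table until a sensible policy emerges from its own experience. Everything below (the states, actions, and rewards) is invented purely for illustration and is not taken from any shipped game.

```python
import random

# Minimal tabular Q-learning sketch for an NPC choosing between two
# actions in two situations. All names and reward values are toy inputs.
STATES = ["player_near", "player_far"]
ACTIONS = ["attack", "retreat"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Hand-written reward table standing in for the game simulation:
# attacking only pays off when the player is near.
REWARD = {("player_near", "attack"): 1.0, ("player_near", "retreat"): -0.2,
          ("player_far", "attack"): -1.0, ("player_far", "retreat"): 0.1}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

random.seed(0)
for _ in range(2000):
    s = random.choice(STATES)                      # a random encounter
    if random.random() < EPSILON:                  # explore occasionally
        a = random.choice(ACTIONS)
    else:                                          # otherwise exploit
        a = max(ACTIONS, key=lambda act: q[(s, act)])
    r = REWARD[(s, a)]
    s_next = random.choice(STATES)                 # next encounter
    best_next = max(q[(s_next, act)] for act in ACTIONS)
    # Standard Q-learning update toward reward plus discounted future value.
    q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])

# After training, the NPC has learned to attack when near, retreat when far.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}
print(policy)
```

The character's behavior here is never scripted directly; it falls out of the rewards, which is the "self-learning" quality the paragraph describes.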

With AI, games will become more interesting. A player can slow down or otherwise adjust the pace of the game to suit their needs. You will hear characters talking, just like humans do. The overall accessibility, intelligence, and visual appearance will make a significant impact on the player. Live examples of these techniques can presently be seen in games like The Sims and F.E.A.R.

Over the past ten years, we have seen a drastic change in the gaming industry. The revolution of the sector has already started with the introduction of AI. Compared to earlier methods of development, it is easy to develop games in an AI environment. Today, it is common to find games with 3D effects and other such visualization techniques. AI is taking the gaming industry to new heights. Very soon, it will not just be about good graphics, but also about interpreting and responding to the player's actions.

Games like FIFA give you a real-world feel when you play them. The graphics make the game come to life. Now imagine having this experience taken a step higher with the help of AI. The experience will be at a different level.

Similarly, an action game will feel real with the help of artificial intelligence. In short, the player's gaming experience will be very different from what it presently is. Moreover, the blend of AI and virtual reality will make a potent combination.

Players will not feel that they are playing a game. Instead, they will feel that things are happening in real life. Today, game developers are paying attention to minor details. It is no longer just about the visual appearance or graphics.

Game developers have to develop their skills regularly. They are always adapting to new changes and techniques while developing games. This, in turn, also helps them enhance their creativity.

With the help of Artificial Intelligence, developers can take their skills to a whole new level. They benefit from using cutting-edge technology to bring unique aspects and approaches to game development. Even traditional game developers are using AI to set their games apart. They may not work with hi-tech environments; nevertheless, they develop games with various AI elements.

Today, the world is more attuned to mobile games. The convenience of playing on the go, or while waiting for a meeting to start, keeps demand high. With the help of AI, mobile gamers will have a better experience playing their favorite games.

Even the introduction of various tools on this front will contribute to the overall experience of playing games on mobile. Various changes happen automatically based on how the player interacts with the game.

When AI is used in a gaming environment, it brings in something new and different. The days of traditional gaming are gone. Now, game lovers want a lot more from their games instead of the norm.

Keeping this in mind, gaming developers are now coming up with programs and codes that deliver this. These codes and programs do not require any human interference. They create virtual worlds automatically. Many complex systems are designed to generate the results.

Such systems can produce amazing outcomes. One example of a game on this front is Red Dead Redemption 2, where players have the flexibility of interacting in myriad ways with non-playable characters. The game also tracks small details, like bloodstains appearing on a hat or whether a character is wearing one.
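The automatic world-building described above can be illustrated with a classic "drunkard's walk" map carver: starting from a solid grid, a random walk carves out walkable floor until a playable layout emerges, with no human authoring. This is a generic toy technique, not how Red Dead Redemption 2 or any commercial engine actually works.

```python
import random

# Toy procedural map generator: a random walk carves '.' floor tiles
# out of a solid grid of '#' walls, yielding a new layout every run.
def generate_map(width=20, height=10, floor_target=0.4, seed=None):
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2          # start carving from the center
    carved = 0
    goal = int(width * height * floor_target)
    while carved < goal:
        if grid[y][x] == "#":
            grid[y][x] = "."                # carve a walkable floor tile
            carved += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), width - 1)  # stay inside the grid
        y = min(max(y + dy, 0), height - 1)
    return ["".join(row) for row in grid]

for row in generate_map(seed=42):
    print(row)
```

Changing the seed gives a different but always-playable map, which is the appeal of this family of techniques: content volume without proportional human effort.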

In the gaming industry, a lot of time and money is invested in developing a game. Moreover, uncertainty over whether or not the game will be accepted is always in the air. Even before a game hits the market, it undergoes various checks until the development team is sure that it is ready.

The entire process can take up to months or even years, depending on the kind of game. With the help of AI, the time taken to develop a game reduces drastically. This also saves a lot of resources that are needed to create the game.

Even the cost of labor reduces drastically. This means that game development companies can hire better, more technically advanced game developers to get the job done. Considering that the demand for game developers is so high, the market is competitive.

Players want their games to take them to new heights. The introduction of AI in the gaming industry has brought in this change. Gamers can experience a lot more with the games of today than what was developed earlier.

Moreover, the games are a lot more exciting and fun to play. AI has given players something to look forward to, letting gamers take their experience to a whole new level and into a different dimension.

Furthermore, with an AI platform, gaming companies can create better playing environments. For instance, motion simulation can come in handy to give each character different movements. It can also help generate additional levels and maps, all without human interference.

The Benefits of Using Artificial Intelligence in the Gaming Industry

In the modern age, you will often find reviews of various games. When you compare reviews of traditional games with those of games developed in an AI environment, you will find a lot of difference between the two.

A review of an AI-based game will tell you a lot about the game in detail. When a game is developed in the right AI environment, reviews will find little to fault. When it comes to a bad review, however, every mistake will be pointed out. This is why it is imperative for game developers to do an excellent job while developing a game. But what are the benefits of using AI for game development?

A Final Thought

The gaming industry is changing at a drastic pace. Moreover, the demand for new and improved games is increasing every day. Today, gamers do not want a traditional game to play. They are looking for a lot more than just that.

AI has brought change to the gaming industry ever since its inception. Over the years, we have seen drastic changes in the way games are developed. In today's technologically advanced world, games have become more challenging and exciting by providing human-like experiences.

About the Author

Saurabh Hooda is co-founder of Hackr.io. He has worked globally for telecom and finance giants in various capacities. After working for a decade at Infosys and Sapient, he started his first startup, Lenro, to solve the hyperlocal book-sharing problem. He is interested in product, marketing, and analytics.


Read the original post:

How Artificial Intelligence Will Revolutionize the Way Video Games are Developed - insideBIGDATA

Posted in Artificial Intelligence | Comments Off on How Artificial Intelligence Will Revolutionize the Way Video Games are Developed – insideBIGDATA

Pentagon Teams with Howard University to Steer Artificial Intelligence Center of Excellence – Nextgov

Posted: at 6:24 am

The Defense Department, Army and Howard University linked up to collectively push forward artificial intelligence and machine learning-rooted research, technologies and applications through a recently unveiled center of excellence.

The work it will underpin will shape the future, according to an announcement Monday from the Army Research Laboratory, and the $7.5 million center also marks a move by the Pentagon to help expand its pipeline for future personnel.

"Diversity of science and diversity of the future [science and technology] talent base go hand-in-hand in this new and exciting partnership," said Dr. Brian Sadler, Army senior research scientist for intelligent systems. Tapped to manage the partnership, Sadler added that Howard University is "an intellectual center for the nation."

Encompassing 13 schools and colleges, the institution is a private, historically Black research university founded in 1867. Fulbright recipients, Rhodes scholars and other notable experts were educated at Howard, which also produces more on-campus African-American Ph.D. recipients than any other university in America, the release noted. In early 2020, the Army's Combat Capabilities Development Command partnered with the university to support science, technology, engineering, and mathematics (STEM) educational assistance and advancement among underrepresented groups.

Computer Science Prof. Danda Rawat, who also serves as director of Howard's Data Science & Cybersecurity Center, will lead the CoE, and the program's execution will be managed by the Army Research Laboratory, or ARL.

"This center of excellence is a big win for the Army and [Defense Department] on many fronts," Sadler said. "The research is directly aligned with Army priorities and will address pressing problems in both developing and applying AI tools and techniques in several key applications."

A kickoff meeting was set for mid-November to jumpstart the research and work. ARL's release said the effort will explore vital civilian applications and multi-domain military operations spanning three specific areas of focus: key AI applications for defense, technological foundations for trustworthy AI technologies, and infrastructure for AI research and development.

U.S. graduate students and early-career research faculty with expertise in STEM fields will gain fellowship and scholarship opportunities through the laboratory, and the government and academic partners also intend to collaborate on research and publications, mentoring, internships, workshops and seminars. Educational training and research exchange visits at both the lab and school will also be offered.

An ARL spokesperson told Nextgov Tuesday that officials involved expect to share program updates after the new year.

Link:

Pentagon Teams with Howard University to Steer Artificial Intelligence Center of Excellence - Nextgov


Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted – ScienceAlert

Posted: at 6:24 am

How might The Terminator have played out if Skynet had decided it probably wasn't responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they're untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors in balance with each other, spotting patterns in masses of data that humans don't have the capacity to analyse.

While Skynet might still be some way off, AI is already making decisions in fields that affect human lives like autonomous driving and medical diagnosis, and that means it's vital that they're as accurate as possible. To help towards this goal, this newly created neural network system can generate its confidence level as well as its predictions.

"We need the ability to not only have high-performance models, but also to understand when we cannot trust those models," says computer scientist Alexander Amini from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

This self-awareness of trustworthiness has been given the name Deep Evidential Regression, and it bases its scoring on the quality of the available data it has to work with: the more accurate and comprehensive the training data, the more likely it is that future predictions are going to work out.

The research team compares it to a self-driving car having different levels of certainty about whether to proceed through a junction or whether to wait, just in case, if the neural network is less confident in its predictions. The confidence rating even includes tips for getting the rating higher (by tweaking the network or the input data, for instance).

While similar safeguards have been built into neural networks before, what sets this one apart is the speed at which it works, without excessive computing demands: it can be completed in one run through the network, rather than several, with a confidence level outputted at the same time as a decision.

"This idea is important and applicable broadly," says computer scientist Daniela Rus. "It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model."

The researchers tested their new system by getting it to judge depths in different parts of an image, much like a self-driving car might judge distance. The network compared well to existing setups, while also estimating its own uncertainty: the times it was least certain were indeed the times it got the depths wrong.
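Deep Evidential Regression itself produces its uncertainty estimate in a single pass through the network. As a deliberately simpler stand-in for the general idea of a model reporting its own confidence, the sketch below uses disagreement within a small bootstrap ensemble of linear fits: predictions far from the training data come back with visibly larger uncertainty. The data and model here are toy inventions, not the paper's method.

```python
import math
import random

random.seed(0)

# Toy training data: y = 2x + noise, observed only for x in [0, 1].
train = [(x / 20, 2 * (x / 20) + random.gauss(0, 0.1)) for x in range(21)]

def fit_line(sample):
    """Least-squares slope and intercept for one bootstrap sample."""
    n = len(sample)
    mx = sum(x for x, _ in sample) / n
    my = sum(y for _, y in sample) / n
    sxx = sum((x - mx) ** 2 for x, _ in sample)
    sxy = sum((x - mx) * (y - my) for x, y in sample)
    slope = sxy / sxx
    return slope, my - slope * mx

# Train an ensemble of models on bootstrap resamples of the data.
ensemble = [fit_line([random.choice(train) for _ in train]) for _ in range(50)]

def predict(x):
    """Return (mean prediction, ensemble standard deviation) at x."""
    ys = [a * x + b for a, b in ensemble]
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    return mean, math.sqrt(var)

# Inside the training range the ensemble agrees; far outside it diverges,
# mirroring the article's "least certain where it is most wrong" behavior.
_, std_in = predict(0.5)
_, std_out = predict(5.0)
print(std_in < std_out)
```

The same principle underlies the medical use case the article mentions: a large reported uncertainty is the signal to hand the decision back to a human.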

As an added bonus, the network was able to flag up times when it encountered images outside of its usual remit (so, very different to the data it had been trained on), which in a medical situation could mean getting a doctor to take a second look.

Even if a neural network is right 99 percent of the time, that missing 1 percent can have serious consequences, depending on the scenario. The researchers say they're confident that their new, streamlined trust test can help improve safety in real time, although the work has not yet been peer-reviewed.

"We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences," says Amini.

"Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision."

The research is being presented at the NeurIPS conference in December, and an online paper is available.

Read more:

Artificial Intelligence Is Now Smart Enough to Know When It Can't Be Trusted - ScienceAlert


Organized Crime Has a New Tool in Its Belts – Artificial Intelligence – OCCRP

Posted: at 6:24 am

As new technologies offer a world of opportunities and benefits in many sectors, so too do they offer new avenues for organized crime. It was true at the advent of the internet, and it's true for the growing field of artificial intelligence and machine learning, according to a new joint report by Europol and the United Nations Interregional Crime and Justice Research Center.

At its simplest, artificial intelligence refers to human-designed systems that, within a defined set of rules, can absorb data, recognize patterns, and duplicate or alter them. In effect, they are learning so that they can automate more and more complex tasks which in the past required human input.

"However, the promise of more efficient automation and autonomy is inseparable from the different schemes that malicious actors are capable of," the document warned. "Criminals and organized crime groups (OCGs) have been swiftly integrating new technologies into their modi operandi."

AI is particularly useful in the increasingly digitised world of organized crime that has unfolded due to the novel coronavirus pandemic.

"AI-supported or AI-enhanced cyberattack techniques that have been studied are proof that criminals are already taking steps to broaden the use of AI," the report said.

One such example is procedurally generated phishing emails designed to bypass spam filters.

Despite the proliferation of new and powerful technologies, a cybercriminal's greatest asset is still his mark's propensity for human error, and the most common types of cyber scams are still based around so-called social engineering, i.e., taking advantage of empathy, trust, or naivete.

While in the past social engineering scams had to be somewhat tailored to specific targets or audiences, through artificial intelligence they can be deployed en masse and use machine learning to tailor themselves to new audiences.

"Unfortunately, criminals already have enough experience and sample texts to build their operations on," the report said. "An innovative scammer can introduce AI systems to automate and speed up the detection rate at which the victims fall in or out of the scam. This allows them to focus only on those potential victims who are easy to deceive." Whatever false pretense a scammer chooses to persuade the target to participate in, an ML algorithm would be able to anticipate a target's most common replies to the chosen pretense, the report explained.

Most terrifying of all, however, is the concept of so-called deepfakes. Through deepfakes, machine learning can use little source material to generate incredibly realistic human faces or voices and superimpose them onto any video.

"The technology has been lauded as a powerful weapon in today's disinformation wars, whereby one can no longer rely on what one sees or hears," the report said. One side effect of the use of deepfakes for disinformation is the diminished trust of citizens in authority and information media.

Flooded with increasingly AI-generated spam and fake news that build on bigoted text, fake videos, and a plethora of conspiracy theories, people might feel that a considerable amount of information, including videos, simply cannot be trusted. The result is a phenomenon termed "information apocalypse" or "reality apathy."

One of the most infamous uses of deepfake technology has been to superimpose the faces of unsuspecting women onto pornographic videos.

Read more:

Organized Crime Has a New Tool in Its Belts - Artificial Intelligence - OCCRP


Artificial Intelligence Usage on the Rise – Rockland County Times

Posted: at 6:24 am

Steven Kemler Says AI is increasingly effective and in demand

Machine learning and artificial intelligence (AI) have captured our imaginations for decades, but until recently had limited practical application. Steven Kemler, an entrepreneurial business leader and Managing Director of the Stone Arch Group, says that with recent increases in available data and computing power, AI already impacts our lives on many levels, and that going forward, self-teaching algorithms will play an increasingly important role both in society and in business.

In 1997, Deep Blue, developed by IBM, became the first computer / artificial intelligence system to beat a reigning world chess champion (Garry Kasparov), significantly elevating interest in the practical applications of AI. These practical uses still took years to develop, with the worldwide market for AI technology not reaching $10 billion until 2016. Since then, AI market growth has accelerated significantly, reaching $50 billion in 2020 and expected to exceed $100 billion by 2024, according to the Wall Street Journal.

Kemler says AI and machine learning are playing a leading role in technological innovation across a wide spectrum of industries from healthcare and education, to transportation and the military. Many large corporations are using machine learning and AI to more accurately target customers based on their digital footprints, and in finance, AI is being widely used to power high speed trading systems and reduce fraud.

Intelligence agencies and the military are spending heavily on AI to analyze very large data sets and detect potential threats earlier than humans would normally be able to do so, including through the use of facial recognition. AI powered facial recognition is not only helpful for security purposes but can be used to identify lockdown and quarantine-avoiders and track the movements of individuals displaying symptoms. Despite privacy concerns, evidence suggests that the public is becoming more tolerant of these surveillance tactics and other uses of AI that would previously have been considered overly invasive.

Kemler points out that we can expect research and development in AI and the machine learning field to lead to continued breakthroughs in the health sciences, including in the prevention and treatment of viruses. According to an article recently published in The Lancet, a well-respected medical journal, "[there is] a strong rationale for using AI-based assistive tools for drug repurposing medications for human disease, including during the COVID-19 pandemic." For more insights from Steven Kemler, visit his LinkedIn and Twitter platforms.

The rest is here:

Artificial Intelligence Usage on the Rise - Rockland County Times


How Artificial Intelligence Will Impact The Future Of Tech Jobs – Utah Public Radio

Posted: at 6:24 am

Artificial intelligence may seem like something out of a science fiction movie, but it's used in everything from ride-sharing apps to personalized online shopping suggestions.

A common concern with artificial intelligence, or AI, is that it will take over jobs as more tasks become automated. Char Sample, a chief research scientist at the Idaho National Laboratory, believes this is likely, but instead of robots serving you lunch, AI may have more of an impact on cybersecurity and other white-collar jobs.

"The people who are in blue collar jobs, that work in the service industry, they're probably not going to be as impacted by AI. But the jobs that are more repetitive in nature, like students who are graduating with cybersecurity degrees, some of their early jobs are running scans and auditing systems, those jobs could be replaced," Sample said.

This may have a disproportional effect on jobs in tech hubs, like Salt Lake City. However, as AI becomes increasingly prevalent, AI-related jobs, and the cities where these jobs are sourced, are expected to grow.

If we want to expand beyond AI's current capabilities, Sample thinks researchers need to be ambitious and think outside the box.

"Yeah, I firmly believe we need an AI moonshot initiative. And right now, I'm seeing a lot of piecemeal, even though some of the pieces of the piecemeal are very big, they lack that comprehensive overview that says, let's look at all aspects of artificial intelligence," Sample said.

Not only could a moonshot push AI forward, but it would bring in people with diverse backgrounds to improve AI.

"I'm hoping that if we were able to do such a thing, as a moonshot, we could look at it across the whole spectrum of disciplines, and gain a new understanding of how this works, and we can use it to our advantage," Sample said.

Sample spoke about artificial intelligence at USU's Science Unwrapped program this fall. For information on how to watch her recorded presentation, visit http://www.usu.edu/unwrapped/presentations/2020/smart-cookies-october-2020.

Read more:

How Artificial Intelligence Will Impact The Future Of Tech Jobs - Utah Public Radio


Meet GPT-3. It Has Learned to Code (and Blog and Argue). – The New York Times

Posted: at 6:24 am

Before asking GPT-3 to generate new text, you can focus it on particular patterns it may have learned during its training, priming the system for certain tasks. You can feed it descriptions of smartphone apps and the matching Figma code. Or you can show it reams of human dialogue. Then, when you start typing, it will complete the sequence in a more specific way. If you prime it with dialogue, for instance, it will start chatting with you.

"It has this emergent quality," said Dario Amodei, vice president for research at OpenAI. "It has some ability to recognize the pattern that you gave it and complete the story, give another example."

Previous language models worked in similar ways. But GPT-3 can do things that previous models could not, like write its own computer code. And, perhaps more important, you can prime it for specific tasks using just a few examples, as opposed to the thousands of examples and several hours of additional training required by its predecessors. Researchers call this few-shot learning, and they believe GPT-3 is the first real example of what could be a powerful phenomenon.
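The few-shot priming described above amounts to formatting a handful of worked examples into the prompt and letting the model continue the pattern. A sketch of that prompt layout follows; the task, labels, and example texts are all made up for illustration, and the string would simply be sent as input to a large language model such as GPT-3 (this is not OpenAI's API, just the text format).

```python
# Hypothetical labeled examples used to prime the model.
examples = [
    ("I loved this film, watched it twice!", "positive"),
    ("Total waste of two hours.", "negative"),
    ("The soundtrack alone is worth it.", "positive"),
]

def few_shot_prompt(query):
    """Build a few-shot prompt: worked examples, then the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")  # the model completes this
    return "\n\n".join(lines)

print(few_shot_prompt("Fell asleep halfway through."))
```

Because the examples live in the prompt rather than in the model's weights, swapping in three different examples retargets the model to a new task in seconds, which is what makes few-shot learning notable compared with hours of conventional fine-tuning.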

"It exhibits a capability that no one thought possible," said Ilya Sutskever, OpenAI's chief scientist and a key figure in the rise of artificial intelligence technologies over the past decade. "Any layperson can take this model and provide these examples in about five minutes and get useful behavior out of it."

This is both a blessing and a curse.

OpenAI plans to sell access to GPT-3 via the internet, turning it into a widely used commercial product, and this year it made the system available to a limited number of beta testers through their web browsers. Not long after, Jerome Pesenti, who leads the Facebook A.I. lab, called GPT-3 "unsafe," pointing to sexist, racist and otherwise toxic language the system generated when asked to discuss women, Black people, Jews and the Holocaust.

With systems like GPT-3, the problem is endemic. Everyday language is inherently biased and often hateful, particularly on the internet. Because GPT-3 learns from such language, it, too, can show bias and hate. And because it learns from internet text that associates atheism with the words "cool" and "correct" and that pairs Islam with "terrorism," GPT-3 does the same thing.

This may be one reason that OpenAI has shared GPT-3 with only a small number of testers. The lab has built filters that warn that toxic language might be coming, but they are merely Band-Aids placed over a problem that no one quite knows how to solve.

See the rest here:

Meet GPT-3. It Has Learned to Code (and Blog and Argue). - The New York Times


Everything Is Not Terminator: Assessment Of Artificial Intelligence Systems – Privacy – United States – Mondaq News Alerts

Posted: at 6:24 am


Published in The Journal of Robotics, Artificial Intelligence & Law (January-February 2021)

Many information security and privacy laws, such as the California Consumer Privacy Act[1] and the New York Stop Hacks and Improve Electronic Data Security Act,[2] require periodic assessments of an organization's information management systems. Because many organizations collect, use, and store personal information from individuals, much of which could be used to embarrass or impersonate those individuals if inappropriately accessed, these laws require organizations to regularly test and improve the security they use to protect that information.

As of yet, there is no similar specific law in the United States directed at artificial intelligence systems ("AIS") requiring the organizations that rely on AIS to test their accuracy, fairness, bias, discrimination, privacy, and security.

However, existing law is broad enough to impose on many organizations a general obligation to assess their AIS, and legislation has appeared requiring certain entities to conduct impact assessments on their AIS. Even without a regulatory mandate, many organizations should perform AIS assessments as a best practice.

This column summarizes current and pending legal requirements before providing more details about the assessment process.

The Federal Trade Commission's ("FTC") authority to police "unfair or deceptive acts or practices in or affecting commerce" through rulemaking and administrative adjudication is broad enough to govern AIS, and it has a department that focuses on algorithmic transparency, the Office of Technology Research and Investigation.[3] However, the FTC has not issued clear guidance regarding AIS uses that qualify as unfair or deceptive acts or practices. There are general practices that organizations can adopt that will minimize their potential for engaging in unfair or deceptive practices, which include conducting assessments of their AIS.[4] However, there is no specific FTC rule obligating organizations to assess their AIS.

There have been some legislative efforts to create such an obligation, including the Algorithmic Accountability Act,5 which was proposed in Congress, and a similar bill proposed in New Jersey,6 both in 2019.

The federal bill would require covered entities to conduct "impact assessments" on their "high-risk" AIS in order to evaluate the impacts of the AIS's design process and training data on "accuracy, fairness, bias, discrimination, privacy, and security."7

The New Jersey bill is similar, requiring an evaluation of the AIS's development process, including the design and training data, for impacts on "accuracy, fairness, bias, discrimination, privacy, and security"; the evaluation must include several elements, among them a "detailed description of the best practices used to minimize the risks" and a "cost-benefit analysis."8 The bill would also require covered entities to work with external third parties, independent auditors, and independent technology experts to conduct the assessments, if reasonably possible.9

Although neither of these bills has become law, they represent the expected trend of emerging regulation.10

When organizations rely on AIS to make or inform decisions or actions that have legal or similarly significant effects on individuals, it is reasonable for governments to require that those organizations also conduct periodic assessments of the AIS. For example, state criminal justice systems have begun to adopt AIS that use algorithms to report on a defendant's risk of committing another crime, risk of missing his or her next court date, and so on; human decision makers then use those reports to inform their decisions.11

The idea is that the AIS can be a tool that informs decision makers (police, prosecutors, judges) and helps them make better, data-based decisions that eliminate biases they may hold against defendants based on race, gender, etc.12 This is potentially a wonderful use for AIS, but only if the AIS actually removes inappropriate and unlawful human bias rather than recreating it.

Unfortunately, the results have been mixed at best, as there is evidence suggesting that some AIS in the criminal justice system are merely replicating human bias.

In one example, an African-American teenage girl and a white adult male were each convicted of stealing property totaling about $80. An AIS rated the white defendant as a lower recidivism risk than the teenager, even though he had a much more extensive criminal record, with felonies versus juvenile misdemeanors. Two years after their arrests, the AIS recommendations were revealed to be incorrect: the male defendant was serving an eight-year sentence for another robbery; the teenager had not committed any further crimes.13 Similar issues have been observed in AIS used in hiring,14 lending,15 health care,16 and school admissions.17
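Disparities of this kind are exactly what an AIS assessment can quantify. As a purely hypothetical illustration (the data, group labels, and the 0.8/1.25 review thresholds below are illustrative assumptions, not drawn from any actual system), a simple check compares how often each demographic group is labeled "high risk":

```python
# Hypothetical sketch: comparing the rate of "high risk" labels across
# demographic groups, a basic disparate-impact check an assessor might run.

def high_risk_rate(predictions, group, target_group):
    """Fraction of members of `target_group` labeled high risk (1)."""
    labels = [p for p, g in zip(predictions, group) if g == target_group]
    return sum(labels) / len(labels)

def disparate_impact(predictions, group, protected, reference):
    """Ratio of high-risk rates between two groups.

    Values far from 1.0 flag potential bias; a common rule of thumb
    (borrowed from employment law's "four-fifths rule") treats ratios
    below 0.8 or above 1.25 as warranting review.
    """
    return (high_risk_rate(predictions, group, protected)
            / high_risk_rate(predictions, group, reference))

# Illustrative data only: 1 = flagged high risk, 0 = flagged low risk
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(preds, groups, protected="A", reference="B")
print(f"high-risk rate ratio (A vs B): {ratio:.2f}")  # well above 1.25
```

A real assessment would of course use the organization's actual prediction logs and legally appropriate group definitions; the point is only that the check itself is straightforward to automate.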

Although some organizations are conducting AIS assessments without a legal requirement, a larger segment is reluctant to adopt assessments as a best practice, as many for-profit companies care more about fidelity to the original data used to train their AIS than about eliminating the biases in that data.18 According to Daniel Soukup, a data scientist with Mostly AI, a start-up experimenting with controlling biases in data, "There's always another priority, it seems. . . . You're trading off revenue against making fair predictions, and I think that is a very hard sell for these institutions and these organizations."19

I suspect, though, that the tide will turn in the other direction in the near future, with or without a direct legislative impetus, similar to the trend in privacy rights and operations. Although most companies in the United States are not subject to broad privacy laws like the California Consumer Privacy Act or the European Union's General Data Protection Regulation, I have observed an increasing number of clients that want to provide the privacy rights afforded by those laws, either because their customers expect them to or because they want to position themselves as companies that care about individuals' privacy.

It is not hard to see a similar trend developing among companies that rely on AIS. As consumers become more aware of the problematic issues involved in AIS decision-making (potential bias, use of sensitive personal information, security of that information, the significant effects, lack of oversight, etc.), they will become just as demanding about AIS requirements as about privacy requirements. As with privacy, consumer expectations will likely be pushed in that direction by jurisdictions that adopt AIS assessment legislation, even for consumers who do not live in those jurisdictions.

Organizations that are looking to perform AIS assessments now, in anticipation of regulatory activity and consumer expectations, should conduct an assessment consistent with the following principles and goals:

Consistent with the New Jersey Algorithmic Accountability Act, any AIS assessment should be done by an outside party, preferably qualified AI counsel, who can retain a technological consultant to assist. This serves two functions.

First, it avoids the situation in which the developers that created the AIS for the organization are also assessing it, which could create a conflict of interest, as the developers have an incentive to assess the AIS in a way that is favorable to their work.

Second, by retaining outside AI counsel, in addition to benefiting from the counsel's expertise, organizations are able to claim that the resulting assessment report and any related work product are protected by attorney-client privilege in the event of litigation or a government investigation related to the AIS. Companies that experience or anticipate a data security breach or event retain outside information security counsel for similar reasons, as the resulting breach analysis could be discoverable if outside counsel is not properly retained. The results can be very expensive if the breach report is mishandled.

For example, Capital One recently entered into an $80 million Consent Order with the Department of the Treasury related to a data incident, after a federal court ruled that a breach report prepared for Capital One had not been properly coordinated through outside counsel and was therefore not protected by attorney-client privilege.20

An AIS assessment should identify, catalogue, and describe the risks of an organization's AIS.

Properly identifying these risks, among others, and describing how the AIS impacts each will allow an organization to understand the issues it must address to improve its AIS.21

Once the risks in the AIS are identified, the assessment should focus on how the organization alerts impacted populations. This can take the form of a public-facing AI policy, posted and maintained in a manner similar to an organization's privacy policy.22 It can also take the form of more pointed pop-up prompts, a written disclosure and consent form, an automated verbal statement in telephone interactions, etc. The appropriate form of the notice will depend on a number of factors, including the organization, the AIS, the at-risk populations, and the nature of the risks involved. The notice should include the relevant rights regarding AIS afforded by privacy laws and other regulations.

After implementing appropriate notices, the organization should anticipate receiving comments from members of the impacted populations and the general public. The assessment should help the organization implement a process that allows it to accept, respond to, and act on those comments. This may be similar to how organizations process privacy rights requests from consumers and data subjects, particularly when a notice addresses those rights. The assessment may recommend that certain employees be tasked with accepting and responding to comments, that the organization add operative capabilities addressing privacy rights impacting AIS or risks identified in the assessment and objected to in comments, etc. It may be helpful to have a technological consultant provide input on how the organization can leverage its technology to assist in this process.

The assessment should help the organization remediate identified risks. The nature of the remediation will depend on the nature of the risks, the AIS, and the organization. Any outside AIS counsel conducting the assessment needs to be well-versed in the various forms remediation can take. In some instances, properly noticing the risk to the relevant individuals will be sufficient, per both legal requirements and the organization's principles. Other risks cannot or should not be "papered over," but rather obligate the organization to reduce the AIS's potential to injure.23 This may include adding more human oversight, at least temporarily, to check the AIS's output for discriminatory activity or bias. A technology consultant may be able to advise the organization on revising the code or procedures of the AIS to address the identified risks.

Additionally, where the AIS is evidencing bias because of the data used to train it, more appropriate historical data or even synthetic data may be used to retrain the AIS to remove or reduce its discriminatory behavior.24
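One minimal sketch of what preparing for such retraining might look like: rebalancing a training set so each group is equally represented before the model is refit. The toy records and the `rebalance` helper below are purely illustrative assumptions; real remediation would involve a technology consultant and purpose-built tooling or synthetic-data techniques of the kind footnote 24 discusses.

```python
# Hypothetical sketch: oversample under-represented groups in the
# training data so every group appears as often as the largest one,
# one simple way to reduce representation bias before retraining.
from collections import Counter
import random

def rebalance(rows, group_key):
    """Return a copy of `rows` with each group oversampled to parity."""
    counts = Counter(r[group_key] for r in rows)
    target = max(counts.values())
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    balanced = list(rows)
    for g, n in counts.items():
        pool = [r for r in rows if r[group_key] == g]
        balanced.extend(rng.choice(pool) for _ in range(target - n))
    return balanced

# Illustrative training records only
training = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "B", "label": 0},
]
balanced = rebalance(training, "group")
print(Counter(r["group"] for r in balanced))  # groups now equal in size
```

Naive oversampling duplicates real records and can amplify any bias already present in the minority-group labels, which is one reason the synthetic-data approaches the column cites are an active area of work.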

All organizations that rely on AIS to make decisions that have legal or similarly significant effects on individuals should periodically conduct assessments of their AIS. This is true for all organizations: for-profit companies, non-profit corporations, governmental entities, educational institutions, etc. Doing so will help them avoid potential legal trouble in the event their AIS is inadvertently demonstrating illegal behavior and ensure the AIS acts consistently with the organization's values.

Organizations that adopt assessments earlier rather than later will be in a better position to comply with AIS-specific regulation when it appears and to develop a brand as an organization that cares about fairness.

Footnotes

* John Frank Weaver, a member of McLane Middleton's privacy and data security practice group, is a member of the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law and writes its "Everything Is Not Terminator" column. Mr. Weaver has a diverse technology practice that focuses on information security, data privacy, and emerging technologies, including artificial intelligence, self-driving vehicles, and drones.

1. Cal. Civ. Code 1798.150 (granting a private right of action when a business fails to "maintain reasonable security procedures and practices appropriate to the nature of the information," with assessments necessary to identify reasonable procedures).

2. New York General Business Law, Chapter 20, Article 39-F, 899-bb.2(b)(ii)(A)(3) (requiring entities to assess "the sufficiency of safeguards in place to control the identified risks"), 899-bb.2(b)(ii)(B)(1) (requiring entities to assess "risks in network and software design"), 899-bb.2(b)(ii)(B)(2) (requiring entities to assess "risks in information processing, transmission and storage"), and 899-bb.2(b)(ii)(C)(1) (requiring entities to assess "risks of information storage and disposal").

3. 15 U.S.C. 45(b); 15 U.S.C. 57a.

4. John Frank Weaver, "Everything Is Not Terminator: Helping AI to Comply with the Federal Trade Commission Act," The Journal of Artificial Intelligence & Law (Vol. 2, No. 4; July-August 2019), 291-299 (other practices include: establishing a governing structure for the AIS; establishing policies to address the use and/or sale of AIS; establishing notice procedures; and ensuring third-party agreements properly allocate liability and responsibility).

5. Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).

6. New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).

7. Algorithmic Accountability Act of 2019, supra note 5, at 2(2) and 3(b).

8. New Jersey Algorithmic Accountability Act, supra note 6, at 2.

9. Id., at 3.

10. For a fuller discussion of these bills and other emerging legislation intended to govern AIS, see Yoon Chae, "U.S. AI Regulation Guide: Legislative Overview and Practical Considerations," The Journal of Artificial Intelligence & Law (Vol. 3, No. 1; January-February 2020), 17-40.

11. See Jason Tashea, "Courts Are Using AI to Sentence Criminals. That Must Stop Now," Wired (April 17, 2017), https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.

12. Julia Angwin, Jeff Larson, Surya Mattu, & Lauren Kirchner, "Machine Bias," ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing ("The appeal of the [AIS's] risk scores is obvious. . . . If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long.").

13. Id.

14. Jeffrey Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters (October 9, 2018), https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUKKCN1MK08G (Amazon "realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way").

15. Dan Ennis and Tim Cook, "Banking from AI lending models raises questions of culpability, regulation," Banking Dive (August 16, 2019), https://www.bankingdive.com/news/artificial-intelligence-lending-bias-model-regulation-liability/561085/ ("African-Americans may find themselves the subject of higher-interest credit cards simply because a computer has inferred their race").

16. Shraddha Chakradhar, "Widely used algorithm for follow-up care in hospitals is racially biased, study finds," STAT (October 24, 2019), https://www.statnews.com/2019/10/24/widely-used-algorithm-hospitals-racial-bias/ ("An algorithm commonly used by hospitals and other health systems to predict which patients are most likely to need follow-up care classified white patients overall as being more ill than black patients, even when they were just as sick").

17. DJ Pangburn, "Schools are using software to help pick who gets in. What could go wrong?" Fast Company (May 17, 2019), https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong ("If future admissions decisions are based on past decision data, Richardson warns of creating an unintended feedback loop, limiting a school's demographic makeup, harming disadvantaged students, and putting a school out of sync with changing demographics.").

18. Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem, If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html.

19. Id.

20. In the Matter of Capital One, N.A., Capital One Bank (USA), N.A., Consent Order (Document #2020-036), Department of the Treasury, Office of the Comptroller of the Currency, AA-EC-20-51 (August 5, 2020), https://www.occ.gov/static/enforcement-actions/ea2020-036.pdf; In re: Capital One Consumer Data Security Breach Litigation, MDL No. 1:19md2915 (AJT/JFA) (E.D. Va. May 26, 2020).

21. For a great discussion of identifying risks in AIS, see Nicol Turner Lee, Paul Resnick, and Genie Barton, "Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms," Brookings (May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

22. For more discussion of public-facing AI policies, see John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part I," The Journal of Artificial Intelligence & Law (Vol. 2, No. 1; January-February 2019), 59-65; John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part II," The Journal of Artificial Intelligence & Law (Vol. 2, No. 2; March-April 2019), 141-146.

23. For a broad overview of remediating AIS, see James Manyika, Jake Silberg, and Brittany Presten, "What Do We Do About the Biases in AI?" Harvard Business Review (October 25, 2019), https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

24. There are numerous popular and academic articles exploring this idea, including Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem, If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html, and Lokke Moerel, "Algorithms can reduce discrimination, but only with proper data," IAPP (November 16, 2018), https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

More here:

Everything Is Not Terminator: Assessment Of Artificial Intelligence Systems - Privacy - United States - Mondaq News Alerts


How Artificial Intelligence overcomes major obstacles standing in the way of automating complex visual inspection tasks – Quality Magazine

Posted: at 6:24 am


Excerpt from:

How Artificial Intelligence overcomes major obstacles standing in the way of automating complex visual inspection tasks - Quality Magazine


Sheremetyevo Shows How It Uses Artificial Intelligence to Effectively Plan and Execute Airport Functions and Activities – PRNewswire

Posted: at 6:24 am

MOSCOW, Nov. 25, 2020 /PRNewswire/ -- Sergei Konyakhin, Director of the Production Modeling Department of JSC Sheremetyevo International Airport, gave a presentation at the "Artificial Intelligence Systems 2020" conference on November 24 showing how Sheremetyevo International Airport uses artificial intelligence (AI) systems to effectively manage the airport.

The conference was part of the online forum TAdviser Summit 2020: Results of the Year and Plans for 2021. The discussion among top managers of large companies and leading experts in the IT industry centered on issues related to the implementation of artificial intelligence technologies in the activities of Russian enterprises.

Sheremetyevo Airport has developed and implemented systems for automatic long-term and short-term planning of personnel and resources. As a result, the planning system was calibrated against real processes and its previous weaknesses were eliminated; recommendation systems were implemented that allow dispatchers to manage resources with future events in mind; and the company was able to significantly optimize expenses.

The company is looking at developing AI systems in the near future for automatic dispatching, automation of administrative personnel functions, and providing top management with transparent reporting and detailed factor analysis.

In the long term, the use of artificial intelligence systems will help maintain high quality services for passengers, airlines and punctuality of flights while taking into account the long-term growth of passenger and cargo traffic.

Sheremetyevo is the largest airport in Russia and has the largest terminal and airfield infrastructure in the country, including six passenger terminals with a total area of more than 570,000 square meters, three runways, a cargo terminal with a capacity of 380,000 tonnes of cargo annually, and other facilities. The uninterrupted operation of all Sheremetyevo systems requires precise planning, scheduling of all processes, and efficient allocation of resources. At the same time, forecasting the airport's production activities needs to take into account a number of specific factors, including:

Sheremetyevo International Airport is among the top 10 airport hubs in Europe and the largest Russian airport in terms of passenger and cargo traffic. The route network comprises more than 230 destinations. In 2019, the airport served 49,933,000 passengers, 8.9% more than in 2018. Sheremetyevo is the best airport in Europe in terms of quality of services, the absolute world leader in punctuality of flights, and the holder of the highest 5-star Skytrax rating.

You can find additional information at http://www.svo.aero

TAdviser.ru is the largest business portal in Russia on corporate informatization, a leading organizer of events in this area, a resource on which a unique knowledge base is formed in three areas:

TAdviser.ru provides convenient mechanisms for finding the right IT solution and IT supplier based on information about implementations and the experience of companies. The site's audience exceeds 1 million people. The portal's target audience comprises representatives of customer companies interested in obtaining complete and objective information from an independent source, companies that provide IT solutions, and persons observing the development of the IT market in Russia (investors, officials, the media, the expert community, etc.).

SOURCE Sheremetyevo International Airport

Read the rest here:

Sheremetyevo Shows How It Uses Artificial Intelligence to Effectively Plan and Execute Airport Functions and Activities - PRNewswire

