Category Archives: Ai

AI and RPA powered testing: innovate with confidence in workforce management – ITProPortal

Posted: August 14, 2021 at 1:30 am

When it comes to workforce management, AI is a tool already routinely deployed across all types of companies in all sorts of industries, as organizations look to increase efficiency and agility in their operations while simultaneously freeing staff from monotonous, low-value work, enabling them to focus on customer service and revenue-generating activities.

Be it powerful algorithms that claim to sift easily through CVs so recruiting teams can focus on the best-fit candidates; automated scheduling rotas working across multiple role types, departments and even store locations; or forecasting tools that claim to predict required staffing levels based on footfall and customer buying patterns, AI-powered tools and technologies now boast a dizzying array of capabilities.

For busy organizations looking to improve efficiency, reduce costs and boost employee engagement, it isn't hard to spot the appeal of artificial intelligence in the retail workforce management space. These cutting-edge innovations promise sweeping solutions that seem too good to overlook. One global survey of 34,000 workers found that nearly two-thirds (64 percent) reported reduced stress levels and a more manageable workload thanks to the introduction of AI, for instance.

But tempting as it might be, it's critical that organizations don't simply leap in blind, as poorly implemented solutions can be incredibly disruptive.

Increasingly, these developments in workforce management tools mean that we're handing over responsibility for a complex and business-critical system to yet another system. In few sectors is that riskier than in retail, where workforces combine multiple functions, schedules and demands. That makes it more important than ever that retail companies test the emergent behaviors of the software they're looking to rely upon. Or, in short: test these tools to ensure they deliver true business value, rather than simply create disruption.

For many years, of course, the challenge has been reliance on the old ways of testing. Too often organizations depend on manual testers, forcing a compromise between quality, cost and time. Test quickly and cheaply and you risk low quality. Invest in high quality at a low cost and it can eat into hours and hours of time. Not to mention the huge reliance placed on people power.

In fact, the drawbacks of this way of testing are almost too many to list.

Having such an important piece of work suck up so much manual labor, without even the guarantee of an accurate end result, has left many retailers crying out for a solution for years.

Thankfully, as much as AI has created the challenge, it has also presented the solution. In particular, by using automated software testing instead, it's possible to drastically reduce the time and effort involved while significantly raising quality at the same time.

In today's fast-paced software testing industry, the need to test and deliver applications quicker and better without compromising on quality is critical to the success of any organization. Companies need to get their products and software updates out quickly, but still with excellent quality. An automation-first approach enables them to reduce testing timescales, and therefore time to market, while maintaining resource availability and reducing costs for customers.

It's why in many industries automation testing is already the default option. But in retail, the complexity of technology and systems, and the people-centric nature of workforce management solutions, has meant its adoption has been far slower.

That could be about to change, though. It's possible to leverage the same developments that are driving the revolution in workforce management to create automated testing that reflects the particular demands of retail. That means using RPA processes that reduce the time each test takes by a factor of 400-500x, while massively increasing repeatability.

This approach makes it possible to execute around 35 tests of an automated workforce schedule, with full end-to-end validation (including shifts, punches, holidays and timetables), in a single minute. A manual scenario tester would likely take days over the same process. In fact, using this approach it would be possible to automate a set of scripts involving around 1,000 test scenarios overnight. Manually, it would take six weeks.
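As a rough sanity check on those figures, the arithmetic below assumes eight-hour days and five-day weeks for the manual estimate (the article doesn't specify), with everything else taken from the numbers quoted above:

```python
# Back-of-envelope check of the automation claims above. The six-week manual
# estimate is assumed to mean 8-hour days and 5-day weeks.
TESTS_PER_MINUTE = 35

scenarios = 1_000
automated_minutes = scenarios / TESTS_PER_MINUTE   # ~28.6 minutes
manual_minutes = 6 * 5 * 8 * 60                    # six working weeks

print(f"Automated: {automated_minutes:.0f} min")
print(f"Manual:    {manual_minutes:,} min")
print(f"Speed-up:  {manual_minutes / automated_minutes:.0f}x")
```

Under those assumptions the implied speed-up comes out at roughly 500x, consistent with the 400-500x figure quoted above.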

This isn't just about speed, though. The approach also delivers unwavering accuracy and the ability to test across a wide selection of workforce management activities. It allows developers, business users and stakeholders to identify and resolve defects faster and earlier in the SDLC, and to make decisions based on real-time data.

For any organization looking to quickly and accurately validate the effectiveness of configuration changes or a new solution deployment, this shortening of the process is invaluable: retailers need to get products and software updates out quickly, but still with excellent quality and unparalleled levels of transparency. An automation-first approach reduces testing timescales, and therefore time to deployment, while maintaining resource availability. Essentially, test strategy planning, comprehensive documentation and extensive automation testing minimize delays, reduce costs and get issues resolved sooner.

The reality is that as workforce management software gets more innovative, automated and data-driven, so too does the testing of these solutions need to step up a gear, delivering equivalent speed, accuracy and rigor. Until now, the inherent complexity of a retail workforce has made this hard to achieve. Not only is it now possible, but this new way of testing is more efficient, more cost-effective and of higher quality. All this leaves organizations free to enjoy the fruits of AI without coming across any bad apples.

Antony Kaplan, Test Services Director, REPL Group

Four Things to Consider on the Future of AI-enabled Deterrence – Lawfare

Posted: July 25, 2021 at 3:26 pm

Editor's Note: The impact of artificial intelligence on security policy has many dimensions, and one of the most important will be how it shapes deterrence. Artificial intelligence complicates many of the components of successful deterrence, such as communicating a threat clearly and being prepared for adversary adaptation. Alex Wilner, Casey Babb and Jessica Davis of Carleton University unpack the relationship between artificial intelligence and deterrence, explaining some of the likely challenges and offering suggestions for how to improve deterrence.

Daniel Byman

***

Analysts and policymakers alike believe artificial intelligence (AI) may fundamentally reshape security. It is now vital to understand its implications for deterrence and coercion. Over the past three years, with funding from Canada's Department of National Defence through its Innovation for Defence Excellence and Security (IDEaS) program, we undertook extensive research to better understand how AI intersects with traditional deterrence theory and practice in both the physical and digital domains. After dozens of interviews and consultations with subject matter experts in the United States, Canada, the United Kingdom, Europe, Israel and elsewhere, we came away with four key insights about AI's potential effect on deterrence. The implications of these findings pose challenges that states will have to reckon with as they integrate AI into their efforts to deter threats ranging from organized criminal activity, to terrorist and cyberattacks, to nuclear conflict and beyond.

AI Poses a Communications Dilemma

First, deterrence does not usually happen on its own. It is the result of countries actively (and occasionally passively) signaling or communicating their intentions, capabilities and expectations to would-be adversaries. Several experts we spoke with stressed that the prerequisites of communication and signaling pose a particular limitation in applying AI to deterrence. Darek Saunders, with the European Border and Coast Guard Agency, noted that no one (no government department or agency) is making public how they are detecting certain things, so threat actors will not know if it is AI, good intelligence or just bad luck that has put them in jeopardy or forced them to divert their efforts elsewhere. Unless governments are willing to publicly clarify how they use AI to monitor certain forms of behavior, it will be nearly impossible to attribute what, if any, utility AI has had in deterring adversaries. Joe Burton, with the New Zealand Institute for Security and Crime Science, drew parallels with the Cold War to illustrate the limitations of communication in terms of AI-enabled coercion: Deterrence was effective because we knew what a nuclear explosion looked like. If you can't demonstrate what an AI capability looks like, it's not going to have a significant deterrence capability. Furthermore, many (if not all) capabilities related to AI require sensitive data, an asset most governments rarely advertise to friends or foes.

But here's the rub: By better communicating an AI capability to strengthen deterrence, countries risk inadvertently enabling an adversary to leverage that awareness to circumvent the capability and avoid detection, capture or defeat. With AI, too much information may diminish deterrence. As Richard Pinch, former strategic adviser for mathematics research at the Government Communications Headquarters (GCHQ), explained to us: If we let bad actors know about our capabilities, we would then be educating the adversary on what to watch for.

AI-enabled deterrence is ultimately about finding the right balance between communicating capabilities and safeguarding those capabilities. As Anthony Finkelstein, then-chief scientific adviser for national security in the United Kingdom, concluded: You want to ensure your actual systems and technology are not known, at least not technically, while ensuring that the presence of these assets is known. More practical research is needed on developing the AI equivalent of openly testing a nuclear or anti-satellite weapon: a demonstration of capability and a signaling of force and intent that does not undermine the technology itself.

AI Is Part of a Process

Second, AI is just one piece of a much larger process at work. The purpose of AI-driven security analysis is not chiefly immediate deterrence but rather identifying valuable irregularities in adversarial behavior and using that information to inform a broader general deterrence perspective. Using European border security as an example, Dave Palmer, formerly with GCHQ and MI5, explained that it's unlikely that AI will deter criminals at the borders on the spot, or that you can have a single point in time where the technology will work and you will be able to capture someone doing something they shouldn't be. Instead, it is more likely that AI will allow border security agencies to better identify unlawful behaviour, providing that information downstream to other organizations, agencies or governments that can use it to inform a more comprehensive effort to stop malicious activity. In a nutshell, AI alone might not deter, but AI-enabled information captured within a larger process might make some behavior more challenging and less likely to occur.

AI May Lead to Displacement and Adaptation

Third, successfully deterring activities within one domain may invite other unwanted activities within another. For example, if AI enables greater deterrence in the maritime domain, it may lead an adversary to pivot elsewhere and prioritize new cyber operations to achieve its objectives. In deterrence theory, especially as conceived of in criminology and terrorism studies, this phenomenon is usually understood as displacement (i.e., displacing criminal activities in one domain, or of a particular nature, for another).

Putting this into a larger context, the nature of AI's technological development suggests its application to coercion will invite adversarial adaptation, innovation and mimicry. AI begets AI; new advancements and applications of AI will prompt adversaries to develop technological solutions to counter them. The more sophisticated the adversary, the more sophisticated their AI countermeasures will be and, by association, their countercoercion efforts. State-based adversaries and sophisticated non-state actors, for instance, might manipulate or mislead the technology and data on which these AI-based capabilities rely. As an illustration, when the European Union sought to deter human smuggling via the Mediterranean into Italy and Spain by using AI to increase interdictions, authorities soon realized that smugglers were purposefully sending out misleading cellular data to misinform and manipulate the technology and AI used to track and intercept them.

From the perspective of deterrence theory, adversarial adaptation can be interpreted in different ways. It is a form of success, in that adaptation imposes a cost on an adversary and diminishes the feasibility of some activities, therefore augmenting deterrence by denial. But it can also be seen as a failure, because adaptation invites greater adversarial sophistication and new types of malicious activity.

How AI Is Used Will Depend on Ethics

Ethics and deterrence do not regularly meet, though a lively debate did emerge during the Cold War as to the morality of nuclear deterrence. AI-enabled deterrence, conversely, might hinge altogether on ethics. Several interviewees discussed concerns about how AI could be used within society. For example, Thorsten Wetzling, with the Berlin-based think tank Stiftung Neue Verantwortung, suggested that in some instances European countries are approaching AI from entirely different perspectives as a result of their diverging histories and strategic cultures. Germans appear especially conscious of the potential societal implications of AI, emerging technology and government reach because of the country's history of authoritarian rule; as a result, Germany is particularly drawn to regulation and oversight. On the opposite end of the spectrum, Israel tends to be more comfortable using emerging technology and certain forms of AI for national security purposes. Indeed, Israel's history of conflict informs its continual push to enhance and improve security and defense. In national security uses, one Israeli interviewee noted, there is little resistance to integrating new technologies like AI.

Other interviewees couched this larger argument as one centered on democracy, rather than strategic culture. Simona Soare, with the International Institute for Strategic Studies, argued that there are differences in the utility AI has for deterrence between democracies and non-democracies. One European interviewee noted, for illustration, that any information derived from AI is not simply applied; it is screened through multiple individuals, who determine what to do with the data and whether or not to take action using it. As AI is further integrated into European security and defense, it is likely that security and intelligence agencies, law enforcement, and militaries will be pressed to justify their use of AI. Ethics may drive that process of justification and, as a result, may figure prominently in AI's use in deterrence and coercion. In China, however, AI has enabled the government to effectively create what a U.S. official has referred to as an open-air prison in Xinjiang province, where the regime has developed, tested and applied a range of innovative technologies to support the country's discriminatory national surveillance system. The government has leveraged everything from predictive analytics to advanced facial recognition technology designed to identify people's ethnicities in order to maintain its authoritarian grip. As Ross Andersen argued in the Atlantic in 2020, [President] Xi [Jinping] wants to use artificial intelligence to build a digital system of social control, patrolled by precog algorithms that identify dissenters in real time. Of particular concern to the United States, Canada and other states is the way these technologies have been used to target and oppress China's Uighur and other minority populations. Across these examples, the use and utility of AI in deterrence and coercion will be partially informed by the degree to which ethics and norms play a role.

The Future of Deterrence

The concept of deterrence is flexible and has responded to shifting geopolitical realities and emerging technologies. This evolution has taken place within distinct waves of scholarship; a fifth wave is now emerging, with AI (and other technologies) a prevailing feature. In practice, new and concrete applications of deterrence usually follow advancements in scholarship. Lessons drawn from the emerging empirical study of AI-enabled deterrence need to be appropriately applied and integrated into strategy, doctrine and policy. Much still needs to be done. For the United States, AI capabilities need to be translated into a larger process of coercion, by way of signaling both technological capacity and political intent, in a manner that avoids adversarial adaptation but also meets the diverging ethical requirements of U.S. allies. No small feat. However, a failure to think creatively about how best to leverage AI toward deterrence across the domains of warfare, cybersecurity and national security more broadly leaves the United States vulnerable to novel and surprising innovations in coercion introduced by challengers and adversaries. The geopolitics of AI includes a deterrence dimension.

1/ST BET AI Pick of the Week (7-24) – VSiN Exclusive News – News – VSiN

Posted: at 3:26 pm

We've been tinkering with the Artificial Intelligence programs at 1/ST BET, which give us more than 50 data points (such as speed, pace, class, jockey, trainer and pedigree stats) for every race, based on how you like to handicap.

Last Saturday, our pick on Arklow lost, so we dip to 8-of-18 overall since taking over this feature. Based on a $2 Win bet on the A.I. Pick of the Week, that's $36 in wagers and payoffs totalling $40.40, for a still-respectable ROI of $2.24 for every $2 wagered.
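A quick check of that arithmetic, using only the figures quoted above:

```python
# 18 bets at $2 each, with payoffs totalling $40.40.
bets, stake = 18, 2.0
wagered = bets * stake              # $36.00
payoffs = 40.40

print(f"Return per $2 wagered: ${payoffs / wagered * stake:.2f}")  # $2.24
```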

This week, I ran Saturday's plays from me and my handicapping friends in my Tuley's Thoroughbred Takes column at vsin.com/horses through the 1/ST BET programs and came up with our A.I. Pick of the Week:

Saturday, July 24

Del Mar Race No. 10 (9:36 p.m. ET/6:36 p.m. PT)

#2 Going Global (4-5 ML odds)

Going Global ranks 1st in 16 of the 52 factors used by 1/ST BET A.I.

This 3-year-old also ranks 1st in 5 of the Top 15 Factors at betmix.com, including best speed in last race, best average speed in last three races and best last turn time.

She's also ranked in the Top 5 in 8 of the other 10 categories, including No. 2 rankings in average lifetime earnings and average turf earnings, plus No. 3 for the trainer/jockey combo of D'Amato and Prat.

Why AI Is Pushing Marketing Professionals To Reinvent Themselves – Forbes

Posted: at 3:26 pm

Today's marketers are challenged to adapt to new technologies, consumer habits and practices, channels, and methods of engagement arguably faster than any other generation. One of the hottest areas of interest is artificial intelligence. How can AI be leveraged to understand, interact with, and generate loyalty among consumers? Raj Venkatesan (Darden Business School), co-author of The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing with Jim Lecinski (Kellogg Business School), shares insight on how marketers must upskill to address the changing marketing landscape.

Kimberly Whitler: How has marketing evolved?

Raj Venkatesan: The art challenges the technology, and the technology inspires the art. I think this quote from John Lasseter is very apt for considering the evolution of marketing. Like art, marketing has a symbiotic relationship with technology. In the old days, we had mass marketing via radio. When the new technology of TV came about, we saw the golden era of television commercials, but we still had one ad for all customers. Then came cable TV, and we started segmentation; a brand perhaps had three ad versions for its different segments, aired on the appropriate cable channels. With direct mail came the advent of one-to-one marketing, and we started to increase customization. Then, with digital marketing and the rise of the internet, we had more customization still. With AI, we are now at the stage where brands are personalizing their marketing to a segment of one.

All through this evolution, though, the fundamentals of marketing hold true: address customer needs, focus on benefits rather than features, develop emotional connections, be authentic, and so on.

Whitler: Given these changes, how should marketing professionals reinvent themselves?

Venkatesan: Invest in yourself: take courses online about AI, attend conferences, try experiments. The key is to focus on your strengths as a marketer, i.e., a deep understanding of customers and their needs. AI and analytics provide you with the tools to obtain customer insights and use those insights to develop personalized marketing.

Modern marketing professionals are connectors and collaborators. They are the key executives within organizations who advocate for customers. They need the ability to give data science professionals the right guidance and to ask the right questions that can be answered by AI/analytics. We are not at a stage, and perhaps never will be in the near future, where AI/analytics can manage a marketing campaign end to end without human intervention. There is uncertainty in any prediction, and the marketer's role is to combine data-driven predictions with their individual heuristics about the customers.

Successful marketers will view data and data science professionals as their allies and key collaborators.

Whitler: What skillsets should marketing organizations be looking to add?

Venkatesan: Marketers need professionals who understand AI and can work with AI or data science specialists. They also need project managers who are adept at agile product development; using AI requires a lot of experimentation and agility. Professionals who can manage multiple projects and build flexibility into organizations are critical. There is also a need to work with IT to understand the plethora of technology solutions available to collect and process customer data. Professionals who are good at understanding the output from analytics and developing customer stories that can provide insights to senior managers are very valuable. Finally, as marketing uses more data, organizations need to develop skills around privacy, responsible customer data management and cybersecurity.

In full disclosure, Venkatesan is a colleague at the Darden Business School.

Not All AI Is Really AI: What You Need to Know – SHRM

Posted: at 3:26 pm

A wide range of technology solutions purport to be "driven by AI," or artificial intelligence. But are they really? Not everything labeled AI is truly artificial intelligence. In reality, the technology has not advanced nearly far enough to actually be "intelligent."

"AI is often a sensationalized topic," said Neil Morelli, chief industrial and organizational psychologist for Codility, an assessment platform designed to identify the best tech talent. "That makes it easy to swing from one extreme reaction to another," he said.

"On the one hand, fear of AI's misuse, 'uncontrollability,' and 'black box' characteristics. And on the other hand, a gleeful, over-hyped optimism and adoption based on overpromising or misunderstanding AI's capabilities and limitations." Both can lead to negative outcomes, he said.

Much of the confusion over what AI is, or isn't, is driven by the overly broad use of the term, fueled to a large degree by popular entertainment, the media and misinformation.

What Is AI, Really?

"Much of what is labeled as 'artificial intelligence' today is not," said Peter Scott, the founding director of Next Wave Institute, a technology training and coaching firm. "This mislabeling is so common we call it 'AI-washing.'"

The boundaries have often shifted when it comes to AI, he said. "AI has been described as 'what we can't do yet,' because as soon as we learn how to do it, we stop calling it AI."

The ultimate goal of AI, Scott said, "is to create a machine that thinks like a human, and many people feel that anything short of that doesn't deserve the name." That's one extreme.

On the other hand, most of those in the field "will say that if it uses machine learning, especially if it uses deep learning, then it is AI," he said. Officially, "AI is a superset of machine learning, which leaves enough wiggle room for legions of advertisers to ply their trade, because the difference between the two is not well-defined."

Jeff Kiske, director of engineering, machine learning at Ripcord, agrees. Most of what is called AI today could better be referred to as "machine learning," he said. This, he added, is how he prefers to refer to "cutting-edge, data-driven technology." The term machine learning, noted Kiske, "implies that the computer has learned to model a phenomenon based on data. When companies tout their products as 'driven by machine learning,' I would expect a significantly higher level of sophistication."

Joshua A. Gerlick, a Fowler Fellow at Case Western Reserve University in Cleveland, said that AI "is an incredibly broad field of study that encompasses many technologies." At the risk of oversimplification, he said, "a common theme that differentiates a 'true' from a 'misleading' AI system is whether it learns from patterns and features in the data that it is analyzing."

This is the problem with many HR use cases marketed as machine learning: they don't actually rise to the level of true artificial intelligence.

Implications for HR

For example, Gerlick said: "Imagine a human resources department acquiring software that is 'powered by AI' to match newly hired employees with experienced mentors within the organization. The software is programmed to find common keywords in the profiles of both the mentees and potential mentors, and a selection is made based upon the highest mutual match." While an algorithm is certainly facilitating the matching process within the software, Gerlick said, "it is absolutely not an AI-powered algorithm. This algorithm is simply replicating a process that any human could accomplish, and although it is fast, it does not make the matchmaking process more effective."

A truly AI-powered software platform, he said, would require some initial data, like profiles of previous mentee-mentor pairs and whether the outcomes were successful. It would then learn the factors that led to a successful pairing. "In fact, the software would be so sensitive that it might only be applicable to identifying successful mentee-mentor pairs at this one specific organization," Gerlick said. "In a roundabout way, it has 'learned' how to understand the unique culture of the organization and the archetypes of individuals who work within it. A human resources executive should find that the AI-powered software platform improves its effectiveness over time, and hopefully exceeds the success of its human counterparts, leaving them the time to undertake more complex initiatives."
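To make the contrast concrete, here is a minimal Python sketch of the two approaches Gerlick describes. The data, feature choices and names are hypothetical; the point is only that the first function applies a fixed keyword rule any human could follow, while the second fits weights from past mentee-mentor outcomes.

```python
# Hypothetical sketch: rule-based matching vs. matching learned from outcomes.
from sklearn.linear_model import LogisticRegression

def match_by_keywords(mentee_keywords, mentor_keywords):
    """The 'AI-washed' version: count shared profile keywords."""
    return len(set(mentee_keywords) & set(mentor_keywords))

# Invented features for past pairs: [shared_keywords, same_department,
# experience_gap_years]; labels: 1 = pairing judged successful.
past_pairs = [[5, 1, 2], [1, 0, 10], [4, 1, 3], [0, 0, 8], [3, 0, 4], [1, 1, 9]]
outcomes = [1, 0, 1, 0, 1, 0]

# The learned version: fit whatever actually predicted success at this organization.
model = LogisticRegression().fit(past_pairs, outcomes)

print(match_by_keywords({"python", "ml"}, {"ml", "sql", "python"}))  # 2 shared keywords
print(model.predict_proba([[3, 1, 5]])[0, 1])  # estimated success probability
```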

Christen da Costa, founder of Gadgetreview.com, said he thinks the term "AI" is thrown around far too readily. "Most automation tools, for example, are not what I would call AI," he noted. "They take in information fed to them by the user and look for cases that match it. Over time they learn the user's preferences and become better, but that's algorithmic learning. While it can be an aspect of AI, it does not an AI make."

Does it matter? It can. When HR professionals are considering adopting new technology, it's important not to be confused, or swayed, by lofty tech terms that tend to be thrown around far too frequently.

It's also important to not be overly enamored of, or potentially misled by, the lure of "artificial intelligence."

"Thoughtful readers and observers of AI in HR would be wise to remember that AI systems help perform manual, repetitious and laborious tasks in HR," Codility'sMorelli said. "However, the range and scope of these tasks are probably narrower than some vendors and providers lead people to believe."

There is no AI system that understands, perceives, learns, pattern-matches or adapts on its own, he said. "Instead, it needs human-labeled and curated data as a starting point. For this reason, users and evaluators should apply more scrutiny to the training data used to teach AI systems," he said, "especially the data's origin, development and characteristics."

"When skeptical over whether a technology is truly 'powered by AI,' consider asking a few simple questions," Gerlick suggested:

If the answers to those questions are yes, he said, "then artificial intelligence might be lending a helping hand."

Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.

A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation. – Yahoo…

Posted: at 3:26 pm

GPT-3 is a computer program that attempts to write like humans. (Photo: Fabrizio Bensch/Reuters)

A man used artificial intelligence (AI) to create a chatbot that mimicked his late fiancée.

The groundbreaking AI technology was designed by Elon Musk's research group OpenAI.

OpenAI has long warned that the technology could be used for mass information campaigns.

After Joshua Barbeau's fiancée passed away, he spoke to her for months. Or, rather, he spoke to a chatbot programmed to sound exactly like her.

In a story for the San Francisco Chronicle, Barbeau detailed how Project December, software that uses artificial intelligence technology to create hyper-realistic chatbots, recreated the experience of speaking with his late fiancée. All he had to do was plug in old messages and give some background information, and suddenly the model could emulate his partner with stunning accuracy.

It may sound like a miracle (or a Black Mirror episode), but the AI's creators warn that the same technology could be used to fuel mass misinformation campaigns.

Project December is powered by GPT-3, an AI model designed by the Elon Musk-backed research group OpenAI. By consuming massive datasets of human-created text (Reddit threads were particularly helpful), GPT-3 can imitate human writing, producing everything from academic papers to letters from former lovers.
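Project December's internals aren't public, but the pattern described above (seed a large language model with background text and example messages, then let it continue the conversation) can be sketched against the OpenAI completions API as it existed in 2021. The persona text, names and parameter values below are invented for illustration, not the product's actual code.

```python
# Hypothetical sketch of a persona chatbot, using the 2021-era completions API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

persona = (
    "The following is a conversation with Jane. She is warm, funny, "
    "and loves hiking.\n"                       # invented background info
    "Jane: I miss our walks by the river.\n"    # invented example messages
    "Me: I walked there yesterday and thought of you.\n"
)

def reply(message, history=""):
    prompt = persona + history + f"Me: {message}\nJane:"
    completion = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,
        stop=["Me:"],   # stop before the model writes the user's side
    )
    return completion.choices[0].text.strip()

print(reply("How are you today?"))
```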

It's some of the most sophisticated - and dangerous - language-based AI programming to date.

When OpenAI released GPT-2, the predecessor to GPT-3, the group wrote that it can potentially be used in "malicious ways." The organization anticipated bad actors using the technology could automate "abusive or faked content on social media," "generate misleading news articles," or "impersonate others online."

GPT-2 could be used to "unlock new as-yet-unanticipated capabilities for these actors," the group wrote.

OpenAI staggered the release of GPT-2, and still restricts access to the superior GPT-3, in order to "give people time" to learn the "societal implications" of such technology.

Misinformation is already rampant on social media, even with GPT-3 not widely available. A new study found that YouTube's algorithm still pushes misinformation, and the nonprofit Center for Countering Digital Hate recently identified 12 people responsible for sharing 65 percent of COVID-19 conspiracy theories on social media. Dubbed the "Disinformation Dozen," they have millions of followers.

As AI continues to develop, Oren Etzioni, CEO of the nonprofit research group the Allen Institute for AI, previously told Insider it will only become harder to tell what's real.

"The question 'Is this text or image or video or email authentic?' is going to become increasingly difficult to answer just based on the content alone," he said.

How AI Will Help Keep Time at the Tokyo Olympics – WIRED

Posted: at 3:26 pm

In volleyball, we're now using cameras with computer vision technologies to track not only athletes, but also the ball, says Alain Zobrist, head of Omega Timing. So it's a combination where we use camera technology and artificial intelligence to do this.

Omega Timing's R&D department comprises 180 engineers, and the development process started in-house in 2012 with positioning systems and motion sensor systems, according to Zobrist. The goal was to get to a point where, for multiple sports at the 500-plus sports events it works on each year, Omega could provide detailed live data on athlete performance. That data would also have to take less than a tenth of a second to be measured, processed and transmitted during events, so that the information matches what viewers are seeing on screen.

With beach volleyball, this meant taking this positioning and motion technology and training an AI to recognize myriad shot types (from smashes to blocks to spikes and variations thereof) and pass types, as well as the ball's flight path, then combining this data with information gleaned from gyroscope sensors in the players' clothing. These motion sensors let the system know the direction of movement of the athletes, as well as the height of jumps, speed and so on. Once processed, this is all fed live to broadcasters for use in commentary or on-screen graphics.

According to Zobrist, one of the hardest lessons for the AI to learn was accurately tracking the ball in play when the cameras could no longer see it. Sometimes, it's covered by an athlete's body part. Sometimes it's out of the TV frame, he says. So the challenge was to track the ball when you have lost it. To have the software predict where the ball goes, and then, when it appears again, recalculate the gap from when it lost the object and got it back, fill in the [missing] data and then continue automatically. That was one of the biggest issues.
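Omega hasn't published how its software does this, but the general idea (keep a physics-based estimate running while the ball is unseen, then reconcile it with the position where the ball is re-detected) can be sketched as follows; every name and number here is illustrative:

```python
# Illustrative gap-filling for a lost ball track: propagate a ballistic
# prediction from the last seen state, then blend linearly toward the
# re-detected position so the filled-in track stays continuous.
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # gravity, m/s^2

def fill_gap(p_lost, v_lost, p_found, dt, steps):
    filled = []
    for k in range(1, steps + 1):
        t = k * dt
        ballistic = p_lost + v_lost * t + 0.5 * G * t**2
        w = k / (steps + 1)       # 0 = pure prediction, 1 = re-detection
        filled.append((1 - w) * ballistic + w * p_found)
    return filled

# Example: ball occluded for 4 frames at 250 frames per second.
track = fill_gap(np.array([0.0, 0.0, 2.0]), np.array([3.0, 0.5, 4.0]),
                 np.array([0.06, 0.01, 2.06]), dt=1 / 250, steps=4)
```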

It's this tracking of the ball that is crucial for the AI to determine what is happening during play. When you can track the ball, you will know where it was located and when it changed direction. And with the combination of the sensors on the athletes, the algorithm will then recognize the shot, Zobrist says. Whether it was a block or a smash. You will know which team and which player it was. So it's this combination of both technologies that allows us to be accurate in the measurement of the data.

Omega Timing claims its beach volleyball system is 99 percent accurate, thanks to the sensors and multiple cameras running at 250 frames a second. Toby Breckon, professor in computer vision and image processing at Durham University, however, is interested to see if this stands up during the Games and, crucially, if the system is fooled by differences in race and gender.

What has been done is reasonably impressive. And you would need a large data set to train an AI on all the different moves, Breckon says. But one of the things is accuracy. How often does it get it wrong in terms of those different moves? How often does it lose track of the ball? And also if it works uniformly over all races and genders. Is that 99 percent accuracy on, say, the USA women's team and 99 percent accuracy on the Ghanaian women's team?
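The disaggregated evaluation Breckon is asking about is straightforward to express: rather than one headline accuracy figure, report accuracy separately for each team or demographic group. A minimal sketch with made-up records:

```python
# Per-group accuracy: an overall figure can hide gaps between groups.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, true in records:
        totals[group] += 1
        hits[group] += int(pred == true)
    return {g: hits[g] / totals[g] for g in totals}

sample = [("team_a", "spike", "spike"), ("team_a", "block", "block"),
          ("team_b", "spike", "block"), ("team_b", "smash", "smash")]
print(accuracy_by_group(sample))   # {'team_a': 1.0, 'team_b': 0.5}
```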

Zobrist is confident, and explains that while it may have been easier to call in Google or IBM to supply the AI expertise needed, this was not an option for Omega. What is extremely important, whether it's for a scoring sport or a timing sport, is that we can't have discrepancies between the explanation of the performance and the ultimate result, he says. So to protect the integrity of the result, we cannot rely on another company. We need to have the expertise to be able to explain the result and how the athletes got there.

As for future timing and tracking upgrades, Zobrist is tight-lipped, but says the Paris Games in 2024 will be key. You will see a whole new set of innovations. Of course, it will remain around timekeeping, scoring, and certainly also around motion sensors and positioning systems. And certainly also Los Angeles in 2028. We've got some really interesting projects for there that actually we've only just started.

This AI can spot sunken ships from the damn sky – The Next Web

Posted: at 3:26 pm

The Research Brief is a short take about interesting academic work.

In collaboration with the United States Navy's Underwater Archaeology Branch, I taught a computer how to recognize shipwrecks on the ocean floor from scans taken by aircraft and ships on the surface. The computer model we created is 92% accurate in finding known shipwrecks. The project focused on the coasts of the mainland U.S. and Puerto Rico. It is now ready to be used to find unknown or unmapped shipwrecks.

The first step in creating the shipwreck model was to teach the computer what a shipwreck looks like. It was also important to teach the computer how to tell the difference between wrecks and the topography of the seafloor. To do this, I needed lots of examples of shipwrecks. I also needed to teach the model what the natural ocean floor looks like.

Conveniently, the National Oceanic and Atmospheric Administration keeps a public database of shipwrecks. It also has a large public database of different types of imagery collected from around the world, including sonar and lidar imagery of the seafloor. The imagery I used extends to a little over 14 miles (23 kilometers) from the coast and to a depth of 279 feet (85 meters). This imagery contains huge areas with no shipwrecks, as well as the occasional shipwreck.
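The article doesn't describe the model's architecture, so the following is only a plausible sketch of the general setup: a small convolutional network classifying fixed-size tiles of sonar or lidar bathymetry as wreck versus natural seafloor, written in PyTorch with all sizes chosen for illustration.

```python
# Hypothetical wreck/no-wreck tile classifier; the tile size and layer widths
# are illustrative, not the author's published architecture.
import torch
import torch.nn as nn

class WreckClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),              # wreck / natural seafloor
        )

    def forward(self, x):                  # x: (batch, 1, 64, 64) depth tiles
        return self.head(self.features(x))

model = WreckClassifier()
logits = model(torch.randn(8, 1, 64, 64))  # stand-in bathymetry tiles
```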

Finding shipwrecks is important for understanding the human past (think trade, migration, war), but underwater archaeology is expensive and dangerous. A model that automatically maps all shipwrecks over a large area can reduce the time and cost needed to look for wrecks, either with underwater drones or human divers.

The Navy's Underwater Archaeology Branch is interested in this work because it could help the unit find unmapped or unknown naval shipwrecks. More broadly, this is a new method in the field of underwater archaeology that can be expanded to look for various types of submerged archaeological features, including buildings, statues and airplanes.

This project is the first archaeology-focused model that was built to automatically identify shipwrecks over a large area, in this case the entire coast of the mainland U.S. There are a few related projects that are focused on finding shipwrecks using deep learning and imagery collected by an underwater drone. These projects are able to find a handful of shipwrecks that are in the area immediately surrounding the drone.

We'd like to include more shipwreck and imagery data from all over the world in the model. This will help the model get really good at recognizing many different types of shipwrecks. We also hope that the Navy's Underwater Archaeology Branch will dive to some of the places where the model detected shipwrecks. This will allow us to check the model's accuracy more carefully.

I'm also working on a few other archaeological machine learning projects, and they all build on each other. The overall goal of my work is to build a customizable archaeological machine learning model. The model would be able to quickly and easily switch between predicting different types of archaeological features, on land as well as underwater, in different parts of the world. To this end, I'm also working on projects focused on finding ancient Maya archaeological structures, caves at a Maya archaeological site and Romanian burial mounds.

This article by Leila Character, doctoral student in geography, The University of Texas at Austin College of Liberal Arts, is republished from The Conversation under a Creative Commons license. Read the original article.

Deadline 2024: Why you only have 3 years left to adopt AI – VentureBeat

Posted: at 3:26 pm

If your company has yet to embrace AI, you're in a race against the clock. And by my calculations, you have just three years left.

How did I arrive at 2024 as the deadline for AI adoption? My prediction, formulated with KUNGFU.AI advisor Paco Nathan, is rooted in our noticing that many futurists' J curves show innovations typically have a 12-to-15-year window of opportunity: a period between when a technology emerges and when it reaches the point of widespread adoption.

While AI can be traced to the mid-1950s and machine learning dates back to the late 1970s, the concept of deep learning was popularized by the AlexNet paper published in 2012. Of course, it's not just machine learning that started the clock ticking.

Though cloud computing was initially introduced in 2006, it didn't take off until 2010 or so. The rise of data engineering can also be traced to the same year. The original paper for Apache Spark was published in 2010, and it became foundational for so much of today's distributed data infrastructure.

Additionally, the concept of data science has a widely reported inception date of 2009. That's when Jeff Hammerbacher, DJ Patil and others began getting recognized for leading data science teams and helping define the practice.

If you do the math, those 2009-2012 dates put us within that 12-to-15-year window. And that makes 2024 the cutoff for companies hoping to gain a competitive advantage from AI.
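Stated as arithmetic, the dating works out the same from either end of the range:

```python
# The article's timeline, stated directly: oldest start year plus the longest
# window, and newest start year plus the shortest window, both land on 2024.
start_years = [2009, 2010, 2012]   # data science; cloud and Spark; deep learning
lo, hi = 12, 15                    # the 12-to-15-year window of opportunity

print(min(start_years) + hi)       # 2009 + 15 = 2024
print(max(start_years) + lo)       # 2012 + 12 = 2024
```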

If you look at the graph below, from Everett Rogers' Diffusion of Innovations, you'll get a sense of how those who wait to put AI into production will miss out on cornering the market. Here the red line shows successive groups adopting new technology, while the purple line shows how market share eventually reaches a saturation level.

Source: Everett Rogers, Diffusion of Innovations

A 2019 survey conducted by the MIT Sloan Management Review and Boston Consulting Group explicitly shows how the Diffusion of Innovations theory applies to AI. Their research was based on a global survey of more than 3,000 executives, managers, and analysts across various industries.

Once the responses to questions around AI understanding and adoption were analyzed, survey respondents were assigned to one of four distinct categories:

Pioneers (20%): These organizations possess a deep knowledge of AI and incorporate it into their offerings and internal processes. They're the trailblazers.

Investigators (30%): These organizations understand AI but aren't deploying it beyond the pilot stage. They're taking more of a look-before-you-leap approach.

Experimenters (18%): These organizations are piloting AI without truly understanding it. Their strategy is fake-it-until-you-make-it.

Passives (32%): These organizations have little to no understanding of AI and will likely miss out on the opportunity to profit from it.

The 2020 survey, which uses the same questions and methodology, gives even greater insight into how executives embrace AI: 87% believe AI will offer their companies an advantage over others. Just 59% of companies, however, have an AI strategy.

Comparing the MIT and BCG 2020 survey responses to those gathered since the survey's inception in 2017, a growing number of execs recognize that competitors are using AI. Yet only one in 10 companies is using AI to generate significant financial benefits.

I anticipate this gap between leaders and laggards will continue widening, making this your company's last chance to take action before 2024 (if it hasn't already).

MIT and BCG's 2020 data reveals that companies focused on the initial steps of AI adoption (ensuring data, talent, and a strategy are in place) will have a 21% chance of becoming a market leader. When companies begin to iterate on AI solutions with their organizational users (effectively adopting AI and applying it across multiple use cases), that chance rises to 39%. And those that can orchestrate the macro and micro interactions between humans and machines (sharing knowledge amongst both and smartly structuring those interactions) will have a 73% chance of market leadership.

Building upon MIT and BCGs success predictions, McKinsey & Company has specifically broken down how AI integration impacts revenue in this 2020 chart.

Source: McKinsey & Company Global Survey, 2020

While the ROI for AI integration can be immediate, that's not typically the case. According to MIT and BCG's 2019 data, fewer than two out of five companies that have made some investment in AI (Investigators and Experimenters) report gains within three years. This stat improves to three out of five when companies that have made significant investments in AI (Pioneers) are included.

The 2020 MIT/BCG data builds upon this, claiming companies that use AI to make extensive changes to many business processes are 5X more likely to realize a major financial benefit vs. those making small or no changes to a few business processes.

So where will you be in 2024? On your way to reaping the rewards of AI, or lamenting that you missed an opportunity for market advantage?

Steve Meier is a co-founder and Head of Growth at AI services firm KUNGFU.AI.

AI Weekly: OpenAIs pivot from robotics acknowledges the power of simulation – VentureBeat

Posted: at 3:26 pm

Late last week, OpenAI confirmed it shuttered its robotics division in part due to difficulties in collecting the data necessary to break through technical barriers. After years of research into machines that can learn to perform tasks like solving a Rubik's Cube, company cofounder Wojciech Zaremba said it makes sense for OpenAI to shift its focus to other domains, where training data is more readily available.

Beyond the commercial motivations for eschewing robotics in favor of media synthesis and natural language processing, OpenAI's decision reflects a growing philosophical debate in AI and robotics research. Some experts believe training systems in simulation will be sufficient to build robots that can complete complex tasks, like assembling electronics. Others emphasize the importance of collecting real-world data, which can provide a stronger baseline.

A longstanding challenge in simulations involving real data is that every scene must respond to a robot's movements, even movements that might not have been recorded by the original sensor. Whatever angle or viewpoint isn't captured by a photo or video has to be rendered or simulated using predictive models, which is why simulation has historically relied on computer-generated graphics and physics-based rendering that somewhat crudely represents the world.

But Julian Togelius, an AI and games researcher and associate professor at New York University, notes that robots pose challenges that don't exist within the confines of simulation. Batteries deplete, tires behave differently when warm, and sensors regularly need to be recalibrated. Moreover, robots break and tend to be slow and cost a pretty penny. The Shadow Dexterous Hand, the machine that OpenAI used in its Rubik's Cube experiments, has a starting price in the thousands. And OpenAI had to improve the hand's robustness by reducing its tendon stress.

Robotics is an admirable endeavor, and I very much respect those who try to tame the mechanical beasts, Togelius wrote in a tweet. But they're not a reasonable way to do reinforcement learning, or any other episode-hungry type of learning. In my humble opinion, the future belongs to simulations.

Gideon Kowadlo, the cofounder of Cerenaut, an independent research group developing AI to improve decision-making, argues that no matter how much data is available in the real world, there's more data in simulation, and it's ultimately easier to control. Simulators can synthesize different environments and scenarios to test algorithms under rare conditions. Moreover, they can randomize variables to create diverse training sets with varying objects and environment properties.

Indeed, Ted Xiao, a scientist at Google's robotics division, says that OpenAI's move away from work with physical machines doesn't have to signal the end of the lab's research in this direction. By applying techniques including reinforcement learning to tasks like language and code understanding, OpenAI might be able to develop more capable systems that can then be applied back to robotics. For example, many robotics labs use humans holding controllers to generate data to train robots. But a general AI system that understands controllers (i.e., video games) and the video feeds from camera-equipped robots might learn to teleoperate quickly.

Recent studies hint at how a simulation-first approach to robotics might work. In 2020, Nvidia and Stanford developed a technique that decomposes vision and control tasks into machine learning models that can be trained separately. Microsoft has created an AI drone navigation system that can reason out the correct actions to take from camera images. Scientists at DeepMind trained a cube-stacking system to learn from observation in a simulated environment. And a team at Google detailed a framework that takes a motion capture clip of an animal and uses reinforcement learning to train a control policy, employing an adaptation technique to randomize the dynamics in the simulation by, for example, varying mass and friction.
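The dynamics-randomization idea in that last example can be sketched in a few lines. The simulator and policy interfaces below are hypothetical stand-ins, not any particular lab's API; the point is that each episode samples fresh physics so a policy trained in simulation doesn't overfit one set of dynamics.

```python
# Hypothetical domain-randomization loop: sample new physics each episode.
import random

def randomized_physics():
    return {
        "mass_scale": random.uniform(0.8, 1.2),
        "friction": random.uniform(0.5, 1.5),
        "motor_latency": random.uniform(0.0, 0.03),   # seconds
    }

def train(policy, simulator, episodes=10_000):
    for _ in range(episodes):
        simulator.reset(**randomized_physics())       # new dynamics each time
        policy.update(simulator.rollout(policy))      # stand-in RL update

print(randomized_physics())   # e.g. {'mass_scale': 1.07, 'friction': 0.62, ...}
```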

In a blog post in 2017, OpenAI researchers wrote that they believe general-purpose robots can be built by training entirely in simulation, followed by a small amount of self-calibration in the real world. That increasingly appears to be the case.

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer
