Daily Archives: May 18, 2023

3 Reasons C3.ai Stock Could Be Your Golden Ticket to the AI … – InvestorPlace

Posted: May 18, 2023 at 2:01 am

It's understandable if some financial traders are skeptical of enterprise artificial intelligence (AI) company C3.ai (NYSE:AI). After all, AI stock rallied hard in early 2023. Yet, C3.ai's growth story isn't over yet. There are still reasons to think about investing in this highly touted software startup.

It seems like every publicly listed technology company is jumping on the machine-learning bandwagon nowadays. CEOs are purposely mentioning AI multiple times during conference calls, just to drum up investor interest.

In contrast, C3.ai definitely isn't a bandwagon jumper. The company had been a machine-learning mainstay before the trend picked up steam in 2023. So, let's recap three great reasons to think about buying AI stock now.

By a long shot, C3.ai isn't the biggest company involved in machine learning. As of this writing, C3.ai is No. 13 among the largest AI businesses based on market capitalization.

On the other hand, you're definitely not getting pure-play machine-learning exposure if you invest in Microsoft (NASDAQ:MSFT) or Nvidia (NASDAQ:NVDA). Unlike those tech titans, C3.ai is, to quote Alex Sirois, considered to be "among the most direct ways to play the AI boom."

Sure, Microsoft invested in the technology of an AI company (specifically, OpenAI's ChatGPT chatbot). However, C3.ai actually is an AI company first and foremost. This isn't to suggest that you shouldn't invest in Microsoft, Nvidia and so on. It's possible to own shares of a variety of technology companies while also boosting your portfolio's machine-learning exposure with AI stock.

C3.ai serves the public and private sectors and has significant clients in both of those categories. The company's public-sector clients include the Sheriff's Office of San Mateo County, Calif., and even the U.S. Air Force.

Furthermore, C3.ai's private-sector clients include such corporate giants as Shell (NYSE:SHEL), Consolidated Edison (NYSE:ED) and Raytheon Technologies (NYSE:RTX). With heavy hitters like those on C3.ai's roster of clients, one might expect the company to generate robust revenue.

And indeed, C3.ai has proven itself in that regard. During the third fiscal quarter of 2023, C3.ai generated $66.7 million in total revenue, exceeding the company's guidance of $63 million to $65 million.

I'll admit, folks who took a share position in C3.ai in early April entered into a crowded trade. If they held on to their stake in C3.ai, they're surely underwater on their investment now.

You might hear analysts warning financial traders about chasing the rally in AI stock. Yet, that rally is old news by now. The stock has pulled back, thereby allowing new investors to get on board and prior shareholders to reduce their cost basis.

In other words, you don't have to worry about being a hype-chaser if you choose to invest in C3.ai now. The C3.ai share price is close to where it was in February of this year, before machine-learning mania took over the financial markets. Therefore, don't hesitate to give C3.ai a chance, as the company deserves a place in your AI-friendly portfolio right now.

On the date of publication, David Moadel did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

David Moadel has provided compelling content and crossed the occasional line on behalf of Motley Fool, Crush the Street, Market Realist, TalkMarkets, TipRanks, Benzinga, and (of course) InvestorPlace.com. He also serves as the chief analyst and market researcher for Portfolio Wealth Global and hosts the popular financial YouTube channel Looking at the Markets.

Zoom makes a big bet on AI with investment in Anthropic – VentureBeat

Posted: at 2:01 am

Zoom is going all in on generative AI. After announcing a partnership with OpenAI in March, the enterprise communication company today said it is teaming up with AI startup Anthropic to integrate Anthropic's Claude AI assistant into Zoom's productivity platform. The company has also made an investment of an undisclosed amount in Google-backed Anthropic through its global investment arm.

The partnership, a part of Zoom's federated approach to AI, comes as Microsoft continues to roll out AI-powered smarts in Teams, Google brings AI into Workspace and Salesforce focuses on Slack GPT.

However, Zoom says it will first incorporate Claude to evolve its omnichannel contact center offerings before moving on to other segments of the platform. It did not share when or how the broader integration would be executed.

Zoom's Contact Center is a video-first support hub that improves customer support for enterprises. It includes multiple products, including Zoom Virtual Agent and Zoom Workforce Management.

With the Anthropic partnership, Zoom plans to integrate Claude across the entire Contact Center portfolio to build self-service features that not only improve end-user outcomes but also enable superior agent experiences.

For instance, it will be able to understand customers' intent from their inputs and guide them to the best solution, as well as provide actionable insights that managers can use to coach agents.

"Anthropic's Constitutional AI model is primed to provide safe and responsible integrations for our next-generation innovations, beginning with the Zoom Contact Center portfolio," said Smita Hashim, chief product officer at Zoom. "With Claude guiding agents toward trustworthy resolutions and powering self-service for end users, companies will be able to take customer relationships to another level."

"Moving ahead, Zoom Contact Center will also use Claude to provide the right resources to agents, enabling them to deliver improved customer service," a company spokesperson told VentureBeat. They added that Claude's capabilities will be expanded across the Zoom platform (which includes Team Chat, Meetings, Phone and Whiteboard) but did not share specific details.

The partnership with Anthropic is Zoom's latest move in its federated approach to AI, in which it uses its own proprietary AI models along with those from leading AI companies and select customers' own models.

"With this flexibility to incorporate multiple types of models, our goal is to provide the most value for our customers' diverse needs. These models are also customizable, so they can be tuned to a given company's vocabulary and scenarios for better performance," Hashim said in a blog post.

Zoom has already been working with OpenAI for IQ, its conversational intelligence product. In fact, back in March, Zoom announced multiple AI-powered capabilities for the product with OpenAI, including the ability to generate draft messages and emails and provide summaries for chat threads. The capabilities started rolling out for select customers in April.

AI voice phone scams are on the rise. Here’s how to avoid them – USA TODAY

Posted: at 2:01 am

Jennifer Jolly | Special to USA TODAY

The most powerful people on the planet don't quite know what to make of AI as it quickly becomes one of the most significant new technologies in history.

But criminals sure do.

In the six months since OpenAI first unleashed ChatGPT on the masses, igniting an artificial intelligence arms race with the potential to reshape history, a new strain of cybercriminals has been among the first to cash in.

These next-gen bandits come armed with sophisticated new tools and techniques to steal hundreds of thousands of dollars from people like you and me.

"I am seeing a highly concerning rise in criminals using advanced technology, AI-generated deepfakes and cloned voices, to perpetrate very devious schemes that are almost impossible to detect," Haywood Talcove, CEO of LexisNexis Risk Solutions' Government Group, a multinational information and analytics company based in Atlanta, told me over Zoom.

"If you get a call in the middle of the night and it sounds exactly like your panicked child or grandchild saying, 'help, I was in a car accident, the police found drugs in the car, and I need money to post bail (or for a retainer for a lawyer),' it's a scam," Talcove explained.

Earlier this year, law enforcement officials in Canada said one man used AI-generated voices he likely cloned from social media profiles to con at least eight senior citizens out of $200,000 in just three days.

Similar scams preying on parents and grandparents are also popping up in nearly every state in America. This month, several Oregon school districts warned parents about a spate of fake kidnapping calls.

The calls come in from an unknown caller ID (though even cell phone numbers are easy to spoof these days). A voice comes on that sounds exactly like your loved one saying they're in trouble. Then they get cut off, you hear a scream, and another voice comes on the line demanding ransom, or else.

The FBI, FTC, and even the NIH warn of similar scams targeting parents and grandparents across the United States. In the last few weeks, it's happened in Arizona, Illinois, New York, New Jersey, California, Washington, Florida, Texas, Ohio, Virginia, and many other states.

An FBI special agent in Chicago told CNN that families in America lose an average of $11,000 in each fake-kidnapping scam.

Talcove recommends having a family password that only you and your closest inner circle share. Don't make it anything easily discovered online, either: no names of pets, favorite bands, etc. Better yet, make it two or three words that you discuss and memorize. If you get a call that sounds like a loved one, ask them for the code word or phrase immediately.

If the caller pretends to be law enforcement, tell them you have a bad connection and will call them back. Ask the name of the facility they're calling from (campus security, local jail, the FBI), and hang up (even though scammers will say just about anything to get you to stay on the line). If you can't reach your loved one, look up the phone number of that facility or call your local law enforcement and tell them what's going on.

Remember, these criminals use fear, panic, and other proven tactics to get you to share personal information or send money. Usually, the caller wants you to wire money, transfer it directly via Zelle or Venmo, send cryptocurrency, or buy gift cards and give them the card numbers and PINs. These are all giant red flags.

Also, be more careful than ever about what information you put out into the world.

An FTC alert also suggests calling the person who supposedly contacted you to verify the story, using "a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friend," it says on its website.

"A criminal only needs three seconds of audio of your voice to clone it," Talcove warns. "Be very careful with social media. Consider making your accounts private. Don't reveal the names of your family or even your dog. This is all information that a criminal armed with deepfake technology could use to fool you or your loved ones into a scam."

Talcove shared a half dozen how-to video clips he says he pulled from the dark web showing these scams in action. He explained that criminals often sell information on how to create these deepfakes to other fraudsters.

"I keep my eyes on criminal networks and emerging tactics. We literally monitor social media and the dark web and infiltrate criminal groups," he added. "It's getting scary. For example, filters can be applied over Zoom to change somebody's voice and appearance. A criminal who grabs just a few seconds of audio from your [social media feeds], for example, can clone your voice and tone."

I skipped all the organized crime parts and just Googled "AI voice clone." I won't say exactly which tool I used, but it took me less than ten minutes to upload 30 seconds of my husband's voice from a video saved on my smartphone to a free online AI audio generator. I typed in a few funny lines I wanted him to say, saved the result on my laptop, and texted it to our family. The most challenging part was converting the original clip from a .mov to a .wav file (and that's easy too).

It fooled his mom, my parents, and our children.

"We're all vulnerable, but the most vulnerable among us are our parents and grandparents," Talcove says. "Ninety-nine in 100 people couldn't detect a deepfake video or voice clone. But our parents and grandparents, categorically, are less familiar with this technology. They would never suspect that the voice on the phone, which sounds exactly like their child screaming for help during a kidnapping, might be completely artificial."

Jennifer Jolly is an Emmy Award-winning consumer tech columnist. The views and opinions expressed in this column are the author's and do not necessarily reflect those of USA TODAY.

Amazon is building an AI-powered conversational experience for … – The Verge

Posted: at 2:01 am

Amazon is pitching these changes to search as absolutely massive. "This will be a once in a generation transformation for Search, just like the Mosaic browser made the Internet easier to engage with three decades ago," Amazon wrote. "If you missed the 90s (WWW, Mosaic, and the founding of Amazon and Google), you don't want to miss this opportunity." And we might be seeing the changes sooner rather than later, as Amazon wants to "deliver this vision to our customers right away."

It's understandable why Amazon seems to be racing here. A chatbot can be a useful starting point when you're looking to buy something with specific parameters. And just last week, Google showed how its new AI-powered Search Generative Experience can create buying guides from a single search. Amazon certainly doesn't want to lose any ground in shopping, so it's not surprising that the company wants to introduce its own chatbot very soon.

That said, it's unclear when this new experience might actually be released or what it might look like. When we asked for comment, Amazon spokesperson Keri Bertolino only shared this: "We are significantly investing in generative AI across all of our businesses." And given the general state of AI chatbots right now, I'm not confident the chatbot will be all that good. (In our comparison from March, ChatGPT generally beat out Microsoft's Bing and Google's Bard.)

Still, it seems extremely likely that conversational shopping is coming to Amazon in the not-too-distant future, so you might as well get prepared for the search experience on Amazon to get even more cluttered. Hopefully Amazon makes this new experience optional, as Google has for its own generative AI search.

AI speculators need to ‘differentiate between actual spending and investment’ and hype: Strategist – Yahoo Finance

Posted: at 2:01 am

Tematica Research CIO Chris Versace and J.P. Morgan Asset Management Global Market Strategist Meera Pandit discuss how the proliferation of AI speculation has impacted markets at large.

BRAD SMITH: Just this morning, Alphabet hit $1 and 1/2 trillion in market cap for the first time in over a year. The tech giant getting a boost from the AI hype. And shares of Alphabet are up 35% this year after a dismal 2022, right alongside Microsoft, Meta, and Amazon, which are all enjoying double-digit gains for the year. So is this rally all AI hype here? I mean, have we seen so many companies just mention AI and the market say, all right, yeah, automatically there's a multiplier effect--

CHRIS VERSACE: So I'm going--

BRAD SMITH: --that you have to benefit from?

CHRIS VERSACE: So I'm going to use one of my favorite words called hopium. There is, I think, a lot of that in there because when you look at the end markets, like-- like for Microsoft, you know, data center, OK, moving along, not necessarily shooting the lights out, PCs continue to be weak, and you look at some of the other end markets also for NVIDIA, for example, what has been the common thread here? It's all been, what you just said, the mention of AI.

And what bothers me a little bit about it is how companies like PepsiCo, Wendy's, and others are starting to talk about how they will be using AI and how that's going to really change their business. And then all of a sudden, I flash back to 1999, 2000 in the dot-com era when all sorts of companies were like, well, we're no longer X company. We're x.com company. And it's almost like I got to be in the game. I got to say something.

BRAD SMITH: Which was actually different for Apple because they didn't bring it up voluntarily. It got brought up in a question for them.

CHRIS VERSACE: Yes. That's 100% correct.

JULIE HYMAN: Although for Apple, it makes more sense than--

BRAD SMITH: It does. It makes way more sense than--

JULIE HYMAN: --Coke or Pepsi--

BRAD SMITH: --Wendy's.

JULIE HYMAN: --for example, where it's, like, a random bolt on.

CHRIS VERSACE: Well, but I mean, if you think about your iPhone, you're a carrier-- you are already carrying AI around with you day in, day out. So I'm concerned a little bit that this is overdone, right? You know, typically, when we have some new-new thing on the technology front, expectations do get big and company shares can get out over their skis. So the question to me is, what's the pop in that potential bubble? That's what I'm watching for.

JULIE HYMAN: I want to ask you about a specific one of those names that you mentioned because I know, Meera, you talk sort of more broadly. But I want to ask about Nvidia because this is a stock that has doubled--

CHRIS VERSACE: Yes.

JULIE HYMAN: --this year. It hasn't even reported its earnings yet. That's set for, what--

CHRIS VERSACE: Next week.

JULIE HYMAN: --a week from today.

CHRIS VERSACE: Next week. Next week.

JULIE HYMAN: So is this-- I mean, AI is sort of really knit into the fabric of what Nvidia chips do and what they want them to do. Does it make more sense for a company like this to be up that much or is it also too much?

CHRIS VERSACE: I would say that it's-- going into their earnings, it's probably priced to perfection, which means that they need not only to deliver, they need to beat and raise in order for the stock to-- to move further higher.

BRAD SMITH: AI as a theme, if investors were to even try and position their portfolio or have some type of exposure to AI right now-- from what we've heard even in the mentions over the earnings season, I think Mark Zuckerberg actually did the best of laying this out that it's applications that live on top of the language models that live on top of chips-- is there a strong thematic play that's emerging right now as a subset of AI perhaps?

MEERA PANDIT: I think we need to not put the cart before the horse here because I see a couple of different headwinds from an AI perspective if we bring in the macro story because one, if we think about what AI is going to require, it's going to require businesses to spend more precisely at a time where profits are weakening, companies are trying to batten down the hatches a bit and preserve the profitability. So that's a little bit of a headwind there in terms of how much additional spend can go towards this in the near term.

The other thing I'd say is I think with a lot of these technologies, they actually require more workers up front even if eventually they will save on the labor force. So if we think about the worker's position, the shortage of workers we have, the very specific training that might be required, I think there are some headwinds when we think about the supply of workers and companies' ability to put capital in the near term.

JULIE HYMAN: And sorry to interrupt, Meera, but as you're talking, it occurs to me, do companies also risk putting-- over-resourcing AI at the expense because of the hype, because of the push by investors at the expense of core businesses?

MEERA PANDIT: That's the risk.

JULIE HYMAN: Yeah.

MEERA PANDIT: You don't want businesses to be playing in too many sandboxes at the same time. So I do think that this is the time where businesses need to really focus. And businesses have been good about focusing on wage pressures, cost pressures, higher dollar over the last year or so and making some tough decisions. I don't think company managers should lose that discipline in the face of the newest shiny object.

Now, I think long term, to your point, this is a huge theme that will play out over many years. But we might need to be a little bit patient. Think about, again, some of the ancillary technology required, the inputs required over a longer period of time that can fuel this theme. But I think that the huge run-up in markets solely around AI enthusiasm might be a little bit beyond its skis.

BRAD SMITH: Chris, I saw you leaning in, about to jump on the table.

CHRIS VERSACE: Yeah. Yeah, I was just going to say that we have to kind of differentiate between actual spending and investment on this compared to companies talking about it because in this environment, if you trace it back over the last several weeks, Microsoft shares took off, right, and Google-- kind of Google shares lagged behind really until very recently, especially coming out of their I/O event, where they really talked about how they're incorporating AI not only into Bard, but in other areas, saying that, hey, we are in this game. And again, if you look at that as the model, companies that don't really talk about it, there's going to be this perception that, oh, maybe they're falling behind, and they won't want that.

BRAD SMITH: Are we underestimating what the regulatory framework for AI may look like at this point?

CHRIS VERSACE: My suspicion is yes.

BRAD SMITH: Yeah.

JULIE HYMAN: Yeah, I mean, at the same time, if we have companies that risk not talking about it, like is that-- and falling behind, is that an opportunity for investors? In other words, just because a company isn't talking about it doesn't mean it's not doing it. It doesn't mean-- do you know what I mean?

CHRIS VERSACE: Yeah. Yeah. Yeah. Well, take your point on Apple-- or Brad's point on Apple, right? They are doing it. They are investing in it. But they're not necessarily talking it up. I think the one thing I will say is not for investors, I think for traders.

JULIE HYMAN: Gotcha. An important distinction always to make.

BRAD SMITH: Great to have you both here with us today. We've got Chris Versace, Tematica Research chief investment officer, Chris, great to see you, as always, as well as Meera Pandit, who is the JP Morgan Asset Management global market strategist. We appreciate the time this morning.

CHRIS VERSACE: Thank you.

AI Can Be Both Accurate and Transparent – HBR.org Daily

Posted: at 2:01 am

In 2019, Apple's credit card business came under fire for offering a woman one twentieth the credit limit offered to her husband. When she complained, Apple representatives reportedly told her, "I don't know why, but I swear we're not discriminating. It's just the algorithm."

Today, more and more decisions are made by opaque, unexplainable algorithms like this one, often with similarly problematic results. From credit approvals to customized product or promotion recommendations to resume readers to fault detection for infrastructure maintenance, organizations across a wide range of industries are investing in automated tools whose decisions are often acted upon with little to no insight into how they are made.

This approach creates real risk. Research has shown that a lack of explainability is one of executives' most common concerns related to AI, and that it has a substantial impact on users' trust in and willingness to use AI products, not to mention their safety.

And yet, despite the downsides, many organizations continue to invest in these systems, because decision-makers assume that unexplainable algorithms are intrinsically superior to simpler, explainable ones. This perception is known as the accuracy-explainability tradeoff: Tech leaders have historically assumed that the better a human can understand an algorithm, the less accurate it will be.

Specifically, data scientists draw a distinction between so-called black-box and white-box AI models: White-box models typically include just a few simple rules, presented for example as a decision tree or a simple linear model with limited parameters. Because of the small number of rules or parameters, the processes behind these algorithms can typically be understood by humans.

In contrast, black-box models use hundreds or even thousands of decision trees (known as random forests), or billions of parameters (as deep learning models do), to inform their outputs. Cognitive load theory has shown that humans can only comprehend models with up to about seven rules or nodes, making it functionally impossible for observers to explain the decisions made by black-box systems. But does their complexity necessarily make black-box models more accurate?
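The rule-count contrast above can be made concrete with a small sketch (an illustration, not from the article; the dataset and hyperparameters are arbitrary choices): a shallow decision tree can be printed in full as a handful of human-readable rules, while a random forest is a pile of hundreds of trees with no comparably compact summary.

```python
# White box vs. black box, illustrated with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# White box: a shallow tree whose entire decision process prints as text.
white_box = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(white_box)       # the whole model, human-readable
n_rules = white_box.get_n_leaves()   # each leaf corresponds to one rule

# Black box: hundreds of deep trees; no equivalent readable summary exists.
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
n_trees = len(black_box.estimators_)

print(rules)
print(f"white-box rules: {n_rules}, black-box trees: {n_trees}")
```

With `max_depth=2` the white-box model has at most four leaves, comfortably under the roughly seven rules that cognitive load theory suggests a person can follow; the forest's 300 trees cannot be summarized that way.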

To explore this question, we conducted a rigorous, large-scale analysis of how black- and white-box models performed on a broad array of nearly 100 representative datasets (known as benchmark classification datasets), spanning domains such as pricing, medical diagnosis, bankruptcy prediction, and purchasing behavior. We found that for almost 70% of the datasets, the black-box and white-box models produced similarly accurate results. In other words, more often than not, there was no tradeoff between accuracy and explainability: A more-explainable model could be used without sacrificing accuracy.

This is consistent with other emerging research exploring the potential of explainable AI models, as well as our own experience working on case studies and projects with companies across diverse industries, geographies, and use cases. For example, it has been repeatedly demonstrated that COMPAS, the complicated black-box tool that's widely used in the U.S. justice system for predicting likelihood of future arrests, is no more accurate than a simple predictive model that only looks at age and criminal history. Similarly, a research team created a model to predict likelihood of defaulting on a loan that was simple enough that average banking customers could easily understand it, and the researchers found that their model was less than 1% less accurate than an equivalent black-box model (a difference that was within the margin of error).

Of course, there are some cases in which black-box models are still beneficial. But in light of the downsides, our research suggests several steps companies should take before adopting a black-box approach:

As a rule of thumb, white-box models should be used as benchmarks to assess whether black-box models are necessary. Before choosing a type of model, organizations should test both and if the difference in performance is insignificant, the white-box option should be selected.
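The benchmarking rule of thumb above can be sketched roughly as follows (a hypothetical illustration; the dataset and the 1% "insignificant difference" threshold are assumptions for demonstration, not values from the study): fit both model types on the same train/test split and keep the white-box model unless the black box is meaningfully better.

```python
# Using a white-box model as the benchmark before adopting a black box.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

white = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_tr, y_tr)
black = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)

acc_white = accuracy_score(y_te, white.predict(X_te))
acc_black = accuracy_score(y_te, black.predict(X_te))

# Prefer the explainable model unless the black box clears the threshold.
THRESHOLD = 0.01  # assumed cutoff for an "insignificant" difference
choice = "white box" if acc_black - acc_white <= THRESHOLD else "black box"
print(f"white: {acc_white:.3f}  black: {acc_black:.3f}  -> {choice}")
```

In practice the comparison should use cross-validation and a proper significance test rather than a single split, but the decision logic is the same: the black box has to earn its opacity.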

One of the main factors that will determine whether a black-box model is necessary is the data involved. First, the decision depends on the quality of the data. When data is noisy (i.e., when it includes a lot of erroneous or meaningless information), relatively simple white-box methods tend to be effective. For example, we spoke with analysts at Morgan Stanley who found that for their highly noisy financial datasets, simple trading rules such as "buy stock if company is undervalued, underperformed recently, and is not too large" worked well.

Second, the type of data also affects the decision. For applications that involve multimedia data such as images, audio, and video, black-box models may offer superior performance. For instance, we worked with a company that was developing AI models to help airport staff predict security risk based on images of air cargo. They found that black-box models had a higher chance of detecting high-risk cargo items that could pose a security threat than equivalent white-box models did. These black-box tools enabled inspection teams to save thousands of hours by focusing more on high-risk cargo, substantially boosting the organization's performance on security metrics. In similarly complex applications such as face detection for cameras, vision systems in autonomous vehicles, facial recognition, image-based medical diagnostic devices, illegal/toxic content detection, and most recently, generative AI tools like ChatGPT and DALL-E, a black-box approach may be advantageous or even the only feasible option.

Transparency is always important to build and maintain trust, but it's especially critical for particularly sensitive use cases. In situations where a fair decision-making process is of utmost importance to your users, or in which some form of procedural justice is a requirement, it may make sense to prioritize explainability even if your data might otherwise lend itself to a black-box approach, or if you've found that less-explainable models are slightly more accurate.

For instance, in domains such as hiring, allocation of organs for transplant, and legal decisions, opting for a simple, rule-based, white-box AI system will reduce risk to both the organization and its users. Many leaders have discovered these risks the hard way: In 2015, Amazon found that its automated candidate screening system was biased against female software developers, while a Dutch AI welfare fraud detection tool was shut down in 2018 after critics decried it as "a large and non-transparent black hole."

An organizations choice between white or black-box AI also depends on its own level of AI readiness. For organizations that are less digitally developed, in which employees tend to have less trust in or understanding of AI, it may be best to start with simpler models before progressing to more complex solutions. That typically means implementing a white-box model that everyone can easily understand, and only exploring black-box options once teams have become more accustomed to using these tools.

For example, we worked with a global beverage company that launched a simple white-box AI system to help employees optimize their daily workflows. The system offered limited recommendations, such as which products should be promoted and how much of different products should be restocked. Then, as the organization matured in its use of and trust in AI, managers began to test out whether more complex, black-box alternatives might offer advantages in any of these applications.
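The appeal of a white-box system like this is that every recommendation traces back to a rule an employee can verify by hand. A minimal sketch of such a rule; the formula, parameter names, and figures are invented for illustration and are not the beverage company's actual system:

```python
# A transparent, rule-based (white-box) restock recommendation: order
# enough units to cover resupply lead time plus a safety buffer.
# All rules and numbers here are hypothetical.
def restock_recommendation(stock_on_hand, avg_daily_sales,
                           lead_time_days=3, safety_days=2):
    """Recommend units to order so stock covers lead time plus buffer."""
    target_stock = avg_daily_sales * (lead_time_days + safety_days)
    return max(0, round(target_stock - stock_on_hand))

# A merchandiser can check this by hand: 12 units/day * 5 days = 60,
# minus 40 already on hand, means order 20.
print(restock_recommendation(stock_on_hand=40, avg_daily_sales=12))
```

Because the arithmetic is visible, an employee who distrusts the tool can audit any single recommendation in seconds, which is exactly the trust-building step the passage describes.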

In certain domains, explainability might be a legal requirement, not a nice-to-have. For instance, in the U.S., the Equal Credit Opportunity Act requires financial institutions to be able to explain the reasons why credit has been denied to a loan applicant. Similarly, Europe's General Data Protection Regulation (GDPR) suggests that employers should be able to explain how candidates' data has been used to inform hiring decisions. When organizations are required by law to be able to explain the decisions made by their AI models, white-box models are the only option.
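One reason white-box models satisfy requirements like these is that an interpretable scorecard generates its own explanation: the per-feature contributions behind a denial double as the reasons a notice must cite. A hedged sketch of the idea, with invented weights, features, and threshold (not any real lender's model):

```python
# Hypothetical white-box credit scorecard. Each feature's weighted
# contribution is visible, so a denial can be explained by citing the
# features that pulled the score down the most.
WEIGHTS = {"income": 0.4, "credit_history_years": 0.3, "debt_ratio": -0.6}
APPROVAL_THRESHOLD = 50.0

def decide(applicant):
    """Return (approved, score, reasons); reasons is None on approval."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= APPROVAL_THRESHOLD
    # For a denial, the most negative contributions are the explanation.
    reasons = None if approved else sorted(contributions, key=contributions.get)[:2]
    return approved, score, reasons

approved, score, reasons = decide(
    {"income": 80, "credit_history_years": 2, "debt_ratio": 55}
)
print(approved, round(score, 1), reasons)
```

A black-box model with the same accuracy could not produce this list of reasons directly, which is why regulation of this kind effectively forces the white-box choice.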

Finally, there are of course contexts in which black-box models are both undeniably more accurate (as was the case in 30% of the datasets we tested in our study) and acceptable with respect to regulatory, organizational, or user-specific concerns. For example, applications such as computer vision for medical diagnoses, fraud detection, and cargo management all benefit greatly from black-box models, and the legal or logistical hurdles they pose tend to be more manageable. In cases like these, if an organization does decide to implement an opaque AI model, it should take steps to address the trust and safety risks associated with a lack of explainability.

In some cases, it is possible to develop an explainable white-box proxy to clarify, in approximate terms, how a black-box model has reached a decision. Even if this explanation isn't fully accurate or complete, it can go a long way to build trust, reduce biases, and increase adoption. In addition, a greater (if imperfect) understanding of the model can help developers further refine it, adding more value to these businesses and their end users.
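This idea, often called a surrogate model, can be sketched in a few lines: query the opaque model on sample inputs, fit the simplest rule that reproduces its decisions, and report how faithfully the rule agrees with the original. Everything below (the hidden scoring function, the one-threshold rule) is an invented toy standing in for the approach, not a technique taken from the article:

```python
import random

# Stand-in for an opaque model: it scores an applicant from two
# features with a nonlinear rule we pretend we cannot inspect.
def black_box(income, debt_ratio):
    return 1 if income * (1 - debt_ratio) ** 2 > 30 else 0

# White-box proxy: query the black box on sample inputs, then pick the
# single income threshold that best reproduces its decisions.
def fit_surrogate(samples):
    labels = [black_box(inc, dr) for inc, dr in samples]
    best_threshold, best_fidelity = None, -1.0
    for threshold in range(0, 201, 5):
        preds = [1 if inc > threshold else 0 for inc, _ in samples]
        fidelity = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if fidelity > best_fidelity:
            best_threshold, best_fidelity = threshold, fidelity
    return best_threshold, best_fidelity

random.seed(0)
samples = [(random.uniform(0, 200), random.uniform(0, 1)) for _ in range(1000)]
threshold, fidelity = fit_surrogate(samples)
print(f"Surrogate rule: approve if income > {threshold}")
print(f"Agreement with black box: {fidelity:.0%}")
```

The fidelity score makes the trade-off explicit: users get a rule they can understand, along with an honest measure of how often that rule matches what the opaque model actually does.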

In other cases, organizations may truly have very limited insight into why a model makes the decisions it does. If an approximate explanation isn't possible, leaders can still prioritize transparency in how they talk about the model both internally and externally, openly acknowledging the risks and working to address them.

***

Ultimately, there is no one-size-fits-all solution to AI implementation. All new technology comes with risks, and the choice of how to balance those risks with the potential rewards will depend on the specific business context and data. But our research demonstrates that in many cases, simple, interpretable AI models perform just as well as black-box alternatives without sacrificing the trust of users or allowing hidden biases to drive decisions.

The authors would like to acknowledge Gaurav Jha and Sofie Goethals for their contribution.

Read the rest here:

AI Can Be Both Accurate and Transparent - HBR.org Daily


You’re Probably Underestimating AI Chatbots | WIRED – WIRED

Posted: at 2:01 am

In the spring of 2007, I was one of four journalists anointed by Steve Jobs to review the iPhone. This was probably the most anticipated product in the history of tech. What would it be like? Was it a turning point for devices? Looking back at my review today, I am relieved to say it's not an embarrassment: I recognized the device's generational significance. But for all the praise I bestowed upon the iPhone, I failed to anticipate its mind-blowing secondary effects, such as the volcanic melding of hardware, operating system, and apps, or its hypnotic effect on our attention. (I did urge Apple to encourage outside developers to create new uses for the device.) Nor did I suggest we should expect the rise of services like Uber or TikTok or make any prediction that family dinners would turn into communal display-centric trances. Of course, my primary job was to help people decide whether to spend $500, which was super expensive for a phone back then, to buy the damn thing. But reading the review now, one might wonder why I spent time griping about AT&T's network or the web browser's inability to handle Flash content. That's like quibbling over what sandals to wear just as a three-story tsunami is about to break.

I am reminded of my failure of foresight when reading about the experiences people are having with recent AI apps, like large language model chatbots and AI image generators. Quite rightfully, people are obsessing about the impact of a sudden cavalcade of shockingly capable AI systems, though scientists often note that these seemingly rapid breakthroughs have been decades in the making. But as when I first pawed the iPhone in 2007, we risk failing to anticipate the potential trajectories of our AI-infused future by focusing too much on the current versions of products like Microsoft's Bing chat, OpenAI's ChatGPT, Anthropic's Claude, and Google's Bard.

This fallacy can be clearly observed in what has become a new and popular media genre, best described as prompt-and-pronounce. The modus operandi is to attempt some task formerly limited to humans and then, often disregarding the caveats provided by the inventors, take it to an extreme. The great sports journalist Red Smith once said that writing a column is easy: you just open a vein and bleed. But would-be pundits now promote a bloodless version: you just open a browser and prompt. (Note: this newsletter was produced the old-fashioned way, by opening a vein.)

Typically, prompt-and-pronounce columns involve sitting down with one of these way-early systems and seeing how well it replaces something previously limited to the realm of the human. In a typical example, a New York Times reporter used ChatGPT to answer all her work communications for an entire week. The Wall Street Journal's product reviewer decided to clone her voice (hey, we did that first!) and appearance using AI to see if her algorithmic doppelgängers could trick people into mistaking the fake for the real thing. There are dozens of similar examples.

Generally, those who stage such stunts come to two conclusions: These models are amazing, but they fall miserably short of what humans do best. The emails fail to pick up workplace nuances. The clones have one foot dragging in the uncanny valley. Most damningly, these text generators make things up when asked for factual information, a phenomenon known as "hallucinations" that is the current bane of AI. And it's a plain fact that the output of today's models often has a soulless quality.

In one sense, it's scary: will our future world be run by "flawed mind children," as roboticist Hans Moravec calls our digital successors? But in another sense, the shortcomings are comforting. Sure, AIs can now perform a lot of low-level tasks and are unparalleled at suggesting plausible-looking Disneyland trips and gluten-free dinner party menus, but, the thinking goes, the bots will always need us to make corrections and jazz up the prose.

See original here:

You're Probably Underestimating AI Chatbots | WIRED - WIRED


AI presents political peril for 2024 with threat to mislead voters – The Associated Press

Posted: at 2:01 am

WASHINGTON (AP) Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos; it could also be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

"We're not prepared for this," warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. "To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it's going to have a major impact."

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence.

Here are a few: automated robocall messages, in a candidate's voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

"What if Elon Musk personally calls you and tells you to vote for a certain candidate?" said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. "A lot of people would listen. But it's not him."

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper's reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text "What if the weakest president we've ever had was re-elected?"

A series of AI-generated images follows: Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

"An AI-generated look into the country's possible future if Joe Biden is re-elected in 2024," reads the ad's description from the RNC.

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

"What happens if an international entity, a cybercriminal or a nation state, impersonates someone? What is the impact? Do we have any recourse?" Stoyanov said. "We're going to see a lot more misinformation from international sources."

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show Trump's mug shot also fooled some social media users, even though the former president didn't take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.

Some states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other.

"It's important that we keep up with the technology," Clarke told The Associated Press. "We've got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don't have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive."

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them "a deception" with "no place in legitimate, ethical campaigns."

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT "every single day" and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward.

Nellis' newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails, all typically tedious tasks on campaigns.

"The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket," he said.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.

___

Follow the AP's coverage of misinformation at https://apnews.com/hub/misinformation and coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence

Originally posted here:

AI presents political peril for 2024 with threat to mislead voters - The Associated Press


We need AI to help us face the challenges of the future – The Guardian

Posted: at 2:01 am

Readers respond to Naomi Klein's article that argued it is delusional to believe AI machines will benefit humanity

Fri 12 May 2023 11.55 EDT

Naomi Klein's article about the dangers of generative AI makes many valid points about the economic and social consequences of the new technology (AI machines aren't hallucinating. But their makers are, 8 May). But her choice of language about how to describe the mistakes that the new AI makes seems to suggest she is committed mainly to providing an ideological interpretation of the new technology.

Saying that mistakes are the results of "glitches in the code" rather than the tech "hallucinating" suggests the simulation is a simple one, involving a kind of power of the false, rather than a more complex one that allows the possibility of some form of fabulation. This is important because it means that the technology can't be seen simply as a control technology, like nuclear fusion or self-driving cars, but instead indicates a switch to an adaptive form of technology, i.e., one based on adapting what is already out there rather than trying to reinvent what exists, as in some form of innovation.

Obviously, climate change will require more of the adaptive kinds of technology, like reusable space rockets and wind farms, because control technologies are very resource heavy and tend to cause a lot of collateral damage.
Terry Price
London

Naomi Klein is right to voice scepticism about the claims made for generative AI. As its development coincides with endgame capitalism, a minimum requirement for its effective governance must be that those responsible for its programming are truly representative, not only of humanity as a whole but the living planet.

Rather than a group of white, male, wealthy individuals developing AI in their image, we need to ensure that indigenous wisdom, the aspirations of future generations drawn from all continents, and those able to identify the impact of potential decisions and actions on our ecosystems all participate in the design of these AI developments. Without such input, all AI will do is exacerbate our demise: with these contributions, it may yet avert it. Surely this is an issue that is too important to be left to Silicon Valley to self-determine.
Dave Hunter
Bristol

The real danger of AI systems arises from the fact that these systems have no actual intelligence and so cannot distinguish whether the results they produce are correct or not. ChatGPT produces intelligent results in the midst of a whole lot of other results which, to our human intelligence, are simply ridiculous. This doesn't matter too much because we simply laugh at and discard the ridiculous results.

But when these AI systems are controlling cars and planes, where the ridiculous results are a danger to life and can't just be discarded, the consequences could be catastrophic. The artificial neural networks producing AI are bandied about as emulators of the brain. But in spite of decades of dedicated research, neural networks have just 10 to 1,000 neurons, whereas the human brain has 86bn of them.

No wonder that an AI system has no way of knowing whether it has produced an intelligent (by human standards) result.
Charles Rowe
Wantage, Oxfordshire

It is understandable that there is concern over the effect that AI will have on our future, but I am equally concerned about the damage that humans will do if we're left in charge (Why the godfather of AI fears for humanity, 5 May).

Would an AI system really have dealt with the Covid pandemic worse than Boris Johnson? Would it have allowed our planet to get so close to the precipice of climate catastrophe? Geoffrey Hinton believes that once AI is more intelligent than us, it will inevitably take charge, and perhaps he is right to be concerned. On the other hand, it might be just what we need.
Ben Chester
Stroud, Gloucestershire



Continued here:

We need AI to help us face the challenges of the future - The Guardian


End Of Google's Dominance? Stock Gets Rare Analyst Downgrade Over AI Fears – Forbes

Posted: at 2:01 am

Updated May 15, 2023, 04:03pm EDT

Loop Capital downgraded its stock rating for Google parent Alphabet from a buy to a hold in a buzzy Monday morning note to clients, throwing some cold water on last week's boundless optimism for the Google parent's future in the budding artificial intelligence sphere.

The boom in AI chatbots could cause "behavioral changes" that make users less likely to rely on traditional search engines, placing a "ceiling" on Alphabet's valuation, Loop Capital analysts Rob Sanderson and Alan Gould wrote in a Monday note.

The strategists set a $118 target for Alphabet shares, a tick above the stock's $117 price Monday and well below the $130 average analyst target for the stock, according to FactSet.

It's the first downgrade for Alphabet since at least March 3, per FactSet.

Shares of Alphabet tumbled nearly 1% in Monday trading, moving against the tech-heavy Nasdaq's slight gains, though the stock is still up more than 10% over the past week as investors cheered on the company's Wednesday presentation outlining the incorporation of AI into various phases of its business.

Loop's skepticism on Alphabet comes not from concerns about the oft-discussed gains made by Microsoft in the AI space, but rather about how generative AI chatbots like Microsoft-backed ChatGPT call into question Google's long-standing status as the primary gateway to the web, calling the shift among users "a competitive force" against its dominance in connecting users to information.

Even if AI caps Alphabet's upside, it remains one of the largest companies on earth, with its $1.4 trillion market capitalization trailing only those of Apple, Microsoft and Saudi Aramco.

"Google has unmatched AI competencies and will be a major beneficiary of AI adoption over the long term," Sanderson and Gould clarified. That echoes the bullish sentiments expressed by many analysts following Google's I/O developer conference; firms such as Bank of America, Goldman Sachs, UBS and JPMorgan reiterated their buy ratings for Alphabet in notes late last week.

Sanderson and Gould published another note early Monday upgrading Meta's stock from a hold to a buy, setting a $320 price target for the Facebook and Instagram parent, indicating 37% upside in what would be the stock's highest price since January 2022. Shifting macroeconomic tides and faith in Instagram Reels and Meta's AI push in advertising all serve as tailwinds for the stock, according to the strategists. Meta shares rallied more than 2% in Monday trading.

Google Insiders Are $9 Billion Richer After AI-Fueled Stock Rally (Forbes)

Google Adding AI To Search Engine For Some Users, Closing In On Microsoft's AI Push (Forbes)

Google's "Peacetime CEO" Sundar Pichai Faces Criticism As The AI War Heats Up (Forbes)

I'm a New Jersey-based Senior Reporter on our news desk. I graduated in 2021 from Duke University, where I majored in Economics and served as sports editor for The Chronicle, Duke's student newspaper. Send tips at dsaul@forbes.com.

Go here to read the rest:

End Of Google's Dominance? Stock Gets Rare Analyst Downgrade Over AI Fears - Forbes
