Schnucks store tests new AI-powered shopping carts – KSDK.com

The pilot program is rolling out at two more grocery stores in the next few weeks.

ST. LOUIS – New smart shopping carts that allow customers to avoid the checkout lines have rolled out at one St. Louis-area Schnucks store.

In July, the St. Louis Business Journal reported that Schnuck Markets was working with Instacart, Inc. to roll out the AI-powered shopping carts at a few St. Louis-area stores.

The pilot program finally launched last week at the Twin Oaks store, located at 1393 Big Bend Road, a spokesperson for Schnuck Markets said.


In the upcoming weeks, the Lindenwood (1900 1st Capitol Drive in St. Charles) and Cottleville (6083 Mid Rivers Mall Drive in St. Peters) locations will join in on the pilot, which is still in its early stages, the spokesperson said.

According to Business Journal reporting, the new carts use AI to automatically identify items as they're put in the basket, allowing customers to bag their groceries as they shop, bypass the checkout line and pay through the cart from anywhere in the store.

The shopping carts will connect to the Schnucks Rewards App, according to the Business Journal, allowing customers to access clipped promotions and to "light up" electronic shelf labels from their phones to easily find items.
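The flow described above, recognizing each item as it enters the basket, keeping a running total and settling payment from the cart itself, can be sketched in a few lines. This is a hypothetical illustration only: the class, method names and price data below are invented, and neither Schnucks nor Instacart has published how the real carts work.

```python
from dataclasses import dataclass, field

@dataclass
class SmartCart:
    """Toy model of an AI cart: items are 'recognized' as they're added,
    a running total is kept, and checkout happens on the cart itself."""
    prices: dict                      # hypothetical price lookup, e.g. from shelf labels
    items: list = field(default_factory=list)

    def add_item(self, sku: str) -> float:
        # In the real cart, computer vision identifies the item; here we pass the SKU in.
        self.items.append(sku)
        return self.total()

    def total(self) -> float:
        return round(sum(self.prices[sku] for sku in self.items), 2)

    def pay(self) -> str:
        # Pay from anywhere in the store, bypassing the checkout line.
        amount = self.total()
        self.items.clear()
        return f"charged ${amount:.2f}"

cart = SmartCart(prices={"milk": 3.49, "bread": 2.99})
cart.add_item("milk")
cart.add_item("bread")
print(cart.pay())  # charged $6.48
```

The point of the sketch is only the shape of the interaction: recognition drives the item list, the total is always current, and payment is a cart operation rather than a lane operation.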

It's not the only way that Schnucks is utilizing artificial intelligence. Earlier this year, the chain brought new high-tech, anti-theft liquor cabinets to several locations; customers unlock them by entering their phone number on a keypad and receiving a code via text message.

The liquor cases also monitor customers' behaviors when accessing the case, including the number of products removed, how frequently a customer accesses it and how long the door is left open, to identify suspicious activity in real-time.
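The monitoring described above amounts to simple threshold rules over a session's events. A minimal sketch, assuming invented thresholds and field names (none of this reflects Schnucks' actual system):

```python
from collections import Counter

def flag_suspicious(events, max_items=6, max_accesses_per_hour=4, max_door_open_s=60):
    """Flag a cabinet session using the behaviors the cases reportedly track:
    products removed, access frequency and door-open time.
    Thresholds and field names are illustrative guesses, not Schnucks' parameters."""
    reasons = []
    if sum(e["items_removed"] for e in events) > max_items:
        reasons.append("too many products removed")
    accesses_per_hour = Counter(e["hour"] for e in events)
    if accesses_per_hour and max(accesses_per_hour.values()) > max_accesses_per_hour:
        reasons.append("accessed too frequently")
    if any(e["door_open_s"] > max_door_open_s for e in events):
        reasons.append("door left open too long")
    return reasons

# A normal session raises no flags.
print(flag_suspicious([{"items_removed": 2, "door_open_s": 15, "hour": 14}]))  # []
```

Real-time flagging would simply run rules like these each time an event arrives; the article does not say whether Schnucks' vendor uses fixed thresholds or a learned model.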


AI productivity tools can help at work, but some make your job harder – The Washington Post

In a matter of seconds, artificial intelligence tools can now generate images, write your emails, create a presentation, analyze data and even offer meeting recaps.

For about $20 to $30 a month, you can now have AI capabilities in many of Microsoft's and Google's work tools. But are AI tools such as Microsoft Copilot and Gemini for Google Workspace easy to use?

The tech companies contend they help workers with their biggest pain points. Microsoft and Google claim their latest AI tools can automate the mundane, help people who struggle to get started on writing, and even aid with organization, proofreading, preparation and creating.

Of all working U.S. adults, 34 percent think that AI will equally help and hurt them over the next 20 years, according to a survey released by Pew Research Center last year. But a close 31 percent aren't sure what to think, the survey shows.

So the Help Desk put these new AI tools to the test with common work tasks. Here's how it went.

Ideally, AI should speed up catching up on email, right? Not always.

It may help you skim faster, start an email or elaborate on quick points you want to hit. But it also might make assumptions, get things wrong or require several attempts before offering the desired result.

Microsoft's Copilot lets users choose from several tones and lengths before drafting. Users create a prompt for what they want their email to say and then have the AI adjust based on changes they want to see.

While the AI often included desired elements in the response, it also often added statements we didn't ask for in the prompt when we selected short and casual options. For example, when we asked it to disclose that the email was written by Copilot, it sometimes added marketing comments, like calling the tech "cool" or assuming the email was "interesting" or "fascinating."

When we asked it to make the email less positive, instead of dialing down the enthusiasm, it made the email negative. And if we made too many changes, it lost sight of the original request.

"They hallucinate," said Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work. "That's what AI does: make up details."

When we used a direct tone and short length, the AI produced fewer false assumptions and more desired results. But a few times, it returned an error message suggesting that the prompt had content Copilot couldn't work with.

Using Copilot for email isn't perfect. Some prompts were returned with an error message. (Video: The Washington Post)

If we depended entirely on the AI, rather than making major manual edits to its suggestions, getting a fitting response often took multiple, if not several, tries. Even then, one colleague responded to an AI-generated email with a simple response to the awkwardness: "LOL."

"We called it Copilot for a reason," said Colette Stallbaumer, general manager of Microsoft 365 and future of work marketing. "It's not autopilot."

Google's Gemini has fewer options for drafting emails, allowing users to elaborate, formalize or shorten. However, it made fewer assumptions and often stuck solely to what was in the prompt. That said, it still sometimes sounded robotic.

Copilot can also summarize emails, which can quickly help you catch up on a long email thread or cut through your wordy co-worker's mini-novel, and it offers clickable citations. But it sometimes highlighted less relevant points, like reminding me of my own title listed in my signature.

The AI seemed to do better when it was fed documents or data. But it still sometimes made things up, returned error messages or didn't understand context.

We gave Copilot a document full of reporter notes, which are admittedly filled with shorthand, fragments and run-on sentences, and asked it to write a report. At first glance, the result was convincing: the AI appeared to have made sense of the messy notes. But on closer inspection, it was unclear whether anything actually came from the document, as the conclusions were broad, overreaching and not cited.

"If you give it a document to work off, it can use that as a basis," Mollick said. "It may hallucinate less but in more subtle ways that are harder to identify."

When we asked it to continue a story we had started writing, providing a document filled with notes, it summarized what we had already written and produced some additional paragraphs. But it became clear that much of it was not from the provided document.

"Fundamentally, they are speculative algorithms," said Hatim Rahman, an assistant professor at Northwestern University's Kellogg School of Management, who studies AI's impact on work. "They don't understand like humans do. They provide the statistically likely answer."

Summarizations were less problematic, and the clickable citations made it easy to confirm each point. Copilot was also helpful in editing documents, often catching acronyms that should be spelled out, punctuation errors and wordiness, much like a beefed-up spell check.

With spreadsheets, the AI can be a little tricky, and you need to convert data to a table format first. Copilot more accurately produced responses to questions about tables with simple formats. But for larger spreadsheets that had categories and subcategories or other complex breakdowns, we couldnt get it to find relevant information or accurately identify the trends or takeaways.

Microsoft says one of users' top places to use Copilot is in Teams, the collaboration app that offers tools including chat and video meetings. Our test showed the tool can be helpful for quick meeting notes, questions about specific details, and even a few tips on making your meetings better. But as with other meeting AI tools, the transcript isn't perfect.

First, users should know that their administrator has to enable transcriptions so Copilot can interact with the transcript during and after the meeting – something we initially missed. Then, in the meeting or afterward, users can ask Copilot questions about the meeting. We asked for unanswered questions, action items, a meeting recap, specific details and how we could've made the meeting more efficient. It can also pull up video clips that correspond to specific answers if you record the meeting.

The AI was able to recall several details, accurately list action items and unanswered questions, and give a recap with citations to the transcript. Some of its answers were a little muddled, like when it confused the name of a place with the location and ended up with something that looked a little like word salad. It was able to identify the tone of the meeting (friendly and casual with jokes and banter) and censored curse words with asterisks. And it provided advice for more efficient meetings: for us, that meant creating a meeting agenda and reducing the small talk and jokes that took the conversation off topic.

Copilot can be used during a Teams meeting and produce transcriptions, action items, and meeting recaps. (Video: The Washington Post)

Copilot can also help users make a PowerPoint presentation, complete with title pages and corresponding images, based on a document in a matter of seconds. But that doesn't mean you should use the presentation as is.

A document's organization and format seem to play a role in the result. In one instance, Copilot created an agenda with random words and dates from the document. Other times, it made a slide with just a person's name and responsibility. But it did better with documents that had clear formats (think an intro and subsections).

Google's Gemini can generate images like this robot. (Video: The Washington Post)

While Copilot's image generation for slides was usually on topic, sometimes its interpretation was too literal. Google's Gemini can also help create slides and generate images, though more often than not when we tried to create images, we received a message that said, "for now we're showing limited results for people. Try something else."

AI can aid with idea generation, drafting from a blank page or quickly finding a specific item. It also may be helpful for catching up on emails and meetings and summarizing long conversations or documents. Another nifty tip? Copilot can gather the latest chats, emails and documents you've worked on with your boss before your next meeting together.

But all results and content need careful inspection for accuracy, some tweaking or deep edits, and both tech companies advise users to verify everything generated by the AI. "I don't want people to abdicate responsibility," said Kristina Behr, vice president of product management for collaboration apps at Google Workspace. "This helps you do your job. It doesn't do your job."

And as is the case with AI, the more details and direction in the prompt, the better the output. So as you do each task, you may want to consider whether AI will save you time or actually create more work.

"The work it takes to generate outcomes like text and videos has decreased," Rahman said. "But the work to verify has significantly increased."


MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies – Euronews

Monday was a big day for announcements from tech giant Microsoft, unveiling new guiding principles for AI governance and a multi-year deal with Mistral AI.

Tech behemoth Microsoft has unveiled a new set of guiding principles on how it will govern its artificial intelligence (AI) infrastructure, effectively further opening up access to its technology to developers.

The announcement came at the Mobile World Congress tech fair in Barcelona on Monday, where AI is a key theme of this year's event.

One of the key planks of its newly published "AI Access Principles" is the democratisation of AI through the company's open source models.

The company said it plans to do this by expanding access to its cloud computing AI infrastructure.

Speaking to Euronews Next in Barcelona, Brad Smith, Microsoft's vice chair and president, also said the company wanted to make its AI models and development tools more widely available to developers around the world, allowing countries to build their own AI economies.

"I think it's extremely important because we're investing enormous amounts of money, frankly, more than any government on the planet, to build out the AI data centres so that in every country people can use this technology," Smith said.

"They can create their AI software, their applications, they can use them for companies, for consumer services and the like".

The "AI Access Principles" underscore the company's commitment to open source models. Open source means that the source code is publicly available for anyone to use, modify and distribute.

"Fundamentally, it [the principles] says we are not just building this for ourselves. We are making it accessible for companies around the world to use so that they can invest in their own AI inventions," Smith told Euronews Next.

"Second, we have a set of principles. It's very important, I think, that we treat people fairly. Yes, that as they use this technology, they understand how we're making available the building blocks so they know it, they can use it," he added.

"We're not going to take the data that they're developing for themselves and access it to compete against them. We're not going to try to require them to reach consumers or their customers only through an app store where we exact control".

The announcement of its AI governance guidelines comes as the Big Tech company struck a deal with Mistral AI, the French company revealed on Monday, signalling Microsoft's intent to branch out in the burgeoning AI market beyond its current involvement with OpenAI.

Microsoft has already invested heavily in OpenAI, the creator of the wildly popular AI chatbot ChatGPT. Its $13 billion (€11.9 billion) investment, however, is currently under review by regulators in the EU, the UK and the US.

Widely cited as a growing rival to OpenAI, 10-month-old Mistral reached unicorn status in December after being valued at more than €2 billion, far surpassing the €1 billion threshold to be considered one.

The new multi-year partnership will see Microsoft giving Mistral access to its Azure cloud platform to help bring its large language model (LLM), Mistral Large, to market.

LLMs are AI programmes that recognise and generate text and are commonly used to power generative AI tools like chatbots.

"Their [Mistral's] commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft's commitment to develop trustworthy, scalable, and responsible AI solutions," Eric Boyd, corporate vice president, Azure AI Platform at Microsoft, wrote in a blog post.

The move is in keeping with Microsoft's commitment to open up its cloud-based AI infrastructure.

In the past week, as well as its partnership with Mistral AI, Microsoft has committed to investing billions of euros over two years in its AI infrastructure in Europe, including €1.9 billion in Spain and €3.2 billion in Germany.


Google to relaunch ‘woke’ Gemini AI image tool in few weeks: ‘Not working the way we intended’ – New York Post

Google said it plans to relaunch its artificial intelligence image generation software within the next few weeks after taking it offline in response to an uproar over what critics called absurdly woke depictions of historical scenes.

Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.

"We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks," Google DeepMind CEO Demis Hassabis said Monday.

The tool was "not working the way we intended," Hassabis added, speaking on a panel at the Mobile World Congress in Barcelona.

Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

Elsewhere, a prompt requesting photographs of a pope resulted in an image of a Southeast Asian woman dressed in papal attire – a far cry from any of the 266 popes throughout history.

In the wake of Gemini's diverse photo representations, social media users also tested its chatbot feature to see if it was as "woke" as its revisionist-history image generator.

In the latest bizarre interaction, Gemini refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people was worse, asserting there is "no right or wrong answer," according to an X post.

Nate Silver, the former head of data and polling news site FiveThirtyEight, posted a screenshot Sunday on X of Gemini's alleged response to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"

"Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people. Ultimately it's up to each individual to decide who they believe has had a more negative impact on society," Gemini responded.

Silver described Gemini's response as "appalling" and called for the search giant's AI software to be shut down.

"Every single person who worked on this should take a long hard look in the mirror," he posted, while Musk called the interaction "scary."

Yet another query had users asking Gemini whether pedophilia is wrong.

The search giant's AI software refused to condemn pedophilia, instead declaring that "individuals cannot control who they are attracted to."

"The question is multifaceted and requires a nuanced answer that goes beyond a simple yes or no," Gemini wrote, according to a screenshot posted Friday by popular X personality Frank McCormick, known as Chalkboard Heresy.

Google's politically correct tech also referred to pedophilia as "minor-attracted person status" and declared that "it's important to understand that attractions are not actions."

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features including image generation.

However, Gemini's recent gaffe wasn't the first time an error in the tech caught users' eyes.

When the Bard chatbot was first released a year ago, it shared inaccurate information about pictures of a planet outside the Earth's solar system in a promotional video, causing Google's shares to drop by as much as 9%.

Google said at the time that the incident "highlights the importance of a rigorous testing process," and it rebranded Bard as Gemini earlier this month.

Google parent Alphabet expanded Gemini from a chatbot to an image generator earlier this month as it races to produce AI software that rivals OpenAI's, whose products include ChatGPT, launched in November 2022, as well as Sora.

In a potential challenge to Google's dominance, Microsoft is pouring $10 billion into ChatGPT's maker as part of a multi-year agreement with the Sam Altman-run firm, which saw the tech behemoth integrating the AI tool with its own search engine, Bing.

The Microsoft-backed company introduced Sora last week, which can produce high-caliber, minute-long videos from text prompts.

With Post wires


IBM’s Deep Dive Into AI: CEO Arvind Krishna Touts The ‘Massive’ Enterprise Opportunity For Partners – CRN

With an improved Partner Plus program and a mandate that all products be channel-friendly, IBM CEO Arvind Krishna aims to bring partners into the enterprise AI market that sits below the surface of todays trendy use cases.

To hear IBM Chairman and CEO Arvind Krishna tell it, the artificial intelligence market is like an iceberg. For now, most vendors and users are attracted by the use cases above the surface: using text generators to write emails and image generators to make art, for example.

But it's the enterprise AI market below the surface that IBM wants to serve with its partners, Krishna told CRN in a recent interview. And Krishna's mandate that the Armonk, N.Y.-based vendor reach 50 percent of its revenue from the channel over the next two to three years is key to reaching that hidden treasure.

"This is a massive market," said Krishna. "When I look at all the estimates, the numbers are so big that it is hard for most people to comprehend them. That tells you that there is a lot of opportunity for a large number of us."


In 2023, IBM moved channel-generated sales from the low 20-percent range to about 30 percent of total revenue. And IBM channel chief Kate Woolley, general manager of the IBM ecosystem – perhaps best viewed as the captain of the channel initiative – told CRN that she is up to the challenge.

"Arvind's set a pretty big goal for us," Woolley said. "Arvind's been clear on the percent of revenue of IBM technology with partners. And my goal is to make a very big dent in that this year."

GenAI as a whole has the potential to generate value equivalent of up to $4.4 trillion in global corporate profits annually, according to McKinsey research Krishna follows. That number includes up to an additional $340 billion a year in value for the banking sector and up to an additional $660 billion in operating profits annually in the retail and consumer packaged goods sector.

Tackling that demand – working with partners to make AI a reality at scale in 2024 and 2025 – is part of why Krishna mandated more investment in IBM's partner program, revamped in January 2023 as Partner Plus.

"What we have to offer [partners] is growth," Krishna said. "And what we also have to offer them is an attractive market where the clients like these technologies. It's important [for vendors] to bring the innovation and to bring the demand from the market to the table. And [partners] should put that onus on us."

Multiple IBM partners told CRN they are seeing the benefits of changes IBM has made to Partner Plus, from better aligning the goals of IBM sellers with the channel to better aligning certifications and badges with product offerings, to increasing access to IBM experts and innovation labs.

And even though the generative AI market is still in its infancy, IBM partners are bullish about the opportunities ahead.

Krishna's mandate for IBM to work more closely with partners has implications for IBM's product plans.

"Any new product has to be channel-friendly," Krishna said. "I can't think of one product I would want to build or bring to market unless we could also give it to the channel. I wouldn't say that was always historically true. But today, I can state that with absolute conviction."

Krishna estimated that about 30 percent of the IBM product business is sold with a partner in the mix today. "Half of that I'm not sure we would even get without the partner," he said.

And GenAI is not just a fad to the IBM CEO. It is a new way of doing business.

"It is going to generate business value for our clients," Krishna said of GenAI, pointing to "our Watsonx platform to really help developers, whether it's code, whether it's modernization, all those things. These are areas where, for our partners, they'll be looking at this and say, 'This is how we can bring a lot of innovation to our clients and help their business along the way.'"

Some of the most practical and urgent business use cases for IBM include improved customer contact center experiences, code generation to help customers rewrite COBOL and other legacy languages in modern ones, and the ability for customers to choose better wealth management products based on population segments.

Watsonx Code Assistant for Z became generally available toward the end of 2023 and allows modernization of COBOL to Java. Meanwhile, Red Hat Ansible Lightspeed with IBM Watsonx Code Assistant, which provides GenAI-powered content recommendations from plain-English inputs, also became generally available late last year.

Multiple IBM partners told CRN that IBM AI and Red Hat Ansible automation technologies are key to meeting customer code and content generation demand.

One of those interested partners is Tallahassee, Fla.-based Mainline Information Systems, an honoree on CRN's 2024 MSP 500. Mainline President and CEO Jeff Dobbelaere said code generation cuts across a variety of verticals, making it easy to scale that offering and meet the demands of mainframe customers modernizing their systems.

"We have a number of customers that have legacy code that they're running and have been for 20, 30, 40 years and need to find a path to more modern systems," Dobbelaere said. "And we see IBM's focus on generative AI for code as a path to get there ... We're still in [GenAI's] infancy, and the sky's the limit. We'll see where it can go and where it can take us. But we're starting to see some positive results already out of the Watsonx portfolio."

As part of IBM's investment in its partner program, the vendor will offer more technical help to partners, Krishna said. This includes client engineering, customer success managers and more resources to make partners' end clients "even more happy."

An example of IBM's client success team working with a partner comes from one of the vendor's more recent additions to the ecosystem – Phoenix-based NucleusTeq, founded in 2018 and focused on enterprise data modernization, big data engineering, and AI and machine learning services.

Will Sellenraad, the solution provider's executive vice president and CRO, told CRN that a law firm customer was seeking a way to automate labor needed for health disability claims for veterans.

"What we were able to do is take the information from this law firm to our client success team within IBM, do a proof of concept and show that we can go from 100 percent manual to 60 percent automation, which we think we can get even [better]," Sellenraad said.

Woolley said that part of realizing Krishna's demand for channel-friendly new products is getting her organization to work more closely with product teams to make sure partners have access to training, trials, demos, digital marketing kits, and pricing and packaging that makes sense for partners, whether they're selling to very large enterprises or to smaller ones.

Woolley said her goals for 2024 include adding new services-led and other partners to the ecosystem and getting more resources to them.

In January, IBM launched a service-specific track for Partner Plus members. Meanwhile, reaching 50 percent revenue with the channel means attaching more partners to the AI portfolio, Woolley said.

"There is unprecedented demand from partners to be able to leverage IBM's strength in our AI portfolio and bring this to their clients or use it to enhance their products," Woolley said. "That is a huge opportunity."

Her goal for Partner Plus is to create a flexible program that meets the needs of partners of various sizes with a range of technological expertise. "For resell partners, today we have a range from the largest global resell partners and distributors right down to niche, three-person resell partners that are deeply technical on a part of the IBM portfolio," she said. "We love that. We want that expertise in the market."

NucleusTeq's Sellenraad offered CRN the perspective of a past IBM partner that came back to the ecosystem. He joined NucleusTeq about two years ago – before the solution provider was an IBM partner – from an ISV that partnered with IBM.

Sellenraad steered the six-year-old startup into growing beyond being a Google, Microsoft and Amazon Web Services partner. He thought IBM's product range, including its AI portfolio, was a good fit, and the changes in IBM's partner program encouraged him not only to look more closely but to make IBM a primary partner.

"They're committed to the channel," he said. "We have a great opportunity to really increase our sales this year."

NucleusTeq became a new IBM partner in January 2023 and reached Gold partner status by the end of the year. It delivered more than $5 million in sales, and more than seven employees received certifications for the IBM portfolio.

Krishna said that the new Partner Plus portal and program also aim to make rebates, commissions and other incentives easier to attain for partners.

The creation of Partner Plus – a fundamental and hard shift in how IBM does business, Krishna said – resulted in IBM's promise to sell to millions of clients only through partners, leaving about 500 accounts worldwide that want and demand a direct relationship with IBM.

"So 99.9 percent of the market, we only want to go with a channel partner," Krishna said. "We do not want to go alone."

When asked by CRN whether he views more resources for the channel as a cost of doing business, he said that channel-friendliness is his philosophy and good business.

"Not only is it my psychology or my whimsy, it's economically rational to work well with the channel," he continued. "That's why you always hear me talk about it. There are very large parts of the market which we cannot address except with the channel. So by definition, the channel is not a tradeoff. It is a fundamental part of the business equation of how we go get there."

Multiple IBM partners who spoke with CRN said AI can serve an important function in much of the work that they handle, including modernizing customer use of IBM mainframes.

Paola Doebel, senior vice president of North America at Downers Grove, Ill.-based IBM partner Ensono – an honoree on CRN's 2024 MSP 500 – told CRN that the MSP will focus this year on its modern cloud-connected mainframe service for customers, and AI-backed capabilities will allow it to achieve that work at scale.

While many of Ensono's conversations with customers have been focused on AI level-setting – what's hype, what's realistic – the conversations have been helpful for the MSP.

"There is a lot of hype, there is a lot of conversation, but some of that excitement is grounded in actual real solutions that enable us to accelerate outcomes," Doebel said. "Some of that hype is just hype, like it always is with everything. But it's not all smoke. There is actual real fire here."

For example, early use cases for Ensono customers using the MSP's cloud-connected mainframe solution, which can leverage AI, include real-time fraud detection, real-time data availability for traders, and connecting mainframe data to cloud applications, she said.

Mainline's Dobbelaere said that as a solution provider, his company has to be cautious about where it makes investments in new technologies. "There are a lot of technologies that come and go, and there may or may not be opportunity for the channel," he said.

But the interest in GenAI from vendor partners and customers proved to him that the opportunity in the emerging technology is strong.

Delivering GenAI solutions wasn't a huge lift for Mainline, which already had employees trained on data and business analytics, x86 technologies, and accelerators from Nvidia and AMD. "The channel is uniquely positioned to bring together solutions that cross vendors," he said.

The capital costs of implementing GenAI, however, are still a concern in an environment where the U.S. faces high inflation rates and global geopolitics threaten the macroeconomy. Multiple IBM partners told CRN they are seeing customers more deeply scrutinize technology spending, lengthening the sales cycle.

Ensono's Doebel said that customers are asking more questions about value and ROI.

"The business case to execute something at scale has to be verified, justified and quantified," Doebel said. "So it's a couple of extra steps in the process to adopt anything new. Or they're planning for something in the future that they're trying to get budget for in a year or two."

She said she sees that behavior continuing in 2024, but solution providers such as Ensono are ready to help customers' employees make the AI case with board-ready content, analytical business cases, quantitative outputs, ROI theses and other materials.

For partners navigating capital cost as an obstacle to selling customers on AI, Woolley encouraged them to work with IBM sellers in their territories.

Dayn Kelley, director of strategic alliances for Irvine, Calif.-based IBM partner Technologent, No. 61 on CRN's 2023 Solution Provider 500, said customers have expressed so much interest in and concern around AI that the solution provider has built a dedicated team focused on the technology as part of its investments toward taking a leadership position in the space.

"We have customers we need to support," Kelley said. "We need to be at the forefront."

He said that he has worked with customers on navigating financials and challenging project schedules to meet budget concerns, and IBM has been a particularly helpful partner in this area.

While some Technologent customers are weathering economic challenges, the outlook for 2024 is still strong, he said. Customer AI and emerging technology projects are still forecast for this year.

Mainline's Dobbelaere said that despite reports of economic concerns and the conservative spending that usually occurs in an election year, he's still optimistic about tech spending overall in 2024.

"2023 was a very good year for us. It looks like we outpaced 2022," he said. "And there's no reason for us to believe that 2024 would be any different. So we are optimistic."

Juan Orlandini, CTO of the North America branch of Chandler, Ariz.-based IBM partner Insight Enterprises, No. 16 on CRN's 2023 Solution Provider 500, said educating customers on AI hype versus AI reality is still a big part of the job.

In 2023, Orlandini made 60 trips in North America to conduct seminars and meet with customers and partners to set expectations around the technology and answer questions from organizations large and small.

He recalled walking one customer through the prompts he used to create a particular piece of artwork with GenAI. In another example, one of the largest media companies in the world consulted with him on how to leverage AI without leaking intellectual property or consuming someone else's. "It doesn't matter what size the organization, you very much have to go through this process of making sure that you have the right outcome with the right technology decision," Orlandini said.

"There's a lot of hype and marketing. Everybody and their brother is doing AI now and that is confusing [customers]."

An important role of AI-minded solution providers, Orlandini said, is assessing whether it is even the right technology for the job.

"People sometimes give GenAI the magical superpowers of predicting the future. It cannot. You have to worry about making sure that some of the hype gets taken care of," Orlandini said.

Most users won't create foundational AI models, and most larger organizations will adopt AI and modify it, publishing AI apps for internal or external use. And everyone will consume AI within apps, he said.

The AI hype is not solely vendor-driven. Orlandini has also interacted with executives at customers who have added mandates and opened budgets for at least testing AI as a way to grow revenue or save costs.

"There has been a huge amount of pressure to go and adopt anything that does that so they can get a report back and say, 'We tried it, and it's awesome.' Or, 'We tried it and it didn't meet our needs,'" he said. "So we have seen very much that there is an opening of pocketbooks. But we've also seen that some people start and then they're like, 'Oh, wait, this is a lot more involved than we thought.' And then they're taking a step back and a more measured approach."

Jason Eichenholz, senior vice president and global head of ecosystems and partnerships at Wipro, an India-based IBM partner of more than 20 years and No. 15 on CRN's 2023 Solution Provider 500, told CRN that at the end of last year, customers were developing GenAI use cases and establishing 2024 budgets to either deploy proofs of concept into production or start work on new production initiatives.

For Wipro's IBM practice, one of the biggest opportunities is IBM's position as a more neutral technology stack, akin to its reputation in the cloud market, that works with other foundation models, which should resonate with the Wipro customer base that wants purpose-built AI models, he said.

Just as customers look to Wipro and other solution providers as neutral orchestrators of technology, IBM is becoming more of an orchestrator of platforms, he said.

For his part, Krishna believes that customers will consume new AI offerings as a service on the cloud. IBM can run AI on its own cloud, on the customer's premises and in competing clouds from Microsoft and Amazon Web Services.

He also believes that no single vendor will dominate AI. He likened it to the automobile market. "It's like saying, 'Should there be only one car company?' There are many because [the market] is fit for purpose. Somebody is great at sports cars. Somebody is great at family sedans, somebody's great at SUVs, somebody's great at pickups," he said.

"There are going to be spaces [within AI where] we would definitely like to be considered leaders, whether that is No. 1, 2 or 3 in the enterprise AI space," he continued. "Whether we want to work with people on modernizing their developer environment, on helping them with their contact centers, absolutely. In those spaces, we'd like to get to a good market position."

He said that he views other AI vendors not as competitors, but as partners. "When you play together and you service the client, I actually believe we all tend to win," he said. "If you think of it as a zero-sum game, that means it is either us or them. If I tend to think of it as a win-win-win, then you can actually expand the pie. So even a small slice of a big pie is more pie than all of a small pie."

All of the IBM partners who spoke with CRN praised the changes to the partner program.

Wipro's Eichenholz said that "we feel like we're being heard in terms of our feedback and our recommendations." He called Krishna "super supportive" of the partner ecosystem.

Looking ahead, Eichenholz said he would like to see consistent pricing from IBM and its distributors so that he spends less time shopping for customers. He also encouraged IBM to keep investing in integration and orchestration.

"For us, in terms of what we look for from a partner, in terms of technical enablement, financial incentives and co-creation and resource availability, they are best of breed right now," he said. "IBM is really putting their money and their resources where their mouth is. We expect 2024 to be the year of the builder for generative AI, but also the year of the partner for IBM partners."

Mainline's Dobbelaere said that IBM is on the right track in sharing more education, sandboxing resources and use cases with partners. He looks forward to use cases with more repeatability.

"Ultimately, use cases are the most important," he said. "And they will continue to evolve. It's difficult for the channel to create bespoke solutions for each and every customer to solve their unique challenges. And the more use cases we have that provide some repeatability, the more that will allow the channel to thrive."

See more here:

IBM's Deep Dive Into AI: CEO Arvind Krishna Touts The 'Massive' Enterprise Opportunity For Partners - CRN

US Used AI to Help Find Middle East Targets for Airstrikes – Bloomberg

The US used artificial intelligence to identify targets hit by air strikes in the Middle East this month, a defense official said, revealing growing military use of the technology for combat.

Machine learning algorithms that can teach themselves to identify objects helped to narrow down targets for more than 85 US air strikes on Feb. 2, according to Schuyler Moore, chief technology officer for US Central Command, which runs US military operations in the Middle East. The Pentagon said those strikes were conducted by US bombers and fighter aircraft against seven facilities in Iraq and Syria.

Read the original:

US Used AI to Help Find Middle East Targets for Airstrikes - Bloomberg

Accelerating telco transformation in the era of AI – The Official Microsoft Blog – Microsoft

AI is redefining digital transformation for every industry, including telecommunications. Every operator's AI journey will be distinct. But each AI journey requires cloud-native transformation, which provides the foundation for any organization to harness the full potential of AI, driving innovation, efficiency and business value.

This new era of AI will create incredible economic growth and represent a profound shift as a percentage of global GDP, which is just over $100 trillion. So, when we look at the potential value driven by this next generation of AI technology, we may see a boost to global GDP of an additional $7 trillion to $10 trillion.

Embracing AI will help operators unlock new revenue streams, deliver superior customer experiences and pioneer future innovations for growth.

Operators can now leverage cloud services that are adaptive, purpose-built for telecommunications and span from near edge on-premises environments to the far edges of Earth and space to monetize investments, modernize networks, elevate customer experiences and streamline business operations with AI.

Our aim is to be the most trusted co-innovation partner for the telecommunications industry. We want to help accelerate telco transformation and empower operators to succeed in the era of AI, which is why we are committed to working with operators, enterprises and developers on the future cloud.

At MWC in Barcelona this week, we are announcing updates to our Azure for Operators portfolio to help operators seize the opportunity ahead in a cloud- and AI-native future.

AI opens new growth opportunities for operators. The biggest potential is that operators, as they embrace this new era of cloud and AI, can also help their customers in their own transformation.

For example, spam calls and malicious activities are a well-known menace and are growing exponentially, and often impact the most vulnerable members of society. Besides the annoyance, the direct cost of those calls adds up. For example, in the United States, FTC data for 2023 shows $850 million in reported fraud losses stemming from scam calls.

Today, we are announcing the public preview of Azure Operator Call Protection, a new service that uses AI to help protect consumers from scam calls. The service uses real-time analysis of voice content, alerting consumers who opt into the service when there is suspicious in-call activity. Azure Operator Call Protection works on any endpoint, mobile or landline, and it works entirely through the network without needing any app installation.

In the U.K., BT Group is trialing Azure Operator Call Protection to identify, educate and protect its customers from potential fraud, making it harder for bad actors to take advantage of them.

We are also announcing the public preview of Azure Programmable Connectivity (APC), which provides a unified, standard interface across operators' networks. APC provides seamless access to Open Gateway for developers to create cloud- and edge-native applications that interact with the intelligence of the network. APC also empowers operators to commercialize their network APIs, simplifies access for developers and is available in the Azure Marketplace.

AI opens incredible opportunities to modernize network operations, providing new levels of real-time insights, intelligence and automation. Operators such as Three UK are already using Azure Operator Insights to eliminate data silos and deliver actionable business insights by enabling the collection and analysis of massive quantities of network data gathered from complex multi-vendor network functions. Designed for operator-specific workloads, Azure Operator Insights lets operators tackle complex scenarios such as understanding the health of their networks and the quality of their subscribers' experiences.

Azure Operator Insights uses a modern data mesh architecture that divides complex domains into manageable sub-domains called data products. These data products integrate large datasets from different sources and vendors to provide data visibility from disaggregated networks for comprehensive analytical and business insights. Using this data product factory capability, operators, network equipment providers and solution integrators can create unique data products for one customer or publish them to the Azure Marketplace for many customers to use.

Today, we are also announcing the limited preview of Copilot in Azure Operator Insights, a groundbreaking, operator-focused, generative AI capability helping operators move from reactive to proactive and predictive in tangible ways. Engineers use the Copilot to interact with network insights using natural language and receive simple explanations of what the data means and possible actions to take, resolving network issues quickly and accurately, ultimately improving customer satisfaction.

Copilot in Azure Operator Insights is delivering AI-infused insights to drive network efficiency for customers like Three UK and participating partners including Amdocs, Accenture and BMC Remedy. Three UK is using Copilot in Azure Operator Insights to unlock actionable intelligence on network health and customer-experience quality of service; a process that previously took weeks or months to assess can now be performed in minutes.

Additionally, with our next-generation hybrid cloud platform, Azure Operator Nexus, we offer the ability to future-proof the network to support mission-critical workloads and power new revenue-generating services and applications. This immense opportunity is what drives operators to modernize their networks with Azure Operator Nexus, a carrier-grade hybrid cloud platform with AI-powered automation and insights that unlock improved efficiency, scalability and reliability. Purpose-built for and validated by tier-one operators to run mission-critical workloads, Azure Operator Nexus enables operators to run workloads on-premises or on Azure, where they can seamlessly deploy, manage, secure and monitor everything from the bare metal to the tenant.

E& UAE is taking advantage of the Azure Operator Nexus platform to lower total cost of ownership (TCO), leverage the power of AI to simplify operations, improve time to market and focus on their core competencies. And operations at AT&T that took months with previous generations of technology now take weeks to complete with Azure Operator Nexus.

We continue to build robust capabilities into Azure Operator Nexus, including new deployment options giving operators the flexibility to use one carrier-grade platform to deliver innovative solutions on near-edge, far-edge and enterprise edge.

Read more about the latest Azure for Operator updates here.

Operators are creating differentiation by collaborating with us to improve customer experiences and streamline their business operations with AI. Operators are leveraging Microsoft's copilot stack and copilot experiences across our core products and services, such as Microsoft Copilot, Microsoft Copilot for M365 and Microsoft Security Copilot, to drive productivity and improve customer experiences.

An average operator spends 20% of annual revenue on capital expenditures. However, this investment does not translate into an equivalent increase in revenue growth. Operators need to empower their service teams with data-driven insights to increase productivity, enhance care, use conversational AI to enable self-service, expedite issue resolution and deliver frictionless customer experiences at scale.

Together with our partner ecosystem, we are investing in creating a comprehensive set of solutions for the telecommunications industry. This includes the Azure for Operators portfolio, a carrier-grade hybrid cloud platform, voice core, mobile core and multi-access edge compute, as well as our suite of generative AI solutions that holistically address the needs of network operators as they transform their networks.

As customers continue to embrace generative AI, we remain committed to working with operators and enterprises alike to future-proof networks and unlock new revenue streams in a cloud- and AI-native future.

Tags: AI, Azure for Operators, Azure Operator Call Protection, Azure Operator Insights, Azure Operator Nexus, Copilot in Azure Operator Insights

See original here:

Accelerating telco transformation in the era of AI - The Official Microsoft Blog - Microsoft

Whites must feel the direct pain from white supremacy – The Philadelphia Tribune


Original post:

Whites must feel the direct pain from white supremacy - The Philadelphia Tribune

Opinion: We cant denounce white supremacy only when it surfaces – Chattanooga Times Free Press


February 20, 2024 at 6:01 p.m.

by LeBron Hill

When white supremacy shows its ugly head, everyone is quick to denounce it. Rightfully so.

On Saturday, a group of Nazis walked from Nashville's Lower Broadway to the state Capitol while waving flags bearing swastika symbols.


See the original post here:

Opinion: We cant denounce white supremacy only when it surfaces - Chattanooga Times Free Press

Some of the world’s biggest cloud computing firms want to make millions of servers last longer – doing so will save … – Yahoo! Voices

Some of the world's largest cloud computing firms, including Alphabet, Amazon, and Cloudflare, have found a way to save billions by extending the lifespan of their servers - a move expected to significantly reduce depreciation costs, increase net income, and contribute to their bottom lines.

Alphabet, Google's parent company, started this trend in 2021 by extending the lifespan of its servers and networking equipment. By 2023, the company decided that both types of hardware could last six years before needing to be replaced. This decision led to the company saving $3.9 billion in depreciation and increasing net income by $3.0 billion last year.

These savings will go towards Alphabet's investment in technical infrastructure, particularly servers and data centers, to support the exponential growth of AI-powered services.

Like Alphabet, Amazon also recently completed a "useful life study" for its servers, deciding to extend their working life from five to six years. This change is predicted to contribute $900 million to net income in Q1 of 2024 alone.

Cloudflare followed a similar path, extending the useful life of its service and network equipment from four to five years starting in 2024. This decision is expected to result in a modest impact of $20 million.
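The arithmetic behind these savings is simple straight-line depreciation: spreading the same hardware cost over more years lowers the annual expense charged against income, and that difference flows directly to the bottom line. A minimal sketch, using a hypothetical $30 billion server fleet (an illustrative round number, not a figure reported by any of these companies):

```python
def annual_depreciation(cost, useful_life_years):
    """Straight-line depreciation: the asset's cost expensed evenly per year."""
    return cost / useful_life_years

def savings_from_extension(cost, old_life, new_life):
    """Reduction in annual depreciation expense when useful life is extended."""
    return annual_depreciation(cost, old_life) - annual_depreciation(cost, new_life)

# Hypothetical $30B server fleet, useful life extended from 5 to 6 years
fleet_cost = 30e9
print(annual_depreciation(fleet_cost, 5))        # $6.0B expensed per year
print(annual_depreciation(fleet_cost, 6))        # $5.0B expensed per year
print(savings_from_extension(fleet_cost, 5, 6))  # $1.0B less expense annually
```

No cash changes hands; the servers simply stay on the books longer, so less of their cost is charged against each year's income.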

Tech behemoths are facing increasing costs from investing in AI and technical infrastructure, so any savings that can be made elsewhere are vital. The move to extend the life of servers isn't just a cost-cutting exercise, however; it also reflects continuous advancements in hardware technology and improvements in data center designs.

Continue reading here:

Some of the world's biggest cloud computing firms want to make millions of servers last longer doing so will save ... - Yahoo! Voices


Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More – AnandTech

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers as well as emerging devices like software defined vehicles (SDVs) there is a global trend towards custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors including automotive, gaming consoles, data centers, telecom, and others that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney's LinkedIn profile as VP of Silicon Engineering reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of the reported division.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement regarding implementation of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond the conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their own custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now run on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts see the possibility of an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, given its huge volumes.

"NVIDIA made their loyal fan base in the consumer market, which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocketed datacenter market, where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and it would certainly like to expand this business. The consumer business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.
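That amortization effect is easy to quantify. A minimal sketch with hypothetical figures (the development and per-unit costs below are illustrative assumptions, not NVIDIA's actual numbers):

```python
def effective_cost_per_chip(development_cost, marginal_cost, units_shipped):
    """Per-chip cost once the fixed development spend is amortized over volume."""
    return marginal_cost + development_cost / units_shipped

# Hypothetical: $500M to develop a chip that costs $50 per unit to manufacture
print(effective_cost_per_chip(500e6, 50.0, 1_000_000))    # 550.0 per chip at 1M units
print(effective_cost_per_chip(500e6, 50.0, 100_000_000))  # 55.0 per chip at 100M units
```

At console-scale volumes the fixed design cost nearly vanishes on a per-unit basis, which is the economy of scale described above.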

"NVIDIA is of course interested in expanding its footprint in consoles. Right now they are supplying the biggest-selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."

See more here:

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More - AnandTech

Akamai CEO Tom Leighton on Q4 results: Cloud computing is our strongest growth area – CNBC


Akamai Technologies CEO and co-founder Tom Leighton joins 'Squawk Box' to discuss the company's quarterly earnings results, which beat Wall Street's profit expectations but missed on revenue, growth outlook for its cloud computing services, and more.

Wed, Feb 14, 2024, 7:40 AM EST

Read this article:

Akamai CEO Tom Leighton on Q4 results: Cloud computing is our strongest growth area - CNBC

Confidential Computing and Cloud Sovereignty in Europe – The New Stack

Confidential computing is emerging as a potential game-changer in the cloud landscape, especially in Europe, where data sovereignty and privacy concerns take center stage. Will confidential computing be the future of cloud in Europe? Does it solve cloud sovereignty issues and adequately address privacy concerns?

At its core, confidential computing empowers organizations to safeguard their sensitive data even while it's being processed. Unlike traditional security measures that focus on securing data at rest or in transit, confidential computing ensures end-to-end protection, including during computation. This is achieved by creating secure enclaves: isolated areas within a computer's memory where sensitive data can be processed without exposure to the broader system.
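As a rough illustration of that protect-in-use model: data stays encrypted everywhere except inside a narrowly scoped boundary. This is only a conceptual sketch; real confidential computing relies on hardware TEEs (such as Intel SGX/TDX or AMD SEV), and the toy XOR cipher here stands in for real encryption.

```python
# Toy sketch of the confidential-computing idea, NOT real enclave code:
# plaintext exists only inside the "enclave" scope and is scrubbed on exit.
from contextlib import contextmanager
import secrets

KEY = secrets.token_bytes(32)

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only -- not secure encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@contextmanager
def enclave(ciphertext: bytes):
    plaintext = bytearray(xor(ciphertext, KEY))  # decrypt inside the boundary
    try:
        yield bytes(plaintext)
    finally:
        for i in range(len(plaintext)):          # scrub plaintext on exit
            plaintext[i] = 0

stored = xor(b"patient record #42", KEY)   # "at rest": encrypted
with enclave(stored) as record:            # "in use": visible only here
    assert record == b"patient record #42"
```

The point of the pattern is that code outside the `enclave` scope only ever sees ciphertext, mirroring how a hardware enclave hides memory from the host OS and hypervisor.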

Cloud sovereignty, or the idea of retaining control and ownership over data within a country or region, is gaining traction as a critical aspect of digital autonomy. Europe, in its pursuit of technological independence, is embracing confidential computing as a cornerstone in building a robust cloud infrastructure that aligns with its values of privacy and security.

While the promise of confidential computing is monumental, challenges such as widespread adoption, standardization and education need to be addressed. Collaborative efforts between governments, industries and technology providers will be crucial in overcoming these challenges and unlocking the full potential of this transformative technology.

As Europe marches toward a future where data is not just a commodity but a sacred trust, confidential computing emerges as the key to unlocking the full spectrum of possibilities. By combining robust security measures with the principles of cloud sovereignty, Europe is poised to become a global leader in shaping a trustworthy and resilient digital future.

"The era of confidential computing calls, and Europe stands prepared to respond," said Margrethe Vestager, the European Commission's executive vice president for a Europe Fit for the Digital Age.

To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon Europe in Paris from Mar. 19-22, 2024.


Follow this link:

Confidential Computing and Cloud Sovereignty in Europe - The New Stack

Cloud Native Efficient Computing is the Way in 2024 and Beyond – ServeTheHome

Today we wanted to discuss cloud native and efficient computing. Many have different names for this, but it is going to be the second most important computing trend in 2024, behind the AI boom. Modern performance cores have gotten so big and fast that there is a new trend in the data center: using smaller and more efficient cores. Over the next few months, we are going to be doing a series on this trend.

As a quick note: We get CPUs from all of the major silicon players. Also, since we have tested these CPUs in Supermicro systems, we are going to say that they are all sponsors of this, but it is our own idea and content.

Let us get to the basics. Once AMD re-entered the server market (and desktop) with a competitive performance core in 2017, performance per core and core counts exploded almost as fast as pre-AI boom slideware on the deluge of data. As a result, cores got bigger, cache sizes expanded, and chips got larger. Each generation of chips got faster.

Soon, folks figured out a dirty secret in the server industry: faster per-core performance is good if you license software by the core, but there are a wide variety of applications that need cores, just not fast ones. Today's smaller efficient cores tend to be on the order of the performance of a mainstream Skylake/Cascade Lake Xeon from 2017-2021, yet they can be packed more densely into systems.

Consider this illustrative scenario that is far too common in the industry:

Here, we have several apps built by developers over the years. Each needs its own VM, and each VM is generally between 2-8 cores. These are applications that need to be online 24/7 but do not need massive amounts of compute. Good examples are websites that serve a specific line-of-business function but do not have hundreds of thousands of visitors. Also, these tend to be workloads that are already in cloud instances, VMs, or containers. As the industry has started to move away from hypervisors with per-core licensing or per-socket license constraints, scaling up to bigger, faster cores that go underutilized makes little sense.

As a result, the industry realized it needed lower-cost chips that chase density instead of per-core performance. A useful way to think about this is to imagine fitting the maximum number of instances of those small line-of-business applications, developed over the years and sitting in 2-8 core VMs, into as few servers as possible. There are other applications like this as well, such as nginx web servers, redis servers, and so forth. Another great example is that some online game instances require one core per user in the data center, even if that core is relatively meager. Sometimes, more cores = more better.
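That fitting exercise is just bin packing. Here is a minimal sketch using first-fit-decreasing, with an invented fleet of small VMs and illustrative per-server core counts; none of these figures come from the article.

```python
# Hedged sketch: how many servers does a fleet of small 2-8 vCPU VMs need,
# comparing a 32-core legacy box against a 128-core cloud-native part?
# First-fit-decreasing bin packing; all numbers are illustrative.
def servers_needed(vm_cores, cores_per_server):
    bins = []  # free cores remaining on each server
    for need in sorted(vm_cores, reverse=True):
        for i, free in enumerate(bins):
            if free >= need:
                bins[i] -= need
                break
        else:
            bins.append(cores_per_server - need)  # open a new server
    return len(bins)

fleet = [2, 4, 8, 4, 2, 6, 8, 2, 4, 4] * 20  # 200 small line-of-business VMs
legacy = servers_needed(fleet, 32)    # e.g. a dual 16-core Xeon era box
dense  = servers_needed(fleet, 128)   # e.g. a 128-core cloud-native CPU
print(legacy, dense)  # 28 vs 7 servers for this illustrative fleet
```

Even this toy packing shows the 4x rack-level consolidation the rest of the article describes.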

Once the constraints of legacy hypervisor per-core/per-socket licensing are removed, the question becomes how to fit as many cores on a package, and then how densely those packages can be deployed in a rack. One other trend we are seeing is not just more cores, but also lower clock speeds. CPUs that have a maximum frequency in the 2-3GHz range today tend to be considerably more power efficient than P-core-only server parts in the 4GHz+ range and desktop CPUs now pushing well over 5GHz. This is the voltage-frequency curve at work. If your goal is to have more cores, but you do not need maximum per-core performance, then lowering performance per core by 25% while decreasing power by 40% or more means that all of those applications are being serviced with less power.
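A back-of-envelope sketch of that voltage-frequency trade-off, using the standard approximation that dynamic power scales with V² times frequency. The 10% voltage reduction is an assumed figure for illustration, not a number from the article or measured silicon.

```python
# Dynamic CPU power scales roughly as P ~ C * V^2 * f. Running a core slower
# lets it run at lower voltage, and the squared voltage term is the payoff.
def rel_power(freq_ratio: float, volt_ratio: float) -> float:
    return freq_ratio * volt_ratio ** 2

# Assumption: ~25% lower frequency permits ~10% lower operating voltage.
p = rel_power(0.75, 0.90)
print(f"{p:.4f}")  # 0.6075 -> roughly the ~40% power reduction cited above
```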

Less power is important for a number of reasons. Today, the biggest is the AI infrastructure build-out. If you, for example, saw our 49ers Levi's Stadium tour video, that is a perfect example of a data center that is not going to expand in footprint and can only expand cooling so much. It is also a prime example of a location that needs AI servers for sports analytics.

That type of constraint where the same traditional work needs to get done, in a data center footprint that is not changing, while adding more high-power AI servers is a key reason cloud-native compute is moving beyond the cloud. Transitioning applications running on 2017-2021 era Xeon servers to modern cloud-native cores with approximately the same performance per core can mean 4-5x the density per system at ~2x the power consumption. As companies release new generations of CPUs, the density figures are increasing at a steep rate.
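A quick sanity check of the consolidation arithmetic above, taking the 4x density and 2x power-per-system figures at face value; the 20-server starting fleet is an invented example.

```python
# 20 legacy servers' worth of VMs consolidated onto cloud-native systems
# hosting ~4x the cores per box at ~2x the power per box (figures from the
# article's claim; the fleet size is illustrative).
old_servers = 20
density_gain, power_gain = 4.0, 2.0

new_servers = old_servers / density_gain
total_power_ratio = (new_servers * power_gain) / old_servers

print(new_servers, total_power_ratio)  # 5.0 0.5 -> a quarter of the boxes, half the power
```

The same work lands in a quarter of the rack space at half the total power, which is exactly the headroom argument for adding AI servers.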

We showed this at play with the same era of servers and modern P-core servers in our 5th Gen Intel Xeon Processors Emerald Rapids review.

We also covered the consolidation just between P-core generations in the accompanying video. We are going to have an article with the current AMD EPYC Bergamo parts very soon in a similar vein.

If you are not familiar with the current players in the cloud-native CPU market that you can buy for your own data centers or colocation, here is a quick run-down.

The AMD EPYC Bergamo was AMD's first foray into cloud-native compute. Onboard, it has up to 128 cores/256 threads, and it is the densest publicly available x86 server CPU today.

AMD removed L3 cache from its P-core design, lowered the maximum all-core frequencies to decrease overall power, and did extra work to decrease the core size. The result is the same Zen 4 core IP with less L3 cache and less die area. Less die area means more cores can be packaged together onto a CPU.

Some stop with Bergamo, but AMD has another Zen 4c chip in the market. The AMD EPYC 8004 series, codenamed Siena, also uses Zen 4c, but with half the memory channels, less PCIe Gen5 I/O, and single-socket-only operation.

Some organizations that are upgrading from popular dual 16 core Xeon servers can move to single socket 64-core Siena platforms and stay within a similar power budget per U while doubling the core count per U using 1U servers.

AMD markets Siena as an edge/embedded part, but we need to recognize that it is in the vein of current-generation cloud-native processors.

Arm has been making a huge splash in the space. The only Arm server CPU vendor out there for those buying their own servers is Ampere, led by many former members of the Intel Xeon team.

Ampere has two main chips, the Ampere Altra (up to 80 cores) and Altra Max (up to 128 cores). These use the same socket, so most servers can support either; the Max simply came out later to support up to 128 cores.

Here, the focus on cloud-native compute is even more pronounced. Instead of having beefy floating-point compute capabilities, Ampere uses Arm Neoverse N1 cores that focus on low-power integer performance. It turns out that a huge number of workloads, like serving web pages, are mostly driven by integer performance. While these may not be the cores you would pick to build a Linpack Top500 supercomputer, they are great for web servers. Since the cloud-native compute idea was to build cores and servers that can run these workloads with little to no compromise, but at lower power, that is what Arm and Ampere built.

Next up will be the AmpereOne. This is already shipping, but we have yet to get one in the lab.

AmpereOne uses a custom designed core for up to 192 cores per socket.

Assuming you could buy a server with AmpereOne, you would get more core density than an AMD EPYC Bergamo server (192 vs 128 cores), but fewer threads (192 vs 256). If you had 1 vCPU VMs, AmpereOne would be denser. If you had 2 vCPU VMs, Bergamo would be denser. SMT has been a challenge in the cloud due to some of the security surfaces it exposes.
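That density comparison can be sketched as follows, under the assumption (suggested by the SMT security note) that VMs never share a physical core, though a VM may use both SMT threads of a core it owns.

```python
import math

def max_vms(cores: int, threads_per_core: int, vcpus_per_vm: int) -> int:
    # Each VM gets whole cores; a core's SMT threads serve only that one VM.
    cores_per_vm = math.ceil(vcpus_per_vm / threads_per_core)
    return cores // cores_per_vm

# AmpereOne: 192 cores, no SMT. Bergamo: 128 cores, 2 threads per core.
print(max_vms(192, 1, 1), max_vms(128, 2, 1))  # 192 vs 128: AmpereOne denser
print(max_vms(192, 1, 2), max_vms(128, 2, 2))  # 96 vs 128: Bergamo denser
```

With SMT siblings shared across tenants the arithmetic flips for 1 vCPU VMs (256 threads on Bergamo), which is exactly why the security caveat matters.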

Next in the market will be the Intel Sierra Forest. Intel's new cloud-native processor will offer up to 144/288 cores. Perhaps most importantly, it is aiming for a low power-per-core metric while also maintaining x86 compatibility.

Intel is taking its efficient E-core line and bringing it to the Xeon market. We have seen massive gains in E-core performance in embedded as well as lower-power lines like Alder Lake-N, where we saw greater than 2x generational performance per chip. Now, Intel is splitting its line into P-cores for compute-intensive workloads and E-cores for high-density scale-out compute.

Intel will offer Granite Rapids as an update to the current 5th Gen Xeon Emerald Rapids for all-P-core designs later in 2024. Sierra Forest will be the first-generation all-E-core design and is planned for the first half of 2024. Intel has already announced that the next generation, Clearwater Forest, will continue the all-E-core line. As a full disclosure, this is a launch I have been excited about for years.

We are going to quickly mention the NVIDIA Grace Superchip here, with up to 144 cores across two dies packaged along with LPDDR memory.

At 500W and using Arm Neoverse V2 performance cores, this is not what one would normally think of as a cloud-native processor, but it does have something really different. The Grace Superchip has onboard memory packaged alongside its Arm CPUs. As a result, that 500W covers both CPU and memory. There are applications that are primarily memory-bandwidth bound, not necessarily core-count bound. For those applications, something like a Grace Superchip can actually end up being a lower-power solution than some of the other cloud-native offerings. These are also not the easiest to get, and they are priced at a significant premium. One could easily argue these are not cloud-native, but if our definition is doing the same work in a smaller, more efficient footprint, then the Grace Superchip might actually fall into that category for a subset of workloads.

If you were excited for our 2nd to 5th Gen Intel Xeon server consolidation piece, get ready. To say that the piece we did in late 2023 was just the beginning would be an understatement.

While many are focused on AI build-outs, projects to shrink portions of existing compute footprints by 75% or more are certainly possible, making more space, power, and cooling available for new AI servers. Also, just from a carbon footprint perspective, using newer and significantly more power-efficient architectures to do baseline application hosting makes a lot of sense.

The big question in the industry right now on CPU compute is whether cloud native energy-efficient computing is going to be 25% of the server CPU market in 3-5 years, or if it is going to be 75%. My sense is that it likely could be 75%, or perhaps should be 75%, but organizations are slow to move. So at STH, we are going to be doing a series to help overcome that organizational inertia and get compute on the right-sized platforms.

More:

Cloud Native Efficient Computing is the Way in 2024 and Beyond - ServeTheHome

ChatGPT Stock Predictions: 3 Cloud Computing Companies the AI Bot Thinks Have 10X Potential – InvestorPlace

In a world continually reshaped by technology, cloud computing stands as a pivotal force driving transformation. With its rapid ascent, early investors in cloud computing stocks have seen their investments significantly outperform the S&P 500. This highlights the sector's explosive growth and its vital impact on business and consumer landscapes.

2024 shouldn't be any different, which is why, in seizing this momentum, I turned to ChatGPT, initiating my research on the top cloud computing picks with a precise ask:

"Kindly conduct an in-depth exploration of the current dynamics and trends characterizing the United States stock market as of February 2024."

I proceeded with a targeted request to unearth gems within the cloud computing arena.

"Based on this, suggest three cloud computing stocks that have 10 times potential."

The crucial insights provided by ChatGPT lay the foundation for our piece covering the three cloud computing stocks pinpointed by AI as top contenders poised to deliver stellar returns.


Datadog Inc. (NASDAQ:DDOG) has emerged as a stalwart in the observability and security platform sector for cloud applications. It witnessed an impressive 61.76% stock surge in the past year and currently trades at $134.91.

Further, the company's third-quarter 2023 financial report underscores its robust performance: 25% year-over-year (YOY) revenue growth, reaching $547.5 million. Also compelling is the significant uptick in customers, from 22,200 to 26,800, signaling the firm's efficiency in expanding its client base and driving revenue.

Simultaneously, Datadog's work in generative artificial intelligence (AI) and large language models (LLMs) points to potential growth in cloud workloads. AI-related usage comprised 2.5% of third-quarter annual recurring revenue. This resonates notably with next-gen AI-native customers and positions the company for sustained growth in this dynamic landscape.

The projected $568 million in revenue for the fourth quarter of 2024 reflects a commitment to sustained expansion and underlines the company's ability to adapt to market dynamics and capitalize on emerging opportunities.


Zscaler, Inc. (NASDAQ:ZS) is a pioneer in providing cloud-based information security solutions.

The company made a noteworthy shift to 100% renewable energy for its offices and data centers in November 2021, solidifying its standing as an environmental steward and market leader. Also, CEO Jay Chaudhry emphasizes that beyond providing top-notch cybersecurity, Zscaler's cloud services contribute to environmental conservation by eliminating the need for on-premises hardware.

Beyond sustainability, Zscaler thrives financially, boasting 7,700 customers, including 468 contributing over $1 million in annual recurring revenue (ARR). In the first quarter, non-GAAP earnings per share exceeded expectations at 67 cents, beating estimates by 18 cents, and revenue soared to $496.7 million, a remarkable 39.7% YOY bump.

Looking forward, second-quarter guidance forecasts revenue between $505 million and $507 million, indicating robust 30.5% YOY growth, with an ambitious target of $2.09 billion to $2.10 billion for the entire fiscal year. Thus, Zscaler attributes its success to a potent combination of technology and financial acumen.


Snowflake (NASDAQ:SNOW) stands resilient amid market fluctuations, emerging as a top performer in the cloud stock landscape over the past year.

Moreover, while yet to reach previous all-time highs, its strategic focus on AI integrations has propelled its recent success. Positioned at the intersection of the enduring narrative around AI and the high-interest cloud computing sector, Snowflake captures attention with its forward-looking approach.

Financially, Snowflake demonstrates robust figures, with a gross profit margin of 67.09% signaling financial strength. Additionally, its impressive 40.87% revenue growth significantly outpaces the sector median by 773.93%. This attests to the company's agility in navigating market dynamics.

Peering into the future, Snowflake's fourth-quarter guidance paints a promising picture, with anticipated product revenue falling between $716 million and $721 million. Elevating the outlook, the fiscal year 2024 projection boldly sets a target of $2.65 billion in product revenue. This ambitious trajectory demonstrates Snowflake's adept market navigation, savvy AI integration, and steadfast commitment to robust financial performance.

On the publication date, Muslim Farooque did not have (directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Muslim Farooque is a keen investor and an optimist at heart. A life-long gamer and tech enthusiast, he has a particular affinity for analyzing technology stocks. Muslim holds a bachelor of science degree in applied accounting from Oxford Brookes University.

See the rest here:

ChatGPT Stock Predictions: 3 Cloud Computing Companies the AI Bot Thinks Have 10X Potential - InvestorPlace

The 3 Best Cloud Computing Stocks to Buy in February 2024 – InvestorPlace

These cloud computing stocks can march higher in 2024


Cloud computing has helped corporations increase productivity and reduce costs. Once a business uses cloud computing, it continues to pay annual fees to keep its digital infrastructure.

Cloud solutions can quickly become a company's backbone; it's one of the last costs some companies will think of cutting. Firms that operate in the cloud computing industry often benefit from high renewal rates, recurring revenue, and the ability to raise prices in the future. Investors can capitalize on the trend with these cloud computing stocks.


Amazon (NASDAQ:AMZN) had a record-breaking Black Friday and optimized its logistics to offer the fastest delivery speeds ever for Amazon Prime members. Over seven billion products arrived at people's doors the same or next day after ordering. It's a testament to Amazon's vast same-day delivery network, which encompasses 110 U.S. metro areas and more than 55 dedicated same-day sites across the United States.

The delivery network makes Amazon Prime more enticing for current members and people on the fence. The company's efforts paid off and resulted in 14% year-over-year (YoY) revenue growth in the fourth quarter of 2023.

Amazon's ventures into artificial intelligence (AI) can also lead to meaningful stock appreciation. The company's generative AI investments have paid off and strengthened Amazon Web Services' value proposition. Developers can easily scale AI apps with Amazon's Bedrock. These resources can help corporations increase productivity and generate more sales.

Innovations like these will help Amazon generate more traction for its e-commerce and cloud computing segments. The AI sector has many tailwinds that can help Amazon stock march higher for long-term investors.


Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL) is a staple in many funds. The equity has outperformed the broader market with a 58% gain over the past year. Shares are up by 170% over the past five years.

Shares trade at a reasonable 22x forward P/E ratio. The stock initially lost some value after earnings but has pared some of its losses. The earnings report wasn't too bad, with 13% YoY revenue growth and 52% YoY net income growth.

Investors may have wanted higher numbers, since Meta Platforms (NASDAQ:META) reported better results. However, a 7% post-earnings drop didn't make much sense. The business model is still robust and is accelerating revenue and earnings growth. Alphabet also has a lengthy history of rewarding long-term investors.

Many analysts believe the equity looks like a solid long-term buy. The average price target implies a 9% upside. The highest price target of $175 per share suggests the equity can rally 16.5% from current levels.


ServiceNow (NYSE:NOW) is an information technology company with an advanced cloud platform that helps corporations increase their productivity and sales. The equity has comfortably outperformed the market with 1-year and 5-year gains of 77% and 248%, respectively.

The company currently trades at a 61x forward P/E ratio, meaning you'll need a long-term outlook to justify the valuation. ServiceNow certainly delivers on the financial front, increasing revenue by 26% YoY in Q4 2023. ServiceNow also reported $295 million in GAAP net income, a 97% YoY improvement; the company generated $150 million in GAAP net income during the same period last year.

Revenue is going up, and profit margins are expanding. These are two promising signs for a company that boasts a 99% renewal rate for its core product. The company's subscription revenue continues to grow at a fast clip and generates predictable annual recurring revenue.

On this date of publication, Marc Guberti held a long position in NOW. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Marc Guberti is a finance freelance writer at InvestorPlace.com who hosts the Breakthrough Success Podcast. He has contributed to several publications, including the U.S. News & World Report, Benzinga, and Joy Wallet.

Read the original:

The 3 Best Cloud Computing Stocks to Buy in February 2024 - InvestorPlace

8 Key Features of Cloud Computing You Shouldn’t Miss – Techopedia


See original here:

8 Key Features of Cloud Computing You Shouldn't Miss - Techopedia

Ex VR/AR lead at Unity joins new spatial computing cloud platform to enable the open metaverse at scale, AI, Web3 – Cointelegraph

The metaverse is reshaping the digital world and entertainment landscape. Ozone's platform empowers businesses to create, launch and profit from various 3D projects, ranging from simple galleries or meetup spaces to AAA games and complex 3D simulations, transforming how we engage with immersive content in the spatial computing era.

Apple's Vision OS launch is catalyzing mainstream adoption of interactive spatial content, opening new horizons for businesses. 95% of business leaders anticipate a positive impact from the metaverse within the next five to ten years, potentially establishing a $5 trillion market by 2030.

Ozone cloud platform has the potential to become the leading spatial computing cloud. Source: Ozone

The future of 3D technology seamlessly blends the virtual and physical realms using spatial computing technology. But, spatial computing can be challenging, especially when the tools are limited and the methods for creating 3D experiences are outdated.

A well-known venture capital firm, a16z, recently pointed out that it's time to change how game engines are used for spatial computing, describing the future of 3D engines as a "cloud-based 3D creation engine," and this is exactly what the Ozone platform is.

The Ozone platform is a robust cloud computing platform for 3D applications. Source: Ozone

The platform's OZONE token is an innovative implementation of crypto at the software-as-a-service (SaaS) platform level. You can think of the OZONE token as the core platform token that will unlock higher levels of spatial and AI computing over time, fully deployed and interoperating throughout worlds powered by the Ozone cloud.

"Ozone is fully multichain and cross-chain, meaning it supports all wallets, blockchains, NFT collections and cryptocurrencies, and has already integrated several in the web studio builder with full interoperability across spatial experiences," said Jay Essadki, executive director for Ozone.

Ozone Studios already integrated and validated spatial computing cross-chain interoperability. Source: Ozone Studio

He added, "You can think of the Ozone composable spatial computing cloud as an operating system, or as a development environment. It continuously evolves by integrating new technologies and services."

The OZONE token, positioned as the currency of choice, offers not just discounts and commercial benefits but also, through the integration with platform oracles and cross-chain listings, enables the first comprehensive horizontally and vertically integrated Web3 ecosystem for the metaverse and spatial computing era.

Ozone eliminates technical restrictions and makes spatial computing, Web3 and AI strategies accessible to organizations looking to explore the potential of the metaverse with almost no technical overhead or debt.

Ozone is coming out of stealth with a cloud infrastructure supported by AI and Web3 microservices, and it is expanding its executive, engineering and advisory teams as it raises more capital with a view to replacing legacy game engines such as Unreal or Unity.

At the same time, Ozone provides full support for assets created in those engines to be deployed on the Ozone platform across Web2 and Web3 alike.

Ozone is also engaged in numerous enterprise and government discussions, establishing and closing customer relationships ahead of its initial cloud infrastructure deployment.

Ozone welcomes new advisors as the platform comes out of stealth.

Ozone's new 2024 advisors to make the open metaverse happen:

Ozone will finalize a full game engine based on fully integrated micro-templates that will make building and deploying games and 3D spatial computing experiences as simple as clicking a few buttons, and it is already working.

The upcoming features on the Ozone 3D Web Studio. Source: Ozone

Ozone is announcing a new suite of templatized games with multi-AI integration. Three completed games (Quest, Hide and Seek, and RPG) are coming in 2024, and more are underway.

It opens up the way to building interactive 3D experiences in a new way.

Ozone helps companies to build and share 3D experiences. Source: Ozone

At the heart of Ozone is the innovative Studio 3D development platform, complemented by a marketplace infrastructure to support e-commerce and the economy.

Ozone's SaaS platform empowers businesses to create, deploy and monetize spatial computing experiences at scale for Web3 or traditional e-commerce applications. The platform's features, including social infrastructure, AI integration and gamification elements, enhance the interactive aspect of 3D experiences, digital twins and spatial data automation, while providing full interoperability and portability of content and data across experiences and devices.

Ozone's vision of becoming the industry standard for interactive 3D development, with compatibility across devices and accessibility from any device, positions it as a catalyst for innovation in media and entertainment. Ozone is set to play a key role in shaping the future of immersive spatial web experiences.

Ozone has secured investments from prominent Web3 VC funds and is opening its first-ever VC equity financing round.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain in this sponsored article, readers should do their own research before taking any actions related to the company and carry full responsibility for their decisions. This article cannot be considered investment advice.

View post:

Ex VR/AR lead at Unity joins new spatial computing cloud platform to enable the open metaverse at scale, AI, Web3 - Cointelegraph

Get Rich Quick With These 3 Cloud Computing Stocks to Buy Now – InvestorPlace

Cloud computing companies are essential to our day-to-day lives: they keep us interconnected, streamline our operations, and make us more efficient and effective. Their technological solutions simplify tasks across departments, from finance to human resources.

If you want to take advantage of the boom and the strong demand for these companies' services, here are three cloud computing stocks to buy quickly that you can consider adding to your portfolio.


Behind many pharmaceutical and biotech companies is a big player that provides cloud-based software solutions to streamline their operations: Veeva Systems Inc (NYSE:VEEV).

Financially, VEEV is stable and always on the move. Its revenues speak for themselves, as they are on the rise, and its net income is growing consistently, which is reflected in its market performance.

One of the particularities that distinguishes this company is its capacity for innovation.

For example, their most recent release, the Veeva Compass Suite, is a comprehensive set of tools that gives healthcare companies a much deeper understanding of existing patient populations and a picture of healthcare provider behaviors.

It's practically like having a complete and specific picture of the entire healthcare network landscape.

On top of that, they make a real impact on the lives of patients, as their training solutions are helping many companies modernize their employee qualification processes.


Next on the list of companies involved in the cloud computing sector is Workday Inc (NASDAQ:WDAY), which specializes in providing companies with cloud-based enterprise applications for financial management and human resources.

It provides practical software solutions that let companies streamline how they manage their financial operations and human talent.

One of the things that makes this company attractive is its strong financial performance: in its most recent quarter, revenue increased 16.7% year over year to $1.87 billion.

Among its most important metrics is subscription revenue, which grew even faster than total revenue, rising 18.1% to approximately $1.69 billion.
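As a quick sanity check on those growth figures, the prior-year base can be backed out from a reported year-over-year rate. A minimal sketch, using only the numbers reported above (rounded, so results are approximate):

```python
def implied_prior_revenue(current_billions: float, growth_rate: float) -> float:
    """Back out the implied prior-year figure from a reported
    year-over-year growth rate: prior = current / (1 + rate)."""
    return current_billions / (1 + growth_rate)

# Workday's reported quarter: total revenue $1.87B, up 16.7% YoY
total_prior = implied_prior_revenue(1.87, 0.167)
# Subscription revenue: $1.69B, up 18.1% YoY
subs_prior = implied_prior_revenue(1.69, 0.181)

print(f"Implied prior-year total revenue: ${total_prior:.2f}B")        # ~$1.60B
print(f"Implied prior-year subscription revenue: ${subs_prior:.2f}B")  # ~$1.43B
```

This also confirms the article's point: subscription revenue grew from a smaller base at a faster clip than total revenue.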

Beyond these numbers, the company is forming notable strategic alliances, including a partnership with McLaren Racing to provide the team with innovative solutions.

This partnership demonstrates Workday's versatility: it not only delivers business solutions in traditional sectors but also participates in highly competitive industries.


And closing out this list of companies essential to our daily lives is the giant Oracle Corporation (NYSE:ORCL), a technology company recognized worldwide.

Oracle specializes in data management solutions and, of course, cloud computing. One of its main commitments is helping organizations improve efficiency and optimize operations through innovative technological solutions.

Financially, the company is in a phase of solid growth, particularly in total revenue and in its cloud division.

One of the stars of this company is its cloud application suite, which has gained a strong foothold in the healthcare sector.

Large institutions such as Baptist Health Care and the University of Chicago Medicine are adopting Oracle's solutions to improve both the employee experience and patient care.

In addition, Oracle is expanding its global presence with the opening of a new cloud region in Nairobi, Kenya, underscoring its commitment to economic and technological development across the African continent.

Oracle Cloud Infrastructure's (OCI) distinctive architecture gives the company an advantage in offering governments and businesses in the region the opportunity to drive innovation and growth.

As of this writing, Gabriel Osorio-Mazzilli did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Gabriel Osorio is a former Goldman Sachs and Citigroup employee. He possesses discipline in bottom-up value investing and volatility-based long/short equities trading.
