Some of the world's biggest cloud computing firms want to make millions of servers last longer - doing so will save … – Yahoo! Voices

Some of the world's largest cloud computing firms, including Alphabet, Amazon, and Cloudflare, have found a way to save billions by extending the lifespan of their servers - a move expected to significantly reduce depreciation costs, increase net income, and contribute to their bottom lines.

Alphabet, Google's parent company, started this trend in 2021 by extending the lifespan of its servers and networking equipment. By 2023, the company decided that both types of hardware could last six years before needing to be replaced. This decision led to the company saving $3.9 billion in depreciation and increasing net income by $3.0 billion last year.

These savings will go towards Alphabet's investment in technical infrastructure, particularly servers and data centers, to support the exponential growth of AI-powered services.

Like Alphabet, Amazon also recently completed a "useful life study" for its servers, deciding to extend their working life from five to six years. This change is predicted to contribute $900 million to net income in Q1 of 2024 alone.

Cloudflare followed a similar path, extending the useful life of its server and network equipment from four to five years starting in 2024. This decision is expected to result in a modest impact of $20 million.

Tech behemoths are facing increasing costs from investing in AI and technical infrastructure, so any savings that can be made elsewhere are vital. The move to extend the life of servers isn't just a cost-cutting exercise, however; it also reflects continuous advancements in hardware technology and improvements in data center design.

Continue reading here:

Some of the world's biggest cloud computing firms want to make millions of servers last longer - doing so will save ... - Yahoo! Voices

Some of the world's biggest cloud computing firms want to make millions of servers last longer - doing so will save … – TechRadar

Read more from the original source:

Some of the world's biggest cloud computing firms want to make millions of servers last longer - doing so will save ... - TechRadar

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More – AnandTech

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers, as well as in emerging devices like software-defined vehicles (SDVs), there is a global trend toward custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors including automotive, gaming consoles, data centers, telecom, and others that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney's LinkedIn profile as VP of Silicon Engineering reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of the reported division.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement about implementing its networking technologies in third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now runs on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts see the possibility of an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, given its huge volumes.

"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocked datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and it would certainly like to expand this business. The consumer business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.

"NVIDIA is of course interested in expanding its footprint in consoles right now they are supplying the biggest selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."

See more here:

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More - AnandTech

Confidential Computing and Cloud Sovereignty in Europe – The New Stack

Confidential computing is emerging as a potential game-changer in the cloud landscape, especially in Europe, where data sovereignty and privacy concerns take center stage. Will confidential computing be the future of cloud in Europe? Does it solve cloud sovereignty issues and adequately address privacy concerns?

At its core, confidential computing empowers organizations to safeguard their sensitive data even while it is being processed. Unlike traditional security measures that focus on securing data at rest or in transit, confidential computing ensures end-to-end protection, including during computation. This is achieved by creating secure enclaves - isolated areas within a computer's memory where sensitive data can be processed without exposure to the broader system.

Cloud sovereignty, or the idea of retaining control and ownership over data within a country or region, is gaining traction as a critical aspect of digital autonomy. Europe, in its pursuit of technological independence, is embracing confidential computing as a cornerstone in building a robust cloud infrastructure that aligns with its values of privacy and security.

While the promise of confidential computing is monumental, challenges such as widespread adoption, standardization and education need to be addressed. Collaborative efforts between governments, industries and technology providers will be crucial in overcoming these challenges and unlocking the full potential of this transformative technology.

As Europe marches toward a future where data is not just a commodity but a sacred trust, confidential computing emerges as the key to unlocking the full spectrum of possibilities. By combining robust security measures with the principles of cloud sovereignty, Europe is poised to become a global leader in shaping a trustworthy and resilient digital future.

"The era of confidential computing calls, and Europe stands prepared to respond." - Margrethe Vestager, the European Commission's executive vice president for a Europe Fit for the Digital Age.

To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon Europe in Paris from Mar. 19-22, 2024.


Follow this link:

Confidential Computing and Cloud Sovereignty in Europe - The New Stack

Akamai CEO Tom Leighton on Q4 results: Cloud computing is our strongest growth area – CNBC

Akamai Technologies CEO and co-founder Tom Leighton joins 'Squawk Box' to discuss the company's quarterly earnings results, which beat Wall Street's profit expectations but missed on revenue, growth outlook for its cloud computing services, and more.

06:35 | Wed, Feb 14 2024, 7:40 AM EST

Read this article:

Akamai CEO Tom Leighton on Q4 results: Cloud computing is our strongest growth area - CNBC

Cloud Native Efficient Computing is the Way in 2024 and Beyond – ServeTheHome

Today we wanted to discuss cloud native and efficient computing. Many have different names for this, but it is going to be the second most important computing trend in 2024, behind the AI boom. Modern performance cores have gotten so big and fast that there is a new trend in the data center: using smaller and more efficient cores. Over the next few months, we are going to be doing a series on this trend.

As a quick note: We get CPUs from all of the major silicon players. Also, since we have tested these CPUs in Supermicro systems, we are going to say that they are all sponsors of this, but it is our own idea and content.

Let us get to the basics. Once AMD re-entered the server market (and desktop) with a competitive performance core in 2017, performance per core and core counts exploded almost as fast as pre-AI boom slideware on the deluge of data. As a result, cores got bigger, cache sizes expanded, and chips got larger. Each generation of chips got faster.

Soon, folks figured out a dirty secret in the server industry: faster per-core performance is good if you license software by core, but there is a wide variety of applications that need cores, but not fast ones. Today's smaller efficient cores tend to be on the order of performance of a mainstream Skylake/Cascade Lake Xeon from 2017-2021, yet they can be packed more densely into systems.

Consider this illustrative scenario that is far too common in the industry:

Here, we have several apps built by developers over the years. Each needs its own VM and each VM is generally between 2-8 cores. These are applications that need to be online 24/7 but are not ones that need massive amounts of compute. Good examples are websites that serve a specific line-of-business function but do not have hundreds of thousands of visitors. Also, these tend to be workloads that are already in cloud instances, VMs, or containers. As the industry has started to move away from hypervisors with per-core licensing or per-socket license constraints, scaling up to bigger, faster cores that are going underutilized makes little sense.

As a result, the industry realized it needed lower-cost chips that chase density instead of per-core performance. An awesome way to think about this is to think about trying to fit the maximum number of instances for those small line-of-business applications developed over the years that are sitting in 2-8 core VMs into as few servers as possible. There are other applications like this as well that are commonly shown, such as nginx web servers, redis servers, and so forth. Another great example is that some online game instances require one core per user in the data center, even if that core is relatively meager. Sometimes just having more cores is, well, more cores = more better.

Once the constraints of legacy hypervisor per-core/per-socket licensing are removed, the question becomes how to fit as many cores on a package, and then how densely those packages can be deployed in a rack. One other trend we are seeing is not just more cores, but also lower clock speed cores. CPUs that have a maximum frequency in the 2-3GHz range today tend to be considerably more power efficient than P-core-only servers with frequencies in the 4GHz+ range and desktop CPUs now pushing well over 5GHz. This is the voltage-frequency curve at work. If your goal is to have more cores, but you do not need maximum per-core performance, then lowering per-core performance by 25% while decreasing power by 40% or more means that all of those applications are being serviced with less power.
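To make that voltage-frequency tradeoff concrete, here is a toy calculation. It is a minimal sketch, assuming dynamic power scales roughly with V²f and that voltage tracks frequency in this operating range; the numbers are illustrative assumptions, not measurements of any specific CPU.

```python
# Toy model of the voltage-frequency curve: dynamic power ~ C * V^2 * f,
# with the simplifying assumption that voltage scales with frequency.
def relative_power(freq_ratio: float) -> float:
    voltage_ratio = freq_ratio            # assumption: V tracks f in this range
    return voltage_ratio ** 2 * freq_ratio

baseline = relative_power(1.00)           # e.g. a ~4GHz P-core
efficient = relative_power(0.75)          # ~25% lower per-core performance

savings = (1 - efficient / baseline) * 100
print(f"25% lower per-core performance -> ~{savings:.0f}% lower dynamic power")
# In this cubic model the saving is ~58%; even a much flatter real-world curve
# comfortably clears the 40%+ reduction described above.
```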

Less power is important for a number of reasons. Today, the biggest reason is the AI infrastructure build-out. If you, for example, saw our 49ers Levi's Stadium tour video, that is a perfect example of a data center that is not going to expand in footprint and can only expand cooling so much. It is also a prime example of a location that needs AI servers for sports analytics.

That type of constraint, where the same traditional work needs to get done in a data center footprint that is not changing while more high-power AI servers are added, is a key reason cloud-native compute is moving beyond the cloud. Transitioning applications running on 2017-2021 era Xeon servers to modern cloud-native cores with approximately the same performance per core can mean 4-5x the density per system at ~2x the power consumption. As companies release new generations of CPUs, the density figures are increasing at a steep rate.
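As a back-of-the-envelope check on those consolidation numbers, here is a small sketch; the server core counts and power figures are illustrative assumptions, not benchmark results.

```python
# Hypothetical consolidation: dual 16-core 2017-2021 era Xeon boxes replaced by
# a dense cloud-native system with similar per-core performance.
legacy_cores_per_server = 32      # assumption: dual 16-core Xeon
legacy_power_w = 500              # assumption
modern_cores_per_server = 128     # assumption: one dense cloud-native socket
modern_power_w = 1000             # ~2x the power per system, as described above

density_gain = modern_cores_per_server / legacy_cores_per_server
legacy_w_per_core = legacy_power_w / legacy_cores_per_server
modern_w_per_core = modern_power_w / modern_cores_per_server

print(f"{density_gain:.0f}x the cores per system at ~2x the system power")
print(f"watts per core: {legacy_w_per_core:.1f} legacy vs {modern_w_per_core:.1f} modern")
```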

We showed this at play with the same era of servers and modern P-core servers in our 5th Gen Intel Xeon Processors Emerald Rapids review.

We also covered the consolidation just between P-core generations in the accompanying video. We are going to have an article with the current AMD EPYC Bergamo parts very soon in a similar vein.

If you are not familiar with the current players in the cloud-native CPU market that you can buy for your own data centers or colocation, here is a quick rundown.

The AMD EPYC Bergamo was AMD's first foray into cloud-native compute. Onboard, it has up to 128 cores/256 threads and is the densest publicly available x86 server CPU today.

AMD removed L3 cache from its P-core design, lowered the maximum all core frequencies to decrease the overall power, and did extra work to decrease the core size. The result is the same Zen 4 core IP, with less L3 cache and less die area. Less die area means more can be packaged together onto a CPU.

Some stop with Bergamo, but AMD has another Zen 4c chip in the market. The AMD EPYC 8004 series, codenamed Siena, also uses Zen 4c but with half the memory channels, less PCIe Gen5 I/O, and single-socket-only operation.

Some organizations that are upgrading from popular dual 16 core Xeon servers can move to single socket 64-core Siena platforms and stay within a similar power budget per U while doubling the core count per U using 1U servers.

AMD markets Siena as the edge/embedded part, but we need to recognize this is in the vein of current-gen cloud-native processors.

Arm has been making a huge splash in the space. The only Arm server CPU vendor out there for those buying their own servers is Ampere, led by many of the former Intel Xeon team.

Ampere has two main chips, the Ampere Altra (up to 80 cores) and Altra Max (up to 128 cores). These use the same socket, so most servers can support either; the Max just came out later to support up to 128 cores.

Here, the focus on cloud-native compute is even more pronounced. Instead of having beefy floating point compute capabilities, Ampere is using Arm Neoverse N1 cores that focus on low power integer performance. It turns out, a huge number of workloads like serving web pages are mostly integer performance driven. While these may not be the cores if you wanted to build a Linpack Top500 supercomputer, they are great for web servers. Since the cloud-native compute idea was to build cores and servers that can run workloads with little to no compromise, but at lower power, that is what Arm and Ampere built.

Next up will be the AmpereOne. This is already shipping, but we have yet to get one in the lab.

AmpereOne uses a custom designed core for up to 192 cores per socket.

Assuming you could buy a server with AmpereOne, you would get more core density than an AMD EPYC Bergamo server (192 vs. 128 cores) but fewer threads (192 vs. 256 threads). If you had 1 vCPU VMs, AmpereOne would be denser. If you had 2 vCPU VMs, Bergamo would be denser. SMT has been a challenge in the cloud due to some of the security surfaces it exposes.
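A quick sketch of that density comparison, assuming (per the SMT caveat above) that hardware threads from one core are never shared across tenants, so a 1 vCPU VM occupies a whole physical core:

```python
# Per-socket VM density for the two parts discussed above.
bergamo = {"cores": 128, "threads_per_core": 2}      # SMT2
ampere_one = {"cores": 192, "threads_per_core": 1}   # no SMT

def vms_per_socket(cpu: dict, vcpus_per_vm: int) -> int:
    # a VM may use at most one core's worth of SMT siblings
    usable_threads_per_core = min(cpu["threads_per_core"], vcpus_per_vm)
    return cpu["cores"] * usable_threads_per_core // vcpus_per_vm

for n in (1, 2):
    print(f"{n}-vCPU VMs: Bergamo {vms_per_socket(bergamo, n)}, "
          f"AmpereOne {vms_per_socket(ampere_one, n)}")
# 1-vCPU VMs: Bergamo 128, AmpereOne 192 -> AmpereOne denser
# 2-vCPU VMs: Bergamo 128, AmpereOne 96  -> Bergamo denser
```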

Next in the market will be Intel Sierra Forest. Intel's new cloud-native processor will offer up to 144/288 cores. Perhaps most importantly, it is aiming for a low power-per-core metric while also maintaining x86 compatibility.

Intel is taking its efficient E-core line and bringing it to the Xeon market. We have seen massive gains in E-core performance in both embedded as well as lower-power lines like the Alder Lake-N where we saw greater than 2x generational performance per chip. Now, Intel is splitting its line into P-cores for compute intensive workloads and E-cores for high-density scale-out compute.

Intel will offer Granite Rapids as an update to the current 5th Gen Xeon Emerald Rapids for all P-core designs later in 2024. Sierra Forest will be the first generation all E-core design and is planned for the first half of 2024. Intel already has announced the next generation Clearwater Forest will continue the all E-core line. As a full disclosure, this is a launch I have been excited about for years.

We are going to quickly mention the NVIDIA Grace Superchip here, with up to 144 cores across two dies packaged along with LPDDR memory.

At 500W and using Arm Neoverse V2 performance cores, one would not think of this as a cloud-native processor, but it does have something really different. The Grace Superchip has onboard memory packaged alongside its Arm CPUs. As a result, that 500W is actually for CPU and memory. There are applications that are primarily memory bandwidth bound, not necessarily core count bound. For those applications, something like a Grace Superchip can actually end up being a lower-power solution than some of the other cloud-native offerings. These are also not the easiest to get, and are priced at a significant premium. One could easily argue these are not cloud-native, but if our definition is doing the same work in a smaller, more efficient footprint, then the Grace Superchip might actually fall into that category for a subset of workloads.

If you were excited for our 2nd to 5th Gen Intel Xeon server consolidation piece, get ready. To say that the piece we did in late 2023 was just the beginning would be an understatement.

While many are focused on AI build-outs, projects to shrink portions of existing compute footprints by 75% or more are certainly possible, making more space, power, and cooling available for new AI servers. Also, just from a carbon footprint perspective, using newer and significantly more power-efficient architectures to do baseline application hosting makes a lot of sense.

The big question in the industry right now on CPU compute is whether cloud native energy-efficient computing is going to be 25% of the server CPU market in 3-5 years, or if it is going to be 75%. My sense is that it likely could be 75%, or perhaps should be 75%, but organizations are slow to move. So at STH, we are going to be doing a series to help overcome that organizational inertia and get compute on the right-sized platforms.

More:

Cloud Native Efficient Computing is the Way in 2024 and Beyond - ServeTheHome

ChatGPT Stock Predictions: 3 Cloud Computing Companies the AI Bot Thinks Have 10X Potential – InvestorPlace

In a world continually reshaped by technology, cloud computing stands as a pivotal force driving transformation. With its rapid ascent, early investors in cloud computing stocks have seen their investments significantly outperform the S&P 500. This highlights the sector's explosive growth and its vital impact on business and consumer landscapes.

2024 shouldn't be any different, which is why, seizing this momentum, I turned to ChatGPT, initiating my research on the top cloud computing picks with a precise ask.

Kindly conduct an in-depth exploration of the current dynamics and trends characterizing the United States stock market as of February 2024.

I proceeded with a targeted request to unearth gems within the cloud computing arena.

Based on this, suggest three cloud computing stocks that have 10 times potential.

The crucial insights provided by ChatGPT lay the foundation for our piece covering the three cloud computing stocks pinpointed by AI as top contenders poised to deliver stellar returns.

Datadog Inc. (NASDAQ:DDOG) has emerged as a stalwart in the observability and security platform sector for cloud applications. It witnessed an impressive 61.76% stock surge in the past year and currently trades at $134.91.

Further, the company's third-quarter 2023 financial report underscores its robust performance. It showed 25% year-over-year (YOY) revenue growth, reaching $547.5 million. Additionally compelling is the significant uptick in customers from 22,200 to 26,800. This signals the firm's efficiency in expanding its client base and driving revenue.

Simultaneously, Datadog sees generative artificial intelligence (AI) and large language models (LLMs) driving potential growth in cloud workloads. AI-related usage comprised 2.5% of third-quarter annual recurring revenue. This resonates notably with next-gen AI-native customers and positions the company for sustained growth in this dynamic landscape.

The projected $568 million revenue for the fourth quarter of 2024 reflects a commitment to sustained expansion. Also, it underlines the company's ability to adapt to market dynamics and capitalize on emerging opportunities.

Zscaler, Inc. (NASDAQ:ZS) is a pioneer in providing cloud-based information security solutions.

The company made a noteworthy shift to 100% renewable energy for its offices and data centers in November 2021. This solidifies its standing as an environmental steward and leader in the market. Also, CEO Jay Chaudhry emphasizes that beyond providing top-notch cybersecurity, Zscaler's cloud services contribute to environmental conservation by eliminating the need for on-premises hardware.

Beyond sustainability, Zscaler thrives financially, boasting 7,700 customers, including 468 contributing over $1 million in annual recurring revenue (ARR). In the first quarter, non-GAAP earnings per share exceeded expectations at 67 cents, beating estimates by 18 cents. And revenue soared to $496.7 million, a remarkable 39.7% YOY bump.

Looking forward, second-quarter guidance forecasts revenue between $505 million and $507 million, indicating a robust 30.5% YOY growth. Also, it has an ambitious target of $2.09 billion to $2.10 billion for the entire fiscal year. Thus, Zscaler attributes its success to a potent combination of technology and financial acumen.

Snowflake (NASDAQ:SNOW) stands resilient amid market fluctuations, emerging as a top performer in the cloud stock landscape over the past year.

Moreover, while yet to reach previous all-time highs, its strategic focus on AI integrations has propelled its recent success. Positioned at the intersection of the enduring narrative around AI and the high-interest cloud computing sector, Snowflake captures attention with its forward-looking approach.

Financially, Snowflake demonstrates robust figures with a gross profit margin of 67.09%, signaling financial strength. Additionally, the impressive 40.87% revenue growth significantly outpaces the sector median by 773.93%. This attests to the company's agility in navigating market dynamics.

Peering into the future, Snowflake's fourth-quarter guidance paints a promising picture, with anticipated product revenue falling between $716 million and $721 million. Elevating the outlook, the fiscal year 2024 projection boldly sets a target of $2.65 billion in product revenue. Therefore, this ambitious trajectory demonstrates Snowflake's adept market navigation, savvy AI integration, and steadfast commitment to robust financial performance.

On the publication date, Muslim Farooque did not have (directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Muslim Farooque is a keen investor and an optimist at heart. A life-long gamer and tech enthusiast, he has a particular affinity for analyzing technology stocks. Muslim holds a bachelor of science degree in applied accounting from Oxford Brookes University.

See the rest here:

ChatGPT Stock Predictions: 3 Cloud Computing Companies the AI Bot Thinks Have 10X Potential - InvestorPlace

The 3 Best Cloud Computing Stocks to Buy in February 2024 – InvestorPlace

These cloud computing stocks can march higher in 2024

Cloud computing has helped corporations increase productivity and reduce costs. Once a business uses cloud computing, it continues to pay annual fees to keep its digital infrastructure.

Cloud solutions can quickly turn into a company's backbone. It's one of the last costs some companies will think of removing. Firms that operate in the cloud computing industry often benefit from high renewal rates, recurring revenue, and the ability to raise prices in the future. Investors can capitalize on the trend with these cloud computing stocks.

Amazon (NASDAQ:AMZN) had a record-breaking Black Friday and optimized its logistics to offer the fastest delivery speeds ever for Amazon Prime members. Over seven billion products arrived at people's doors the same or next day after ordering. It's a testament to Amazon's vast same-day delivery network that encompasses 110 U.S. metro areas and more than 55 dedicated same-day sites across the United States.

The delivery network makes Amazon Prime more enticing for current members and people on the fence. The company's efforts paid off and resulted in 14% year-over-year (YoY) revenue growth in the fourth quarter of 2023.

Amazon's ventures into artificial intelligence (AI) can also lead to meaningful stock appreciation. The company's generative AI investments have paid off and strengthened Amazon Web Services' value proposition. Developers can easily scale AI apps with Amazon's Bedrock. These resources can help corporations increase productivity and generate more sales.

Innovations like these will help Amazon generate more traction for its e-commerce and cloud computing segments. The AI sector has many tailwinds that can help Amazon stock march higher for long-term investors.

Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL) is a staple in many funds. The equity has outperformed the broader market with a 58% gain over the past year. Shares are up by 170% over the past five years.

Shares trade at a reasonable 22x forward P/E ratio. The stock initially lost some value after earnings but has pared back some of its losses. The earnings report wasn't too bad, with 13% YoY revenue growth and 52% YoY net income growth.

Investors may have wanted higher numbers since Meta Platforms (NASDAQ:META) reported better results. However, a 7% post-earnings drop didn't make much sense. The business model is still robust and is accelerating revenue and earnings growth. Alphabet also has a lengthy history of rewarding long-term investors.

Many analysts believe the equity looks like a solid long-term buy. The average price target implies a 9% upside. The highest price target of $175 per share suggests the equity can rally 16.5% from current levels.

ServiceNow (NYSE:NOW) is an information technology company with an advanced cloud platform that helps corporations increase their productivity and sales. The equity has comfortably outperformed the market with 1-year and 5-year gains of 77% and 248%, respectively.

The company currently trades at a 61x forward P/E ratio, meaning you'll need a long-term outlook to justify the valuation. ServiceNow certainly delivers on the financial front, increasing revenue by 26% YoY in Q4 2023. ServiceNow also reported $295 million in GAAP net income, a 97% YoY improvement. The company generated $150 million in GAAP net income during the same period last year.

Revenue is going up, and profit margins are accelerating. These are two promising signs for a company that boasts a 99% renewal rate for its core product. The company's subscription revenue continues to grow at a fast clip and generates predictable annual recurring revenue.

On this date of publication, Marc Guberti held a long position in NOW. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Marc Guberti is a finance freelance writer at InvestorPlace.com who hosts the Breakthrough Success Podcast. He has contributed to several publications, including the U.S. News & World Report, Benzinga, and Joy Wallet.

Read the original:

The 3 Best Cloud Computing Stocks to Buy in February 2024 - InvestorPlace

8 Key Features of Cloud Computing You Shouldn’t Miss – Techopedia

See original here:

8 Key Features of Cloud Computing You Shouldn't Miss - Techopedia

Ex VR/AR lead at Unity joins new spatial computing cloud platform to enable the open metaverse at scale, AI, Web3 – Cointelegraph

The metaverse is reshaping the digital world and entertainment landscape. Ozone's platform empowers businesses to create, launch and profit from various 3D projects, ranging from simple galleries or meetup spaces to AAA games and complex 3D simulations, transforming how we engage with immersive content in the spatial computing era.

Apple's Vision OS launch is catalyzing mainstream adoption of interactive spatial content, opening new horizons for businesses. 95% of business leaders anticipate a positive impact from the metaverse within the next five to ten years, potentially establishing a $5 trillion market by 2030.

Ozone cloud platform has the potential to become the leading spatial computing cloud. Source: Ozone

The future of 3D technology seamlessly blends the virtual and physical realms using spatial computing technology. But, spatial computing can be challenging, especially when the tools are limited and the methods for creating 3D experiences are outdated.

A well-known venture capital firm, a16z, recently pointed out that it's time to change how game engines are used for spatial computing, describing the future of 3D engines as a cloud-based 3D creation engine - and this is exactly what the Ozone platform is.

The Ozone platform is a robust cloud computing platform for 3D applications. Source: Ozone

The platform's OZONE token is an innovative implementation of crypto at a software-as-a-service (SaaS) platform level. You can think of the OZONE token as the core platform token that will unlock higher levels of spatial and AI computing over time, fully deployed and interoperating throughout worlds powered by our cloud.

"Ozone is fully multichain and cross-chain, meaning it supports all wallets, blockchains, NFT collections and cryptocurrencies, and already integrated several in the web studio builder with full interoperability across spatial experiences," said Jay Essadki, executive director for Ozone.

Ozone Studios already integrated and validated spatial computing cross-chain interoperability. Source: Ozone Studio

He added, "You can think of the Ozone composable spatial computing cloud as an operating system, or as a development environment. It continuously evolves by integrating new technologies and services."

The OZONE token, positioned as the currency of choice, offers not just discounts and commercial benefits but also, through the integration with platform oracles and cross-chain listings, enables the first comprehensive horizontally and vertically integrated Web3 ecosystem for the metaverse and spatial computing era.

Ozone eliminates technical restrictions and makes spatial computing, Web3 and AI strategies accessible to organizations looking to explore the potential of the metaverse with almost no technical overhead or debt.

Ozone is coming out of stealth with a cloud infrastructure supported by AI and Web3 microservices and is expanding its executive, engineering and advisory teams as it raises more capital, with a view to replacing legacy game engines such as Unreal or Unity.

At the same time, Ozone provides full support for assets created in those engines to be deployed on the Ozone platform across Web2 and Web3 alike.

Ozone is also engaged in a string of enterprise and government discussions and has been establishing and closing enterprise and government customer relationships ahead of its initial cloud infrastructure deployment.

Ozone welcomes new advisors as the platform comes out of stealth.

Ozone's new 2024 advisors to make the open metaverse happen:

Ozone will finalize a full game engine based on fully integrated micro-templates that will make the build and deployment of all games and 3D spatial computing as simple as clicking a few buttons, and it is already working.

The upcoming features on the Ozone 3D Web Studio. Source: Ozone

Ozone is announcing a new suite of templatized games. With multi-AI integration, three completed games (Quest, Hide and Seek and RPG, coming in 2024) and more are underway.

It opens up the way to building interactive 3D experiences in a new way.

Ozone helps companies to build and share 3D experiences. Source: Ozone

At the heart of Ozone is the innovative Studio 3D development platform, complemented by a marketplace infrastructure to support e-commerce and the economy.

Ozone's SaaS platform empowers businesses to create, deploy and monetize spatial computing experiences at scale for Web3 or traditional e-commerce applications. The platform's features, including social infrastructure, AI integration and gamification elements, enhance the interactive aspect of 3D experiences, digital twins and spatial data automation, while providing full interoperability and portability of content and data across experiences and across devices.

Ozone's vision of becoming the industry standard for interactive 3D development, with compatibility across devices and accessibility from any device, positions it as a catalyst for innovation in media and entertainment. Ozone is set to play a key role in shaping the future of immersive spatial web experiences.

Ozone has secured investments from prominent Web3 VC funds and is opening its first-ever VC equity financing round.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim at providing you with all important information that we could obtain in this sponsored article, readers should do their own research before taking any actions related to the company and carry full responsibility for their decisions, nor can this article be considered as investment advice.

View post:

Ex VR/AR lead at Unity joins new spatial computing cloud platform to enable the open metaverse at scale, AI, Web3 - Cointelegraph

Get Rich Quick With These 3 Cloud Computing Stocks to Buy Now – InvestorPlace

Cloud computing companies are a necessary part of our day-to-day lives: they keep us interconnected and streamline our operations, allowing us to be more efficient and effective. They also make many tasks much easier to perform through their technological solutions, which can be applied everywhere from finance to human resources.

If you want to take advantage of the great boom and strong demand around these companies, here are three cloud computing stocks to buy quickly that you can consider adding to your portfolio.

Behind pharmaceutical and biotech companies there is a big figure responsible for providing them with cloud-based software solutions to streamline their entire operations: that big figure is Veeva Systems Inc (NYSE:VEEV).

Financially, VEEV is stable and always on the move. Its revenues speak for themselves, as they are on the rise, and its net income is growing consistently, which is reflected in its market performance.

One of the particularities that distinguishes this company is its capacity for innovation.

For example, their most recent release, the Veeva Compass Suite, is a comprehensive set of tools that gives healthcare companies a much deeper understanding of existing patient populations and a picture of healthcare provider behaviors.

It's practically like giving you a complete and specific picture of the entire healthcare network landscape.

On top of that, they make a real impact on the lives of patients, as their training solutions are helping many companies modernize their employee qualification processes.

Next on the list of companies involved in the cloud computing sector is Workday Inc (NASDAQ:WDAY), which specializes in providing companies with cloud-based enterprise applications for financial management and human resources.

They provide practical software-based solutions that allow companies to streamline their processes in managing their financial operations and human talent.

One of the things that makes this company attractive is its strong financial performance: in its last financial quarter, revenue increased by 16.7% compared to the same period of the previous year, translating to $1.87 billion.

Among its most important metrics is subscription revenue, which grew even faster than total revenue, rising 18.1% to approximately $1.69 billion.

In addition to these incredible numbers, they are making important strategic alliances, where they have partnered with McLaren Racing to provide them with innovative solutions.

This partnership demonstrates the versatility of Workday, as they not only provide business solutions in traditional sectors, but also participate in highly competitive industries.

And to close the list of these companies completely necessary in our day to day, we have the giant Oracle Corporation (NYSE:ORCL), a technology company completely recognized worldwide.

This company specializes entirely in data management solutions and of course in cloud computing. One of its main commitments is to help organizations improve their efficiency and optimize their operations through completely innovative technological solutions.

Financially, this company is in a phase of solid growth specifically in its total revenue and in its cloud division.

One of the stars of this company is its cloud application suite, which has gained a strong foothold in the healthcare sector.

Large and important institutions such as Baptist Health Care and the University of Chicago Medicine are adopting the solutions provided by this company to improve the experience of their employees and, of course, the care of their patients.

In addition, they are expanding their global presence with the grand opening of a new cloud region in Nairobi, Kenya. This major expansion makes clear their important commitment to economic and technological development in the greater African continent.

Oracle Cloud Infrastructure's (OCI) unique infrastructure gives it the opportunity and advantage to offer governments and businesses the chance to drive innovation and growth in the region.

As of this writing, Gabriel Osorio-Mazzilli did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Gabriel Osorio is a former Goldman Sachs and Citigroup employee. He possesses discipline in bottom-up value investing and volatility-based long/short equities trading.

Read more here:

Get Rich Quick With These 3 Cloud Computing Stocks to Buy Now - InvestorPlace

Leveraging Cloud Computing and Data Analytics for Businesses – Analytics Insight

In today's dynamic business landscape, organizations are constantly seeking innovative ways to drive efficiency, agility, and value. Among the transformative technologies reshaping business operations, cloud computing and data analytics stand out as powerful tools that, when leveraged effectively, can yield significant business value. By integrating these technologies strategically, businesses can unlock new opportunities for growth, streamline operations, and gain a competitive edge in the market.

Cloud computing offers organizations the flexibility to access computing resources on-demand, without the need for substantial investments in hardware and software infrastructure. This agility enables businesses to scale their operations rapidly in response to changing market demands, without the constraints of traditional IT environments. By migrating workloads to the cloud, organizations can streamline their operations, reduce downtime, and optimize resource utilization, leading to improved efficiency across the board.

In today's data-driven world, businesses are sitting on a goldmine of valuable information. Data analytics empowers organizations to extract actionable insights from vast volumes of data, enabling informed decision-making and driving business value. By leveraging advanced analytics techniques, such as machine learning and predictive modeling, businesses can identify trends, anticipate customer needs, and optimize processes for maximum efficiency. Furthermore, effective data governance and quality assurance practices ensure that insights derived from data analytics are accurate, reliable, and actionable.

Cloud FinOps, a practice focused on optimizing cloud spending and maximizing business value, plays a crucial role in ensuring that cloud investments deliver tangible returns. By tracking key performance indicators (KPIs) and measuring the business impact of cloud transformations, organizations can quantify the value derived from their cloud investments. Cloud FinOps goes beyond cost savings to encompass broader metrics such as improved resiliency, innovation, and operational efficiency, providing a comprehensive view of the business value generated by cloud initiatives.

Cloud computing infrastructure provides organizations with the foundation they need to harness the power of data analytics at scale. By leveraging cloud-based platforms for big data processing and analytics, organizations can access virtually unlimited computing resources, enabling them to analyze large datasets quickly and efficiently. Additionally, cloud infrastructure offers built-in features for data protection, disaster recovery, and security, ensuring that sensitive information remains safe and secure at all times. Furthermore, the pay-as-you-go pricing model of cloud services allows organizations to optimize costs and maximize ROI on their infrastructure investments.

Cloud computing accelerates the pace of software development by providing developers with access to scalable resources and flexible development environments. By leveraging cloud-based tools and platforms, organizations can streamline the software development lifecycle, reduce time-to-market, and improve collaboration among development teams. Furthermore, cloud-based development environments enable developers to experiment with new ideas and technologies without the constraints of traditional IT infrastructure, fostering innovation and driving business growth.

In conclusion, cloud computing and data analytics represent powerful tools for driving business value in today's digital economy. By embracing these technologies and implementing sound strategies for their deployment, organizations can unlock new opportunities for growth, enhance operational efficiency, and gain a competitive edge in the market. With the right approach, cloud computing and data analytics can serve as catalysts for innovation and transformation, enabling businesses to thrive in an increasingly data-driven world.

Go here to read the rest:

Leveraging Cloud Computing and Data Analytics for Businesses - Analytics Insight

Cloud-Computing in the Post-Serverless Era: Current Trends and Beyond – InfoQ.com

Key Takeaways

[Note: The opinions and predictions in this article are those of the author and not of InfoQ.]

As AWS Lambda approaches its 10th anniversary this year, serverless computing expands beyond just Function as a Service (FaaS). Today, serverless describes cloud services that require no manual provisioning, offer on-demand auto-scaling, and use consumption-based pricing. This shift is part of a broader evolution in cloud computing, with serverless technology continuously transforming. This article focuses on the future beyond serverless, exploring how the cloud landscape will evolve beyond current hyperscaler models and its impact on developers and operations teams. I will examine the top three trends shaping this evolution.

In software development, a "module" or "component" typically refers to a self-contained unit of software that performs a cohesive set of actions. This concept corresponds elegantly to the microservice architecture that typically runs on long-running compute services such as Virtual Machines (VMs) or a container service. AWS EC2, one of the first widely accessible cloud computing services, offered scalable VMs. Introducing such scalable, accessible cloud resources provided the infrastructure necessary for microservices architecture to become practical and widespread. This shift led to decomposing monolithic applications into independently deployable microservice units.

Let's continue with this analogy of software units. A function is a block of code that encapsulates a sequence of statements performing a single task with defined input and output. This unit of code nicely corresponds to the FaaS execution model. The concept of FaaS executing code in response to events without the need to manage infrastructure existed before AWS Lambda but lacked broad implementation and recognition.

The concept of FaaS, which involves executing code in response to events without the need for managing infrastructure, was already suggested by services like Google App Engine, Azure WebJobs, IronWorker, and AWS Elastic Beanstalk before AWS Lambda brought it into the mainstream. Lambda, emerging as the first major commercial implementation of FaaS, acted as a catalyst for its popularity by easing the deployment process for developers. This advancement led to the transformation of microservices into smaller, individually scalable, event-driven operations.

In the evolution toward smaller software units offered as a service, one might wonder if we will see basic programming elements like expressions or statements as a service (such as int x = a + b;). The progression, however, steers away from this path. Instead, we are witnessing the minimization and eventual replacement of functions by configurable cloud constructs. Constructs in software development, encompassing elements like conditionals (if-else, switch statements), loops (for, while), exception handling (try-catch-finally), or user-defined data structures, are instrumental in controlling program flow or managing complex data types. In cloud services, constructs align with capabilities that enable the composition of distributed applications, interlinking software modules such as microservices and functions, and managing data flow between them.

Cloud construct replacing functions, replacing microservices, replacing monolithic applications

While you might have previously used a function to filter, route, batch, split events, or call another cloud service or function, now these operations and more can be done with less code in your functions, or in many cases with no function code at all. They can be replaced by configurable cloud constructs that are part of the cloud services. Let's look at a few concrete examples from AWS to demonstrate this transition from Lambda function code to cloud constructs:

These are just a few examples of application code constructs becoming serverless cloud constructs. Rather than validating input values in a function with if-else logic, you can validate the inputs through configuration. Rather than routing events with a case or switch statement to invoke other code from within a function, you can define routing logic declaratively outside the function. Events can be triggered from data sources on data change, batched, or split without a repetition construct, such as a for or while loop.

Events can be validated, transformed, batched, routed, filtered, and enriched without a function. Failures can be handled and directed to DLQs and back without try-catch code, and successful completions can be directed to other functions and service endpoints. Moving these constructs from application code into construct configuration reduces application code size or removes it entirely, eliminating the need for security patching and any maintenance.
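As a hedged illustration of routing and filtering expressed as configuration rather than function code, here is a minimal EventBridge rule sketch using boto3; the event source, rule name, and target ARN are hypothetical, and a real deployment would also need the appropriate permissions.

```python
import json
import boto3

events = boto3.client("events")

# Declarative routing/filtering: only "created" orders over $100 reach the
# target function - no if/else or switch statement inside the function itself.
event_pattern = {
    "source": ["acme.orders"],                      # hypothetical event source
    "detail": {
        "status": ["created"],
        "total": [{"numeric": [">", 100]}],
    },
}

events.put_rule(
    Name="large-new-orders",
    EventBusName="default",
    EventPattern=json.dumps(event_pattern),
)
events.put_targets(
    Rule="large-new-orders",
    EventBusName="default",
    Targets=[{"Id": "fulfillment",
              "Arn": "arn:aws:lambda:eu-west-1:123456789012:function:fulfill-order"}],
)
```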

A primitive and a construct in programming have distinct meanings and roles. A primitive is a basic data type inherently part of a programming language. It embodies a basic value, such as an integer, float, boolean, or character, and does not comprise other types. Mirroring this concept, the cloud - just like a giant programming runtime - is evolving from infrastructure primitives like network load balancers, virtual machines, file storage, and databases to more refined and configurable cloud constructs.

Like programming constructs, these cloud constructs orchestrate distributed application interactions and manage complex data flows. However, these constructs are not isolated cloud services; there isn't a standalone "filtering as a service" or "event emitter as a service." There are no "Constructs as a Service," but they are increasingly essential features of core cloud primitives such as gateways, data stores, message brokers, and function runtimes.

This evolution reduces application code complexity and, in many cases, eliminates the need for custom functions. This shift from FaaS to NoFaaS (no fuss, implying simplicity) is just beginning, with insightful talks and code examples on GitHub. Next, I will explore the emergence of construct-rich cloud services within vertical multi-cloud services.

In the post-serverless cloud era, it's no longer enough to offer highly scalable cloud primitives like compute for containers and functions, or storage services such as key/value stores, event stores, relational databases, or networking primitives like load balancers. Post-serverless cloud services must be rich in developer constructs and offload much of the application plumbing. This goes beyond hyperscaling a generic cloud service for a broad user base; it involves deep specialization and exposing advanced constructs to more demanding users.

Hyperscalers like AWS, Azure, GCP, and others, with their vast range of services and extensive user bases, are well-positioned to identify new user needs and constructs. However, providing these more granular developer constructs results in increased complexity. Each new construct in every service requires a deep learning curve with its specifics for effective utilization. Thus, in the post-serverless era, we will observe the rise of vertical multi-cloud services that excel in one area. This shift represents a move toward hyperspecialization of cloud services.

Consider Confluent Cloud as an example. While all major hyperscalers (AWS, Azure, GCP, etc.) offer Kafka services, none match the developer experience and constructs provided by Confluent Cloud. With its Kafka brokers, numerous Kafka connectors, integrated schema registry, Flink processing, data governance, tracing, and message browser, Confluent Cloud delivers the most construct-rich and specialized Kafka service, surpassing what hyperscalers offer.

This trend is not isolated; numerous examples include MongoDB Atlas versus DocumentDB, GitLab versus CodeCommit, DataBricks versus EMR, RedisLabs versus ElasticCache, etc. Beyond established cloud companies, a new wave of startups is emerging, focusing on a single multi-cloud primitive (like specialized compute, storage, networking, build-pipeline, monitoring, etc.) and enriching it with developer constructs to offer a unique value proposition. Here are some cloud services hyperspecializing in a single open-source technology, aiming to provide a construct-rich experience and attract users away from hyperscalers:

This list represents a fraction of a growing ecosystem of hyperspecialized vertical multi-cloud services built atop core cloud primitives offered by hyperscalers. They compete by providing a comprehensive set of programmable constructs and an enhanced developer experience.

Serverless cloud services hyperspecializing in one thing with rich developer constructs

Once this transition is completed, bare-bones cloud services without rich constructs, even serverless ones, will seem like outdated on-premise software. A storage service must stream changes like DynamoDB; a message broker should include EventBridge-like constructs for event-driven routing, filtering, and endpoint invocation with retries and DLQs; a pub/sub system should offer message batching, splitting, filtering, transforming, and enriching.

Ultimately, while hyperscalers expand horizontally with an increasing array of services, hyperspecializers grow vertically, offering a single, best-in-class service enriched with constructs, forming an ecosystem of vertical multi-cloud services. The future of cloud service competition will pivot from infrastructure primitives to a duo of core cloud primitives and developer-centric constructs.

Cloud constructs increasingly blur the boundaries between application and infrastructure responsibilities. The next evolution is the "shift left" of cloud automation, integrating application and automation code in both tools and responsibilities. Let's examine how this transition is unfolding.

The first generation of cloud infrastructure management was defined by Infrastructure as Code (IaC), a pattern that emerged to simplify the provisioning and management of infrastructure. This approach is built on the trends set by the commoditization of virtualization in cloud computing.

The initial IaC tools introduced new domain-specific languages (DSLs) dedicated to creating, configuring, and managing cloud resources in a repeatable manner. Tools like Chef, Ansible, Puppet, and Terraform led this phase. Leveraging declarative languages, these tools allowed operations teams to define the infrastructure's desired state in code, abstracting underlying complexities.

However, as the cloud landscape transitions from low-level coarse-grained infrastructure to more developer-centric programmable finer-grained constructs, a trend toward using existing general-purpose programming languages for defining these constructs is emerging. New entrants like Pulumi and the AWS Cloud Development Kit (CDK) are at the forefront of this wave, supporting languages such as TypeScript, Python, C#, Go, and Java.

The shift to general-purpose languages is driven by the need to overcome the limitations of declarative languages, which lack expressiveness and flexibility for programmatically defining cloud constructs, and by the shift-left of configuring cloud constructs responsibilities from operations to developers. Unlike the static nature of declarative languages suited for low-level static infrastructure, general-purpose languages enable developers to define dynamic, logic-driven cloud constructs, achieving a closer alignment with application code.
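The difference is easiest to see in a small example. The sketch below is a minimal Pulumi program in TypeScript (the bucket names, environment list, and retention policy are assumptions for illustration); the per-environment logic that would be clumsy in a declarative DSL is expressed with ordinary language constructs.

```typescript
// A minimal sketch, assuming Pulumi's AWS provider; names and policies are
// illustrative. Loops and conditionals are plain TypeScript here.
import * as aws from '@pulumi/aws';

const environments = ['dev', 'staging', 'prod'];

export const bucketNames = environments.map((env) => {
  const bucket = new aws.s3.Bucket(`artifacts-${env}`, {
    // Only production data is retained long-term; other environments
    // expire objects after 30 days -- a logic-driven decision in code.
    lifecycleRules: env === 'prod'
      ? []
      : [{ enabled: true, expiration: { days: 30 } }],
    versioning: { enabled: env === 'prod' },
  });
  return bucket.id;
});
```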

Shifting-left of application composition from infrastructure to developer teams

Post-serverless cloud developers need not only to implement business logic by creating functions and microservices, but also to compose them together using programmable cloud constructs. This broadens developer responsibilities to both developing and composing cloud applications. For example, business logic in a Lambda function would also need routing, filtering, and request-transformation configuration in API Gateway.

Another Lambda function may need a DynamoDB streaming configuration to stream specific data changes, along with EventBridge routing, filtering, and enrichment configurations.

A third application may have most of its orchestration logic expressed as a Step Functions state machine, where the Lambda code is only a small task. A developer, not a platform engineer or Ops member, can compose these units of code together. Tools such as Pulumi, the AWS CDK, and others that let a developer implement a function in the language of their choice and use that same language to compose its interactions with the cloud environment are best suited for this era.
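A minimal sketch of that composition, assuming AWS CDK v2 and hypothetical asset paths and names, might look like this: the same TypeScript file defines the functions and wires one behind an API Gateway route and the other to a filtered DynamoDB stream.

```typescript
// A hedged sketch (AWS CDK v2, hypothetical names and asset paths): the
// developer writes the functions and, in the same language, composes their
// interactions with the surrounding cloud constructs.
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { DynamoEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import { Construct } from 'constructs';

export class OrdersServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const ordersTable = new dynamodb.Table(this, 'Orders', {
      partitionKey: { name: 'orderId', type: dynamodb.AttributeType.STRING },
      stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
    });

    const apiHandler = new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/api'),
    });

    const streamHandler = new lambda.Function(this, 'StreamHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/stream'),
    });

    // Composition, not just business logic: an HTTP API in front of one
    // function, and a filtered change stream feeding the other.
    new apigw.LambdaRestApi(this, 'OrdersApi', { handler: apiHandler });

    streamHandler.addEventSource(new DynamoEventSource(ordersTable, {
      startingPosition: lambda.StartingPosition.LATEST,
      filters: [lambda.FilterCriteria.filter({
        eventName: lambda.FilterRule.isEqual('INSERT'),
      })],
    }));
  }
}
```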

Platform teams can still use declarative languages, such as Terraform, to govern, secure, monitor, and enable teams in cloud environments, but developer-focused constructs, combined with developer-focused cloud automation languages, will shift cloud constructs to the left and make developer self-service in the cloud a reality.

The transition from DSL to general-purpose languages marks a significant milestone in the evolution of IaC. It acknowledges the transition of application code into cloud constructs, which often require a deeper developer control of the resources for application needs. This shift represents a maturation of IaC tools, which now need to cater to a broader spectrum of infrastructure orchestration needs, paving the way for more sophisticated, higher-level abstractions and tools.

The journey of infrastructure management is shifting from static configurations to a more dynamic, code-driven approach. This evolution hasn't stopped at Infrastructure as Code; it is transcending into a more nuanced realm known as Composition as Code. This paradigm further blurs the lines between application code and infrastructure, leading to more streamlined, efficient, and developer-friendly practices.

In summarizing the trends and their reinforcing effects, we're observing an increasing integration of programming constructs into cloud services. Every compute service will integrate CI/CD pipelines; databases will provide HTTP access from the edge and emit change events; message brokers will enhance capabilities with filtering, routing, idempotency, transformations, DLQs, etc.

Infrastructure services are evolving into serverless APIs, infrastructure inferred from code (IfC), framework-defined infrastructure, or explicitly composed by developers (CaC). This evolution leads to smaller functions and sometimes to NoFaaS pattern, paving the way for hyperspecialized, developer-first vertical multi-cloud services. These services will offer infrastructure as programmable APIs, enabling developers to seamlessly merge their applications using their preferred programming language.

The shift-left of application composition using cloud services will increasingly blend with application programming, transforming microservices from an architectural style to an organizational one. A microservice will no longer be just a single deployment unit or process boundary but a composition of functions, containers, and cloud constructs, all implemented and glued together in a single language chosen by the developer. The future is shaping to be hyperspecialized and focused on the developer-first cloud.

Follow this link:

Cloud-Computing in the Post-Serverless Era: Current Trends and Beyond - InfoQ.com

Cloud Computing Security Start with a ‘North Star’ – ITPro Today

Cloud computing has followed a similar journey to other introductions of popular technology: Adopt first, secure later. Cloud transformation has largely been enabled by IT functions at the request of the business, with security functions often taking a backseat. In some organizations, this has been due to politics and blind faith in the cloud service providers (CSPs), e.g., AWS, Microsoft, and GCP.

In others, it has been because security functions only knew and understood on-premises deployments and simply didn't have the knowledge and capability to securely adapt to cloud or hybrid architectures and translate policies and processes to the cloud. For lucky organizations, this has only led to stalled migrations while the security and IT organizations played catch up. For unlucky organizations, this has led to breaches, business disruption, and loss of data.


Cloud security can be complex. However, more often than not, it is ridiculously simple: the misconfigured S3 bucket is a prime example. It reached a point where malefactors could simply look for misconfigured S3 buckets to steal data; no need to launch an actual attack.

It's time for organizations to take a step back and improve cloud security, and the best way to do this is to put security at the core of cloud transformations, rather than adopting the technology first and asking security questions later. Here are four steps to course-correct and implement a security-centric cloud strategy:


For multi-cloud users, there is one other aspect of cloud security to consider. Most CSPs are separate businesses, and their services don't work with other CSPs. So, rather than functioning like internet service providers (ISPs), where one provider lets you access the entire internet rather than just the sites the ISP owns, CSPs operate in silos, with limited interoperability with their counterparts (e.g., AWS can't manage Azure workloads, security, and services, and vice versa). This is problematic for customers because, once more than one cloud provider is added to the infrastructure, the efficacy of managing cloud operations and cloud security starts to diminish rapidly. Each time another CSP is added to an organization's environment, its attack surface grows exponentially unless secured appropriately.

It's up to each company to take steps to become more secure in multi-cloud environments. In addition to developing and executing a strong security strategy, they also must consider using third-party applications and platforms such as cloud-native application protection platforms (CNAPPs), cloud security posture management (CSPM), infrastructure as code (IaC), and secrets management to provide the connective tissue between CSPs in hybrid or multi-cloud environments. Taking this vital step will increase security visibility, posture management, and operational efficiency to ensure the security and business results outlined at the start of the cloud security journey.

It should be noted that a cloud security strategy, like any other form of security, needs to be a "living" plan. The threat landscape and business needs change so fast that what is helpful today may not be helpful tomorrow. To stay in step with your organization's desired state of security, periodically revisit cloud security strategies to understand whether they are delivering the desired benefits, and make adjustments when they are not.

Cloud computing has transformed organizations of all types. Adopting a strategy for securing this new environment will not only allow security to catch up to technology adoption, it will also dramatically improve the ROI of cloud computing.

Ed Lewis is Secure Cloud Transformation Leader at Optiv.

Read this article:

Cloud Computing Security Start with a 'North Star' - ITPro Today

Global $83.7 Bn Cloud Computing Management and Optimization Market to 2030 with IT and Telecommunications … – PR Newswire

DUBLIN, Jan. 23, 2024 /PRNewswire/ -- The "Global Cloud Computing Management and Optimization Market 2023 - 2030 by Types, Applications - Partner & Customer Ecosystem Competitive Index & Regional Footprints" report has been added to ResearchAndMarkets.com's offering.

The Cloud Computing Management and Optimization Market size is estimated to grow from USD 17.6 Billion in 2022 to reach USD 83.7 Billion by 2030, growing at a CAGR of 21.7% during the forecast period from 2023 to 2030.
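For readers who want to sanity-check the headline numbers, the implied growth rate follows from the standard CAGR formula (here using the 2022 base figure and an eight-year horizon to 2030):

```latex
\text{CAGR} = \left(\frac{V_{2030}}{V_{2022}}\right)^{1/8} - 1
            = \left(\frac{83.7}{17.6}\right)^{1/8} - 1 \approx 0.215
```

That works out to roughly 21.5% per year, broadly in line with the 21.7% CAGR quoted for the 2023-2030 forecast period.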

The Adoption of Cloud-Based Solutions Is Driving Cloud Computing Management and Optimization Market Growth

Businesses are migrating their operations to cloud-based ecosystems because they offer a number of benefits, such as scalability, flexibility, and cost savings. A growing number of companies, including SMEs and large-scale enterprises, are adopting cloud computing, which will lead to an increase in demand for cloud computing management and optimization solutions.

Cloud computing environments are becoming increasingly complex as businesses adopt a variety of cloud services from different providers. This complexity can make it difficult for businesses to manage their cloud costs and performance. Cloud computing management and optimization solutions can help businesses simplify their cloud environments and optimize their costs and performance. Cloud computing can be a cost-effective way for businesses to provision IT resources.

However, businesses can still incur significant costs if they do not manage their cloud usage effectively. Cloud computing management and optimization solutions can help businesses to track their cloud usage and identify opportunities to optimize their costs. The cloud computing industry is constantly evolving, with the emergence of new technologies, such as artificial intelligence and machine learning. These new technologies can be used to improve the efficiency and effectiveness of cloud computing management and optimization solutions.

The IT and Telecommunications industries hold the highest market share in the Cloud Computing Management and Optimization Market

The IT and Telecommunications industries hold the highest market share in the Cloud Computing Management and Optimization Market in 2022, due to their intrinsic reliance on advanced technology solutions and their pivotal role in driving digital transformation across various sectors. In the IT industry, cloud computing has become a cornerstone for delivering software, platforms, and infrastructure services, enabling organizations to enhance agility, scalability, and operational efficiency.

As IT companies transition their operations to the cloud, the need for effective management and optimization of cloud resources becomes paramount to ensure optimal performance, cost control, and resource allocation. Cloud management and optimization solutions enable IT enterprises to streamline provisioning, monitor workloads, automate processes, and maintain stringent security protocols.

Furthermore, the Telecommunications sector has embraced cloud computing to modernize and expand its network infrastructure, offer innovative communication services, and adapt to the demands of an interconnected world. Cloud-based solutions empower telecom companies to efficiently manage network resources, deliver seamless customer experiences, and explore new revenue streams.

In this context, cloud computing management and optimization are essential for maintaining network reliability, ensuring data privacy, and dynamically scaling resources to meet fluctuating demand. The complex and dynamic nature of both IT and Telecommunications operations necessitates sophisticated tools and strategies for cloud resource management, making these industries prime contributors to the Cloud Computing Management and Optimization Market.

Regional Insight: North America dominated the Cloud Computing Management and Optimization Market during the forecast period.

North America dominated the Cloud Computing Management and Optimization Market during the forecast period. The United States and Canada, at the forefront of technological development, have continuously adopted cloud computing, strengthening North America's position as market leader. The strong presence of major companies like Adobe, Salesforce, Oracle, AWS, Google, and IBM across the region's wide geography provides a foundation for this rise. With their cutting-edge solutions, these major players make a significant impact on adoption and innovation.

The region's commitment to technical advancement also serves as another indication of its dominance. Continuous improvements in a number of technologies are transforming the cloud computing industry, and North America is recognized as a hub for important developments.

As a result, organizations and enterprises in North America are pushed to the forefront of cloud optimization and administration, utilizing the full range of technologies and expertise provided by both local and international industry experts. Strong vendor presence, widespread acceptance, and constant technological innovation position North America to capture the highest market share during the forecast period.

Major Classifications are as follows:

Cloud Computing Management and Optimization Market, By Type of Solutions

Cloud Computing Management and Optimization Market, By Deployment Models

Cloud Computing Management and Optimization Market, By Organization Size

Cloud Computing Management and Optimization Market, By Cloud Service Models

Cloud Computing Management and Optimization Market, By Technologies

Cloud Computing Management and Optimization Market, By Industries

Cloud Computing Management and Optimization Market, By Geography

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/bx3846

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Media Contact:

Research and Markets Laura Wood, Senior Manager [emailprotected] For E.S.T Office Hours Call +1-917-300-0470 For U.S./CAN Toll Free Call +1-800-526-8630 For GMT Office Hours Call +353-1-416-8900 U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

Logo: https://mma.prnewswire.com/media/539438/Research_and_Markets_Logo.jpg

SOURCE Research and Markets

Read this article:

Global $83.7 Bn Cloud Computing Management and Optimization Market to 2030 with IT and Telecommunications ... - PR Newswire

The Future of Cloud Computing in Business Operations – Data Science Central

The digital era has witnessed the remarkable evolution of cloud computing, transforming it into a cornerstone of modern business operations. This technology, which began as a simple concept of centralized data storage, has now evolved into a complex and dynamic ecosystem, enabling businesses to operate more efficiently and effectively than ever before. The Future of Cloud Computing holds unparalleled potential, promising to revolutionize the way companies operate, innovate, and compete in the global market.

Cloud computing refers to the delivery of various services over the Internet, including data storage, servers, databases, networking, and software. Rather than owning their computing infrastructure or data centers, companies can rent access to anything from applications to storage from a cloud service provider.

Cloud computing has revolutionized the way businesses operate, offering a plethora of advantages that enhance efficiency, flexibility, and scalability. In this discussion, we'll delve into the key benefits of cloud computing, explaining each in simple terms and underlining their significance in today's business landscape.

Cloud computing significantly cuts down on the capital cost associated with purchasing hardware and software, especially in sectors like the healthcare industry. It's an economical alternative to owning and maintaining extensive IT infrastructure, allowing businesses, including those in the healthcare sector, to save on setup and maintenance costs. This aspect is particularly beneficial for cloud computing in the healthcare industry, where resources can instead be allocated toward patient care and medical research.

The ability to scale resources elastically with cloud computing is akin to having a flexible and adaptable IT infrastructure. Businesses can efficiently scale up or down their IT resources based on current demand, ensuring optimal utilization and avoiding wastage.

Cloud services are hosted on a network of secure, high-performance data centers globally, offering superior performance over traditional single corporate data centers. This global network ensures reduced latency, better application performance, and economies of scale.

Cloud computing facilitates a swift and agile business environment. Companies can quickly roll out new applications or resources, empowering them to respond swiftly to market changes and opportunities.

The efficiency and speed offered by cloud computing translate into enhanced productivity. Reduced network latency ensures applications and services run smoothly, enabling teams to achieve more in less time.

Cloud computing enhances collaboration by enabling team members to share and work on data and files simultaneously from any location. This virtual collaboration space is crucial for businesses with remote teams and global operations.

Here, we explore the transformative role of cloud computing in business, focusing on 7 key points that forecast its future impact and potential in streamlining and innovating operational landscapes.

In the Future of Cloud Computing, handling enormous amounts of data will become more critical than ever. Businesses of all sizes generate data at unprecedented rates. From customer interactions to transaction records, every piece of data is a potential goldmine of insights. Cloud computing steps in as the ideal solution to manage this surge efficiently.

Cloud storage provides a scalable and flexible way to store and access vast datasets. As we move forward, cloud providers will likely offer more tailored storage solutions, catering to different business needs. Whether it's for high-frequency access or long-term archiving, cloud storage can adapt to various requirements.

Another significant aspect of data management in the Future of Cloud Computing is real-time data processing. Businesses will rely on cloud computing not just for storage, but also for the immediate processing and analysis of data. This capability allows for quicker decision-making, a crucial factor in maintaining a competitive edge.

One of the most transformative impacts of cloud computing is its ability to transcend geographical boundaries. In the Future of Cloud Computing, remote and global teams can collaborate as if they were in the same room. Cloud-based tools and platforms allow team members from different parts of the world to work on projects simultaneously, share files instantaneously, and communicate in real-time.

In the Future of Cloud Computing, we can expect a rise in virtual workspaces. These digital environments simulate physical offices, providing a space where remote workers can feel connected and engaged. They offer features like virtual meeting rooms, shared digital whiteboards, and social areas, replicating the office experience in a digital realm.

Cloud computing does more than just streamline operations; it also opens doors to innovation. With cloud resources, businesses can experiment with new ideas without significant upfront investment in infrastructure. This flexibility encourages creativity and risk-taking, which are essential for innovation.

Cloud computing accelerates the product development cycle. Teams can quickly set up and dismantle test environments, prototype more efficiently, and bring products to market faster. This agility gives businesses a significant advantage in rapidly evolving markets.

The landscape of cloud computing is rapidly evolving, with new trends constantly emerging to redefine how businesses leverage this technology. In the context of the future of cloud computing, 3 key trends stand out for their potential to significantly shape the industry. Understanding these trends is crucial for businesses looking to stay competitive and innovative.

Artificial Intelligence (AI) and Machine Learning (ML) are becoming increasingly integral to cloud computing. This integration is revolutionizing how cloud services are delivered and utilized. AI algorithms are enhancing the efficiency of cloud platforms, offering smarter data analytics, automating routine tasks, and providing more personalized user experiences. For instance, cloud-based AI services can analyze vast amounts of data to predict market trends, customer behavior, or potential system failures, offering invaluable insights for businesses.

This integration not only boosts the performance and scalability of cloud solutions but also opens up new avenues for innovation across various sectors.

As cloud computing becomes more prevalent, the focus on security and compliance is intensifying. The increasing frequency and sophistication of cyber threats make robust cloud security a top priority for businesses. In response, cloud service providers are investing heavily in advanced security measures, such as enhanced encryption techniques, identity and access management (IAM), and AI-powered threat detection systems.

Furthermore, with regulations like GDPR and CCPA in place, compliance has become a critical aspect of cloud services. The future of cloud computing will likely witness a surge in cloud solutions that are not only secure but also compliant with various global and industry-specific regulations. This trend ensures that businesses can confidently and safely leverage the cloud while adhering to legal and ethical standards.

Sustainability is a growing concern in the tech world, and cloud computing is no exception. There is an increasing trend towards green cloud computing, focusing on reducing the environmental impact of cloud services. This involves optimizing data centers for energy efficiency, using renewable energy sources, and implementing more sustainable operational practices.

The future of cloud computing will likely see a stronger emphasis on sustainability as businesses and consumers become more environmentally conscious. Cloud providers who prioritize and implement eco-friendly practices will not only contribute to a healthier planet but also appeal to a growing segment of environmentally aware customers.

The future of cloud computing is bright and offers a plethora of opportunities for businesses to grow and evolve. By staying informed and adapting to these changes, companies can leverage cloud computing to gain a competitive edge in the market.

Remember, the future of cloud computing isn't just about technology; it's about how businesses can harness this technology to drive innovation, efficiency, and growth.

For businesses aiming to thrive in the ever-changing digital world, embracing the advancements in cloud computing is not just a choice but a necessity. Staying updated and adaptable will be key to harnessing the power of cloud computing for business success in the years to come.

Originally posted here:

The Future of Cloud Computing in Business Operations - Data Science Central

AWS to invest $15bn in cloud computing in Japan – DatacenterDynamics

Amazon Web Services (AWS) is planning to invest 2.26 trillion yen ($15.24 billion) in expanding its cloud computing infrastructure in Japan by 2027.

As part of this investment, the company will seek to expand its data center facilities in Tokyo and Osaka.

The cloud giant previously invested 1.51 trillion yen (~$10.2bn) in the country between 2011 and 2022, which works out at just under $1bn per year. The new announcement will see this increase to more than $5bn a year for the next three years.
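As a rough check, assuming the earlier spend covers the twelve years from 2011 to 2022 and the new commitment is spread over roughly three years:

```latex
\frac{\$10.2\ \text{bn}}{12\ \text{years}} \approx \$0.85\ \text{bn/year},
\qquad
\frac{\$15.24\ \text{bn}}{3\ \text{years}} \approx \$5.1\ \text{bn/year}.
```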

"The adoption of digital technology has become a source of a countrys competitiveness, said Takuya Hirai, former digital minister and current chair of headquarters for the promotion of a digital society in Japans Liberal Democratic Party.

"The development of digital infrastructure in Japan is key to strengthening the country's industrial competitiveness, and data centers play an important role to this end. It promotes the use of important technologies such as AI [artificial intelligence] and improves the capabilities of research and development in Japan."

The digital infrastructure in the country is also the backbone of AWS' artificial intelligence solutions. AWS provides generative AI services to Japanese customers including Asahi Group, Marubeni, and Nomura Holdings.

AWS first entered Japan in 2009. The company launched its first cloud region in the country in 2011 in Tokyo, and another in Osaka in 2021.

Amazon's Bedrock AI offering was made available in Tokyo in October 2023. The company also invested $100m in a generative AI innovation center in June 2023.

It is currently estimated that the latest investment will contribute 5.57 trillion yen (~$37.6bn) to Japan's GDP and support an average of 30,500 full-time jobs in Japanese businesses each year.

Japan's government is seeking to catch up in AI development. Prime Minister Fumio Kishida has met with the heads of OpenAI and Nvidia in the past year to discuss AI regulation and infrastructure.

In December 2023, Minister Ken Saito announced the government would double down on its pledge to support the domestic chip manufacturing industry.

Follow this link:

AWS to invest $15bn in cloud computing in Japan - DatacenterDynamics

Why is Application Mapping Important in Cloud Computing? – Techopedia


Original post:

Why is Application Mapping Important in Cloud Computing? - Techopedia

Amazon’s AWS to invest $15 billion to expand cloud computing in Japan – Yahoo! Voices

TOKYO (Reuters) - Amazon Web Services (AWS) said on Friday it plans to invest 2.26 trillion yen ($15.24 billion) in Japan by 2027 to expand cloud computing infrastructure that serves as a backbone for artificial intelligence (AI) services.

The Amazon.com unit is spending to expand facilities in the metropolises of Tokyo and Osaka to meet growing customer demand, it said in a statement.

That comes on top of 1.51 trillion yen spent from 2011 to 2022 to build up cloud capacity in Japan, AWS said. The company offers generative AI services to Japanese corporate customers including Asahi Group, Marubeni and Nomura Holdings, it said.

The investment comes as Japan's government and corporate sector race to catch up in AI development. Prime Minister Fumio Kishida met with the heads of ChatGPT creator OpenAI and advanced chipmaker Nvidia in the past year to discuss AI regulation and infrastructure.

($1 = 148.2700 yen)

(This story has been refiled to add dropped words 'creator OpenAI' after 'ChatGPT', in paragraph 4)

(Reporting by Rocky Swift; Editing by Muralikumar Anantharaman and Christopher Cushing)

Read the original post:

Amazon's AWS to invest $15 billion to expand cloud computing in Japan - Yahoo! Voices

Amazon’s AWS to invest $15 bln to expand cloud computing in Japan – Marketscreener.com

TOKYO, Jan 19 (Reuters) - Amazon Web Services (AWS) said on Friday it plans to invest 2.26 trillion yen ($15.24 billion) in Japan by 2027 to expand its cloud computing infrastructure.

The Amazon.com unit is spending to expand facilities in the metropolises of Tokyo and Osaka to meet growing customer demand, it said in a statement.

That comes on top of 1.51 trillion yen spent from 2011 to 2022 to build up cloud capacity in Japan, AWS said. ($1 = 148.2700 yen) (Reporting by Rocky Swift; Editing by Muralikumar Anantharaman)

Read the original:

Amazon's AWS to invest $15 bln to expand cloud computing in Japan - Marketscreener.com