
Category Archives: Cloud Computing

Cloud-Native Computing is Good for the Environment – Container Journal

Posted: October 15, 2022 at 4:58 pm

Building cloud-native applications offers many advantages for the modern enterprise, including reduced costs, improved efficiency, greater scalability, easier development and simplified support. But did you know that building cloud-native applications is also good for the environment?

Public cloud providers, such as Amazon Web Services, Microsoft and Google, have taken over data centers. These three public cloud providers account for over half of the world's largest data centers. This consolidation of data centers has enabled another, albeit lesser-known, advantage of cloud computing: the greening of the data center.

All three companies are driving toward data center sustainability and environmental responsibility, key driving forces in the massive build-out of data centers worldwide. AWS alone boasts that its infrastructure is 3.6 times more energy efficient than the median U.S. enterprise data center.

Why are data center companies going green in droves? Because it makes financial sense. Data centers can be located almost anywhere, so locating them near cheap and highly available sustainable energy sources (such as wind, water and solar) means the huge quantities of electricity that power data centers can be acquired more economically. Additionally, using greener energy sources provides huge public relations benefits to public cloud providers.

Therefore, it's not just the data centers themselves that are greener: operating applications in the public cloud requires less energy than operating them on-premises. Why does an application running in the cloud use less energy? There are several reasons. First, the ability to operate dynamic infrastructure in the public cloud means an application doesn't require numerous servers idling unused, waiting to handle peak application usage times; this reduces the resources required to run an application. Second, the cloud providers' dynamics of scale allow more intelligent load balancing of resources across a smaller footprint of physical servers. Finally, the centralization of numerous servers means that the economics of scale make using eco-friendly energy sources, such as wind and water, far more financially viable.
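The first reason above, avoiding a peak-provisioned fleet of idle servers, can be illustrated with a toy calculation. The load curve, per-server power draw and fleet sizes below are invented round numbers for illustration, not measured figures:

```python
# Toy comparison: energy used by a fixed, peak-provisioned server fleet
# vs. an autoscaled fleet that tracks demand hour by hour.
# All numbers are illustrative assumptions, not real measurements.

HOURLY_LOAD = [20, 15, 10, 10, 15, 30, 55, 80, 95, 100, 90, 85,
               80, 75, 70, 65, 60, 70, 85, 75, 55, 40, 30, 25]  # servers needed
KWH_PER_SERVER_HOUR = 0.5  # assumed average draw per active server

# On-premises: provision for the daily peak and run every server all day.
peak = max(HOURLY_LOAD)
on_prem_kwh = peak * len(HOURLY_LOAD) * KWH_PER_SERVER_HOUR

# Cloud autoscaling: only the servers actually needed each hour draw power.
cloud_kwh = sum(HOURLY_LOAD) * KWH_PER_SERVER_HOUR

savings = 1 - cloud_kwh / on_prem_kwh
print(f"on-prem: {on_prem_kwh:.0f} kWh/day, cloud: {cloud_kwh:.0f} kWh/day, "
      f"saving {savings:.0%}")
```

Even this crude model shows a double-digit energy saving whenever average load sits well below peak; real savings also depend on server utilization and cooling efficiency, which the sketch ignores.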

The overall result: A typical application can run using substantially less energy in the public cloud than an equivalent on-premises application. According to AWS, moving an application to the cloud can reduce your carbon emissions by 88%.

And the improvements will keep coming. As the major cloud providers continue to expand and innovate, their ability to leverage greener energy options will continue to grow. Google already boasts 100% usage of renewable energy for its data centers.

Plus, data centers, by their very nature, can be located almost anywhere, even underwater. The nature of communications technology means that the speed of communicating with an application in a data center is largely independent of its physical location. As such, data centers can be sited where inexpensive renewable energy is available, near giant wind farms, hydroelectric dams or large solar arrays. Project Natick, Microsoft's offshore renewable energy-powered data center experiment, is a great example of this. So-called dark data centers, which need little or no human contact, offer a great opportunity to use point-of-creation renewable energy sources efficiently and economically.

So, data centers take less energy; that's great. But how much of an impact does this actually make on worldwide energy usage?

The answer: quite a significant impact. According to some estimates, by 2030 more than 20% of all global electricity will go to information and communications usage. Already today, data centers account for 1% to 2% of all worldwide energy usage.

Data centers use considerable energy, and their centralized nature means we can apply eco-friendly strategies to reduce their energy usage. The result is a significant impact on worldwide energy usage. So, go ahead and build that cloud-native application. Use more and more cloud computing. After all, it's good for the environment, and your bottom line!


Five best practices to drive an effective cloud migration – ETCIO


By Samit Banerjee

For the enterprises of today and tomorrow, the road to success includes an inevitable pitstop: the cloud. Keeping up this pace of progress has become essential to earning a competitive edge in the COVID era, and the biggest risk an enterprise could take would be to slow its digital journey to a crawl. However, while making this transition, many service providers fail to critically map and align cloud journeys to their business strategies and current priorities, leading to failed cloud migrations. Moreover, if not adopted in the right manner, compliance risks could complicate issues further.

So, what are the best practices and potential risks while adopting the cloud at scale?

Making the Cloud a bright success: The best practices

Lightning strikes: The risks of ad-hoc cloud use

While cloud success at scale has positively transformed and redefined ways of working globally, cloud computing puts all the resources needed to develop, test and launch new applications and services only a few clicks away. For large enterprises, especially those in highly regulated industries, this is precisely what introduces the biggest risks. To keep cloud consumption in check and eliminate unwanted chaos, proper operational controls and processes can ring-fence enterprises from security and compliance issues. Here are some risks of ad-hoc cloud use:

The author is Division President, Amdocs Cloud Operations Services.

Disclaimer: The views expressed are solely those of the author, and ETCIO.com does not necessarily subscribe to them. ETCIO.com shall not be responsible for any damage caused to any person/organization directly or indirectly.


Dr Martens steps towards the cloud – Diginomica


(Image by Stefan Wiegand from Pixabay)

Dr Martens boots are trusted. Their owners know these iconic boots with their yellow stitching and opaque soles will always be comfortable, enduring and stylish. Having gone through an initial public offering (IPO) in 2021, and with market expansion plans, the IT at Dr Martens needed to be as trusted as the boots being crafted by the shoemakers. A cloud strategy, with managed service provision to keep the cybersecurity and data retention tightly laced, is keeping Dr Martens in step.

Dr Martens was listed on the London Stock Exchange in early 2021 following seven years of ownership by private equity business Permira. Under Permira, Dr Martens moved from being a business where manufacturing thinking led the organization, to a global brand with a CEO from the fashion industry.

The lessons and strategies of the Permira years remain in place and are directing the next steps in the technology direction. Dan Morgan, Global Head of Cloud & Infrastructure, says of the initial public offering (IPO):

That brings with it a host of responsibilities and requirements, particularly in cybersecurity. We have 158 stores in 60 countries and in 2021 revenue was 908 million.

Those responsibilities have led to IT expanding as a department and in importance to the bootmaker, Morgan says:

There are now as many people in IT as there were in the entire company when IT started 18 years ago. There are 4000 employees in total and 270 people in IT.

Way back when, we were a small manufacturing business with one factory and one office hanging off the back of that factory. Now we are a global retail business. So technology is only going to become more and more important.

Originally a family-owned business, run by its founders the Griggs family, Dr Martens launched its iconic boot in 1960, and ever since it has been a fashion item for punks and dependable footwear for tradespeople. The family sold a majority stake to Permira in 2014, which invested in the firm's global retail outlets and online stores, the latter increasing sales during the pandemic. All of this has made the technology estate and, in particular, the security of customer details a high priority.

The expansion of a network of physical and online stores led to Dr Martens investing in its technology estate and moving the role of technology to the center of the business. As Morgan says:

Because we are on an ambitious growth plan, then we needed a scalable stack and an operating model that would grow with the business.

Inevitably that scalable stack is enterprise cloud computing based, and Morgan's team was mid-way through the modernization programme when he shared his story with diginomica. He says:

We have got a lot of legacy applications in the estate and a lot of bespoke applications that are not necessarily architected in the most modern fashion. As part of our growth curve over the next couple of years, we plan to modernize the application and infrastructure of the organization. We want to reduce our on-premise computing and move to software-as-a-service (SaaS).

The core of the business is now on Microsoft Azure. Morgan says:

The foundations are in place in our Azure estate, and we want to make sure that we are set up in the right way for the years to come. Once that is done, we can then look at enhancing the services piece, so in terms of the migration to the cloud, what we don't want to do is an old-fashioned move of taking our virtual machines out of VMware and popping them into Azure.

Like many companies, we are moving to a more Agile-based approach with product teams. In my team, we are doing a lot of work on the fundamentals of DevOps with the creation of a framework. The secure framework means that the product teams can work within it and be as self-sufficient as possible, and that way they can move quicker, and there is less hand off.

This move to product teams and DevOps is titled Technology Remastered, which Morgan says is a nod towards Dr Martens' historical connection to the music business. The online arm of the business has been the first to adopt DevOps, and Dr Martens has created a test-and-learn approach so that each following area of the business learns from the steps taken by others.

Business growth also increases the likelihood of threats to an organization, something Morgan is tackling as part of the infrastructure modernization at Dr Martens. He says:

We are not immune from cybersecurity threats; we are a world-famous brand. On the upside, people think our brand image is very positive, but we have to plan for the worst. We can't just think no one will attack us because they like our boots. In addition, we have always had our own stores, so we have always held PII data, and with the growth we are expecting in direct-to-consumer, that unavoidably increases the amount of data that we are storing.

In addition, as a PLC, there is an increased level of governance:

Once we floated, there was a whole host of extra audit requirements, and it is vital that we are doing the right thing. The secret sauce that makes Dr Martens boots Dr Martens, for example. So there are certain things that are just good governance, and we have to make sure we have all of those nailed down.

Morgan says Dr Martens needed a backup system that would scale to the increasing levels of enterprise cloud computing and business growth but also work with the existing legacy estate. As a result, Dr Martens has chosen a managed services data backup strategy in partnership with 11:11 Systems.

As a managed service, Dr Martens uses the 11:11 Cloud Backup for Veeam Cloud Connect and the Microsoft 365 backup offering. These provide data protection and retention capabilities for the 4,000 Exchange, SharePoint Online, and OneDrive global users at Dr Martens. Morgan added the Microsoft 365 tool to the service because Microsoft's own backup retention is too short term for the business. He says:

Microsoft works well for generally keeping the mailbox alive, but typically the Microsoft retention is short term - 30/90 days. We wanted, and have, permanent storage and retention which is end-to-end encrypted, and the benefit to us is that the typical length of time from compromise to discovery is 140 days. So we have that longer-term security, and as a PLC it is eminently likely we will have to go back and get emails from contracts, for example.

In addition, Morgan was looking for increased cybersecurity protection. He says:

The air gap storage protects against ransomware, which is a big concern. The air gap is only accessible by 11:11 Systems staff, so the insider threat is covered too.

On managed services, Morgan says:

It is a simple per-user cost basis, so it scales as the business grows. One thing for me was that every vendor talks of a single pane of glass; we wanted to make the management of this stuff as light as possible. I don't want my guys sitting there looking at backup logs and consoles; we needed something that just works. You spend a lot of time managing backups hoping that you will never use it, so the as-a-service model works a treat. For me, it is looking at how the IT team is focused on adding value.

And does Morgan wear Dr Martens? He says:

I've got five pairs.


Cloud Computing in Cell Biology, Genomics and Drug Development Market size was valued at USD 2.6 Billion in 20 – openPR


The global Cloud Computing in Cell Biology, Genomics and Drug Development market size was valued at USD 2.6 billion in 2021, growing at a CAGR of 24% from 2022 to 2032. Evolve Business Intelligence provides an in-depth research study focused on the major market dynamics in several regions across the globe. Moreover, a detailed assessment of the market is conducted by our analysts across various geographies, including North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa, to provide clients with the opportunity to dominate the emerging markets. The market study includes growth factors, restraining factors, challenges, and opportunities, which allows businesses to assess the market capability of the industry. The report delivers market size from 2020 to 2032 with a forecast period of 2022 to 2032. The report also contains revenue, production, sales consumption, pricing trends, and other factors which are essential for assessing any market. Request Free Sample Report or PDF Copy: https://report.evolvebi.com/index.php/sample/request?referer=openpr.com&reportCode=016017

Key Highlights:
- The global Cloud Computing in Cell Biology, Genomics and Drug Development market size was valued at USD 2.6 billion in 2021, growing at a CAGR of 24% from 2022 to 2032
- North America dominated the market in 2021
- Asia Pacific is expected to grow at the highest CAGR from 2022 to 2032

Key Players
The Cloud Computing in Cell Biology, Genomics and Drug Development market report gives comprehensive information about each company and its past performance. The report also provides a detailed market share analysis along with product benchmarking with key developments. The key players profiled in the report are:
- Google Inc.
- Oracle Corporation
- Amazon Web Services, Inc.
- Benchling
- IBM Corp.
- Dell EMC
- ArisGlobal
- Microsoft Corp.
- Cisco Systems
- Cognizant

The global Cloud Computing in Cell Biology, Genomics and Drug Development report also includes information on company profiles, product descriptions, revenue, market share data, and contact details for several regional, global, and local companies. Due to increased technological innovation, R&D, and M&A operations in the sector, the market is becoming more popular in particular niche sectors. Additionally, a large number of regional and local vendors in the market provide specialised product offerings according to geographical regions, in keeping with the global manufacturing footprint. Due to the reliability, quality, and technological modernity of the worldwide suppliers, it is difficult for new market entrants to compete.

COVID Impact
In terms of COVID-19 impact, the market report also includes the following data points:
- COVID-19 impact on the Cloud Computing in Cell Biology, Genomics and Drug Development market size
- End-user/industry/application trends and preferences
- Government policies/regulatory framework
- Key players' strategies to tackle the negative impact/post-COVID strategies
- Opportunities in the Cloud Computing in Cell Biology, Genomics and Drug Development market

Get a Free Sample Copy of This Report @ https://report.evolvebi.com/index.php/sample/request?referer=openpr.com&reportCode=016017

Scope of the Report:
Market Segment By Type:
- Public Cloud
- Private Cloud
- Hybrid Cloud

Market Segment By Application:
- Pharmaceutical and Biotechnology Companies
- Contract Research Organizations (CROs)
- Clinical Laboratories
- Hospitals and Research Institutes
- Others

For more information: https://report.evolvebi.com/index.php/sample/request?referer=openpr.com&reportCode=016017

Key Regions/Countries Covered:
- North America (US, Canada, Mexico)
- Europe (Germany, U.K., France, Italy, Russia, Rest of Europe)
- Asia-Pacific (China, India, Japan, South Korea, Rest of Asia Pacific)
- Middle East & Africa (Saudi Arabia, UAE, Egypt, South Africa, and Rest of MEA)
- Latin America (Mexico, Brazil, Argentina, Rest of Latin America)

Reasons to Buy this Report:
- Detailed analysis of the impact of market drivers, restraints, and opportunities
- Competitive intelligence providing an understanding of the ecosystem
- Detailed analysis of the Total Addressable Market (TAM) of your products
- Investment pockets and new business opportunities
- Demand-supply gap analysis
- Strategy planning

Contact Us:
Evolve Business Intelligence
India
Contact: +1 773 644 5507 (US) / +441163182335 (UK)
Email: sales@evolvebi.com
Website: http://www.evolvebi.com

About EvolveBI
Evolve Business Intelligence is a market research, business intelligence, and advisory firm providing innovative solutions to the challenging pain points of a business. Our market research reports include data useful to micro, small, medium, and large-scale enterprises. We provide solutions ranging from mere data collection to business advisory.
Evolve Business Intelligence is built on technology advancement, providing highly accurate data through our in-house AI-modelled data analysis and forecast tool, EvolveBI. This tool tracks real-time data including quarterly performance, annual performance, and recent developments from Fortune's Global 2000 companies.


Retail Enterprise ICT Investment Market Trends by Budget Allocations (Cloud and Digital Transformation), Future Outlook, Key Business Areas and…



Summary: "Retail Enterprise ICT Investment Market Trends by Budget Allocations (Cloud and Digital Transformation), Future Outlook, Key Business Areas and Challenges, 2022" summarizes key findings from the ICT customer insight survey carried out in H1 2022.

New York, Oct. 13, 2022 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Retail Enterprise ICT Investment Market Trends by Budget Allocations (Cloud and Digital Transformation), Future Outlook, Key Business Areas and Challenges, 2022" - https://www.reportlinker.com/p06328167/?utm_source=GNW It reveals how overall ICT budgets, and their allocations towards various business functions and spending areas, have changed for enterprises in the retail sector in 2022 compared to 2021.

The report also discusses the change in ICT budget allocations for digital transformation enabling technologies, such as artificial intelligence (AI), the internet of things (IoT), automation and edge computing, among enterprises in 2022 as compared to 2021. It sheds light on the change in ICT budget allocations across 30 IT hardware, software, and service categories.

The report also gives an indication of ICT opportunities in the retail sector, with forward-looking insights on enterprise spending priorities for over 100 ICT product and service sub-categories over the next two years.

The survey report provides information and insights into ICT spending by enterprises in the retail sector:
- Insights into ICT budget allocation by business function and key spending areas
- Enterprise ICT budget allocations by type of ICT project
- Breakdown of enterprise budget allocation change by digital transformation areas
- Segment ICT budget allocation trends
- Insights on ICT technology spending priorities of enterprises in the retail sector
- Enterprise cloud computing investment priority

Scope
- According to the Information & Communication Technology (ICT) customer insight survey, the majority of enterprises in the retail sector will see a slight increase in their ICT budgets for 2022 compared to 2021
- The survey reveals that most enterprises in the retail sector see a slight increase in ICT budgets for the internal ICT department and for business functions in 2022 over 2021, with more enterprises seeing an overall budget increase for business functions
- Cloud computing budgets across all cloud implementation models seem to have increased in most enterprises in the retail sector in 2022 compared to last year
- Most enterprises see a slight increase in their software budget allocations across all spending areas, while for software vendor support/maintenance costs a substantial number of enterprises also see a mostly unchanged budget allocation

Reasons to Buy
- The report is based on an IT customer insight survey carried out annually, covering key ICT decision makers from enterprises across various industry verticals, to understand their ICT investment priorities and trends.
- This survey report offers a thorough analysis of enterprise ICT investment trends and how they have changed this year compared to the previous year.
- The report also presents an analysis of enterprise ICT budget allocations by various spending areas, business functions and product/service categories, and how they have changed this year compared to previous years.
- With more than 50 charts, the report is designed for an executive-level audience, boasting presentation quality.
- The report provides insights in a concise format to help executives build proactive and profitable growth strategies.
- The report provides an easily digestible market assessment for decision-makers, built around research gathered from local IT decision makers, which enables executives to quickly get up to speed with current and emerging trends in enterprise ICT investment priorities.

Read the full report: https://www.reportlinker.com/p06328167/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Head in the cloud: how businesses can optimise their cloud usage without unintended consequences – Global Banking And Finance Review


By Mallory Beaudreau, Customer Portfolio Director EMEA at Apptio

Introduction

Amid inflation and the ever-growing threat of a possible recession looming on the horizon, businesses are likely to pay even closer attention to their cost base. This is particularly true where cloud is concerned. However, businesses need to be mindful of untargeted cost-cutting, which could harm their long-term business health.

Does public cloud still make sense for the average business?

It makes sense to see cloud computing under tight scrutiny: it is one of the most significant costs major organisations incur, with cloud bills that can run into the millions. Cloud spending currently represents approximately 30% of overall IT budgets, and with potentially challenging conditions on the horizon, it is unsurprising that some analyses show spend has fallen year-over-year for top public cloud providers, as businesses slowly but surely look to tighten the belt. Given its significant expense, and the difficult economic climate, does public cloud still even make sense for the average business?

Public cloud provides unparalleled flexibility and elasticity, which in turn makes way for innovation and the delivery of business-critical services. A common practice to balance innovation, cost saving and efficiency was for businesses to explore both on- and off-premise technology stacks. But now, tighter budget constraints mean that organisations cannot as easily justify the dual approach of on-premise and cloud infrastructure that has often been the norm for experimenting with different operating models.

The real question around cloud computing costs, therefore, is not whether it still makes sense; clearly it will continue to be a vital element of enterprise innovation. The real question is whether businesses can adequately maintain control and visibility as they cut costs in some areas and embrace more cloud services than ever in others. In these challenging times it's essential to know where your cloud spend is going and what value it is driving for your business.

What a challenging economic environment could mean for the next five years of business

It is clear that CFOs are considering cutting back on cloud projects to offset the negative effects of inflation. However, while this method may save costs in the short term, the strategy carries its own risks in the long term: over-focusing on reducing spend could cause lasting damage to a business's competitive ability.

Now more than ever, it is essential that businesses track the value of their transformation projects to understand what is driving long-term business value and ensure that they do not lose competitive advantage as a result of reducing capability in the cloud.

In fact, those businesses that do establish better visibility of their cloud infrastructure across business, finance, and technology teams will be able to make better data-driven decisions and ensure that vital projects are not scaled back. To use an exercise analogy, businesses need to cut fat, not muscle; in the cloud, this requires a strong understanding of who is spending what, and for what business purposes.

Visibility leads to flexibility, and those that continuously adjust their priorities while continuing to invest in areas for growth will be the ones who will lead the market in five years.

How to establish cloud cost visibility

By understanding the cloud cost associated with each application, service and customer, business leaders can realise the true value of an investment and focus on a balance between cost and quality. That is why frameworks such as FinOps are becoming essential business tools.

An evolving discipline and cultural practice in cloud financial management, FinOps enables enterprises to maximise commercial value by tracking, analysing and planning cloud spend. Working with FinOps in mind, businesses achieve accountability as they're given a clearer structure through which to track the success of their investments.

FinOps helps businesses gain a better perspective on their cloud spend, which in turn improves cross-departmental collaboration. By re-examining the architecture of cloud spend within a business, leaders can then begin optimising cloud usage, in turn reducing their cloud service bill.
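A minimal sketch of the visibility idea behind FinOps-style tooling: roll raw billing line items up by tag so each application or team owner can see their own spend. The record layout, tag names and costs below are hypothetical, not any real cloud provider's billing schema:

```python
# Group hypothetical billing line items by a chosen tag key so spend
# can be attributed to applications and teams, with untagged spend
# surfaced explicitly rather than hidden.
from collections import defaultdict

billing_lines = [
    {"service": "compute", "tags": {"app": "checkout", "team": "retail"}, "cost": 1200.0},
    {"service": "storage", "tags": {"app": "checkout", "team": "retail"}, "cost": 310.5},
    {"service": "compute", "tags": {"app": "analytics", "team": "data"}, "cost": 980.0},
    {"service": "compute", "tags": {}, "cost": 150.0},  # untagged spend
]

def spend_by(key, lines):
    """Total cost grouped by one tag key; untagged lines land in 'unallocated'."""
    totals = defaultdict(float)
    for line in lines:
        totals[line["tags"].get(key, "unallocated")] += line["cost"]
    return dict(totals)

print(spend_by("app", billing_lines))
```

The same roll-up keyed on "team" gives finance a showback view per department, and the size of the "unallocated" bucket is itself a useful metric of how well tagging discipline is holding up.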

So, at a time when businesses would normally look to cut down on cloud spend, they should instead engage with frameworks such as FinOps to gain a better view of where the spend is going, in order to redirect it more efficiently. By doing so, they promote efficient cloud utilisation across the business, spending where it counts without losing sight of data in the cloud, to maximise competitive advantage.

Conclusion

Leaders do not have to choose between cost saving and innovation. There are clearly challenging times ahead, but the leaders who prioritise greater spend visibility and tracking will be best placed not only to survive but to thrive in the next five years. By engaging with frameworks such as FinOps, businesses can grow, optimise development, and retain competitive capability.


This Week in Coins: Google Cloud, BNY Mellon News Doesn’t Boost Bitcoin, Ethereum – Decrypt


It was the fourth consecutive week of losses or no movement for Bitcoin and Ethereum, both of which dipped lower at the end of this week after more high inflation readings from the U.S. Bureau of Labor Statistics.

Bitcoin fell 2% over the past week and currently trades for $19,126; Ethereum fell 3.5% to a current price of $1,282, according to CoinGecko data.

On Monday, Bitcoin's mining difficulty hit a new all-time high after rising by 14%, the largest spike since May. As difficulty increases, miners could face slimmer profits if Bitcoin's price stays inert, since more computing power and electricity are needed to mine. However, mining difficulty increases also indicate a strong and growing network.
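The squeeze on miners can be sketched with the standard approximation that a miner's expected block count scales as hashrate / (difficulty × 2^32). The hashrate, difficulty, price and block reward below are assumed round numbers for illustration, not live network values:

```python
# Back-of-the-envelope look at why a difficulty increase squeezes miners:
# expected blocks/day ≈ hashrate * seconds_per_day / (difficulty * 2**32),
# the standard approximation for Bitcoin's proof-of-work.
# All inputs are assumed round numbers, not real network figures.

def expected_daily_revenue_usd(hashrate_hs, difficulty, btc_price_usd,
                               block_reward_btc=6.25):
    blocks_per_day = hashrate_hs * 86_400 / (difficulty * 2**32)
    return blocks_per_day * block_reward_btc * btc_price_usd

hashrate = 100e12      # a 100 TH/s mining rig (assumed)
difficulty = 35e12     # assumed pre-adjustment difficulty
price = 19_000         # assumed BTC price in USD

before = expected_daily_revenue_usd(hashrate, difficulty, price)
after = expected_daily_revenue_usd(hashrate, difficulty * 1.14, price)

# A +14% difficulty move cuts expected revenue by 1 - 1/1.14 ≈ 12.3%,
# while the rig's fixed electricity cost is unchanged.
print(f"before: ${before:.2f}/day  after: ${after:.2f}/day")
```

Since electricity cost per day is fixed for a given rig, a roughly 12% revenue cut at constant price comes straight out of the margin, which is why difficulty spikes without a price move squeeze miners.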

Ethereum's supply turned deflationary last weekend, meaning more ETH is currently being burned (removed from circulation) than created. This comes as no surprise to Ethereum flag-wavers, as it was announced as part of the post-merge process, but the news had little effect on prices this week.

So-called Ethereum Killers (layer-1 blockchains with high-functionality smart contracts) had a tough week, including Cardano (ADA), which is down 14% to $0.36 and Solana (SOL), which fell 9% to $29.91. The Solana network has also faced ongoing stability issues, though Solana founder Anatoly Yakovenko said on Decrypt's gm podcast that a "long-term fix" is coming and that getting a handle on outages is the "number one priority" for Solana.

Uniswap (UNI) tumbled 8% to $6.13, and Chainlink (LINK) also fell 8% and currently trades at $6.95. Ethereum Classic (ETC) and Near Protocol (NEAR) both tumbled around 16% this week.

Two huge institutional players announced moves into crypto this week: Google Cloud and BNY Mellon.

Google's cloud division on Tuesday announced that it will use Coinbase to accept crypto payments for cloud services early next year. A "handful" of customers will be able to pay in crypto through integration with Coinbase Commerce, a payments tool for businesses. As part of the deal, Coinbase Commerce is expected to move "data-related applications" from Amazon Web Services' cloud to Google's.

Investment banking titan BNY Mellon, one of the oldest U.S. banks in continuous operation, on Tuesday launched a custody service for Bitcoin and Ethereum on behalf of select investment firms using software developed with crypto custody provider Fireblocks. BNY Mellon has tapped Chainalysis for compliance software and will be storing clients' private keys and providing bookkeeping services on their crypto portfolios.

Perhaps a year ago, during a different economic cycle, those two moves would have moved crypto markets. Not this week, in the current macro environment.

European Union lawmakers on Monday voted 28 to 1 to pass the Markets in Crypto Assets Regulation (MiCA), a landmark package of legislation that hopes to regulate crypto within the bloc. If it survives the next round of voting, MiCA's implementation will make stricter demands on crypto companies, stablecoin issuers, and miners.

Back in April last year, the G20 (an affiliation of 20 of the world's largest economies) tasked the Organization for Economic Co-operation and Development (OECD) with developing a framework providing for the automatic exchange of tax-relevant information on Crypto-Assets between nations. On Monday, the OECD submitted its framework to the G20.

Finance ministers and central bank governors met in Washington later in the week to review the 100-page Crypto Asset Reporting Framework (CARF) and suggested amendments to the group's Common Reporting Standard (CRS).

On Wednesday, Massachusetts Senator Elizabeth Warren and six other U.S. Democrat lawmakers submitted a letter to Pablo Vegas, CEO of the Electric Reliability Council of Texas (ERCOT), calling Texas a deregulated safe harbor for crypto mining operations and requesting information on the energy consumption of Bitcoin mining operations in the state of Texas.

Finally, this week brought yet another rejection from the SEC for a Bitcoin spot ETF (exchange-traded fund), this time from Cboe BZX Exchange.

This Week in Coins: Google Cloud, BNY Mellon News Doesn't Boost Bitcoin, Ethereum - Decrypt

Matt Butcher on Web Assembly as the 3rd Wave of Cloud Computing – InfoQ.com

Posted: October 11, 2022 at 12:19 am

Wesley Reisz: Cloud computing can be thought of as two, or as today's guest will discuss, three different waves.

The first wave of cloud computing can be described as virtualization. Along came the VM and we no longer were running on our physical compute. We introduced virtual machines to our apps. We improved density, resiliency, operations. The second wave came along with containers and we built orchestrators like Kubernetes to help manage them. Startup times decreased. We improved isolation between teams, we improved flow, velocity. We embraced DevOps. We also really introduced the network into how our applications operated. We've had to adapt and think about that as we've been building apps, taking all of that into consideration. Many have described Serverless (or functions as a service) as a third wave of cloud compute.

Today's guest, the CEO of Fermyon Technologies, is working on functions as a service delivered via Wasm (Web Assembly), and that will be the topic of today's podcast.

Hi, my name is Wes Reisz. I'm a technical principal with ThoughtWorks and cohost of the InfoQ podcast. In addition, I chair a software conference called QCon San Francisco. QCon is a community of senior software engineers focused on sharing practical, no-marketing solutions to real-world engineering problems. If you've searched the web for deeply technical topics and run across videos on InfoQ, odds are you've seen some of the talks I'm referring to from QCon. If you're interested in being a part of QCon and contributing to that conversation, the next one is happening at the end of October in the Bay Area. Check us out at qconsf.com.

As I mentioned, today our guest is Matt Butcher. Matt is a founding member of dozens of open-source projects, including Helm, Cloud Native Application Bundles, Krustlet, Brigade, Open Application Model, Glide, the PHP HTML5 parser, and QueryPath. He's contributed to over 200 open source projects spanning dozens of programming languages. Today on the podcast we're talking about distributed systems and how Web Assembly can be used to implement functions as a service. Matt, welcome to the podcast.

Matt Butcher: Thanks for having me, Wes.

Wesley Reisz: In that intro, I talked about two waves of cloud compute. You talk about a third, what is the third wave of cloud compute?

Matt Butcher: Yes, and it actually, spending a little time on the first two autobiographically helps articulate why I think there's a third. I got into cloud services really back when OpenStack got started. I had joined HP and joined the HP Cloud group right when they really committed a lot of resources into developing OpenStack, which had a full virtual machine layer and object storage and networking and all of that. I came into it as a Drupal developer, of all things. I was doing content management systems and having a great time, was running the developer CMS system for HP, and as soon as I got my first taste of the virtual machine world, I was just totally hooked because it felt magical.

In the past, up until that time, we really thought about the relationship between a piece of hardware and the operating system as being sort of like one to one. My hardware at any given time can only run one operating system. And I'm one of those people who's been dual booting with Linux since the nineties and suddenly the game changed. And not only that, but I didn't have to stand up a server anymore. I could essentially rent space on somebody else's server and pay their electricity bill to run my application, right?

Wesley Reisz: Yes, it was magic.

Matt Butcher: Yes, magic is exactly the word that it felt like at that time, and I was just hooked and got really into that world and had a great time working on OpenStack. Then along came containers and things changed up for me job wise and I ended up in a different job working on containers. At the time I was trying to wrestle through this inner conflict. Are containers going to defeat virtual machines, or are virtual machines going to defeat containers? And I was, at the time, really myopically looking at these as competitive technologies where one would come out the victor and the other one would fall by the wayside of the history of computing, as we've seen happen so many other times with different technologies.

It took me a while, really all through my Deis days, up until Microsoft acquired Deis, and I got a view of what it looked like inside the sausage factory, to realize that no, we weren't seeing two competing technologies. We were really seeing two waves of computing happen. The first one was us learning how to virtualize workloads using a VM style, and then containers offered an alternative way with some different pros and some different cons. But when you looked at the Venn diagram of features and benefits and even patterns that we used, there was actually very little overlap between the two, surprisingly little overlap between the two.

I started reconceptualizing the cloud compute world as having this wavy kind of structure. So here we are at Microsoft, the team that used to be Deis, and then we joined Microsoft and we gain new developers from other parts of Microsoft and we start to interact with the functions as a service team, the IoT team, the AKS team, and all of these different groups inside of Azure and get a real look, a very, very eye-opening look at what all of this stuff looks like under the hood and what the real struggles are to run a cloud at scale. I hate using the term at scale, but that's really what it is there. But also we're doing open source and we're engaged with startups and medium-sized companies and large companies, all of whom are trying to build technologies using this stuff, containers, virtual machines, object storage and stuff like that.

We start seeing where both the megacorps and the startups are having a hard time and we're trying to solve this by using containers and using virtual machines. At some point we started to realize, "Hey, there are problems we can't solve with either of these technologies." We can only push the startup time of containers down to a few hundred milliseconds, and that's if you are really packing stuff in and really careful about it. Virtual machine images are always going to be large because you've always got to package the kernel. We started this checklist of things and at some point it became the checklist of what is the next wave of cloud computing?

That's where we got into Web Assembly. We start looking around and saying, "Okay, what technology candidates are there that might fill a new compute niche, where we can pack something together and distribute it onto a cloud platform and have the cloud platform execute it?" Serverless at the time was getting popular (and we should come back to serverless later because it's an enticing topic on its own), but it wasn't necessarily solving that problem, and we wanted to address it more at an infrastructure layer and say, "Is there a third kind of cloud compute?"

And after looking around at a couple of different technologies, we landed on Web Assembly of all things, a browser technology, but what made it good for the browser, that security isolation model, small binary sizes, fast startup times, those are just core things you have to have in a web browser. People aren't going to wait for the application to start. They're not going to tolerate being able to root your system through the browser and so all these security and performance characteristics and multilanguage, multi-architecture characteristics were important for the browser. That list was starting to match up very closely with the list of things that we were looking for in this third wave of cloud computing.

This became our Covid project. We spent our Fridays asking, what would it mean to try and write a cloud compute layer with Web Assembly? And that became Krustlet, which is a Web Assembly runtime essentially for Kubernetes. We were happy with that, but we started saying, "Happy, yes, but is this the right complete solution? Probably not." And that was about the time we thought, "Okay, it's time to do the startup thing. Based on all the knowledge we've accrued about how Web Assembly works, we're going to start without the presupposition that we need to run inside of a container ecosystem like Kubernetes and we just need to start fresh." And that was really what got us kicking with Fermyon and what got us excited and what got us to create a company around this idea that we can create the right kind of platform that illustrates what we mean by this kind of third wave of cloud computing.

Wesley Reisz: We're talking about Web Assembly to be able to run server side code. Are we talking about a project specifically, like Krustlet's a project, or are we talking about an idea? What is the focus?

Matt Butcher: Oh, that's a great question because as a startup founder, my initial thing is, "Well, we're talking about a project," but actually I think we're really talking more about an ecosystem. There's several ecosystems we could choose from, the Java ecosystem or the dotnet ecosystem as illustrations of this. But I think the Docker ecosystem, it's such a great example of an ecosystem evolving and one that's kind of recent, so we all kind of remember it, but there were some core technologies like Docker of course, and early schedulers including Mesos and Swarm and Fleet and the key value storage systems like ETCD and Consul. So there were a whole bunch of technologies that co-evolved in order to create an ecosystem, but the core of the ecosystem was the container.

And that's what I think we are really in probably the first year or two of seeing develop inside of Web Assembly. A number of different companies and individual developers and scholars in academia have all sort of said, "Hey, the Web Assembly binary looks like it might be the right foundation for this. What are the technologies we need to build around it and what's the community structure we need to build around it?" Because standardizing is still the gotcha for almost all of our big efforts. We want things standardized enough so that we can run reliably and understand how things are going to execute and all of that, while we all still want to keep enough space open that we can do our own thing and pioneer a little bit.

I think that the answer to your question is the ecosystem is the first thing for this third wave of cloud compute. We need groups like Bytecode Alliance where the focus is on working together to create the specifications like Web Assembly system interface that determines how you interface with a system clock, how you load environment variables, how you read and write files, and we need that as a foundational piece. So there's that in a community.

There's the conferences like Web Assembly Summit and Wasm Day at KubeCon, and we need those as areas where we can collaborate, and then we need lots and lots of developers, often working for different companies, that are all trying to solve a set of problems that define the boundaries of the ecosystem. I think we are in about year one and a half to year two of really seeing that flourishing. Bytecode Alliance has been around a little longer, but only formalized about a year and a half ago. You're seeing a whole bunch of startups like Fermyon and Suborbital and Cosmonic and Profian bubbling up, but you're also seeing Fastly and CloudFlare buying into this, and Microsoft, Amazon, and Google buying into this, so we're really seeing once again the same replay of an ecosystem formation that we saw in the Docker ecosystem when it was Red Hat and Google.

Wesley Reisz: I know of Fastly doing things at the Edge, being able to compile things at the Edge and be able to run Web Assembly Wasm there. I can write Wasm applications myself and deploy them, but the cloud part, how do I deploy Wasm in a Cloud Native way? How does that work today?

Matt Butcher: In this case, Cloud Native and Edge are similar. Maybe the Edge is a little more constrained in some of the things it can do and a little faster to deliver on others. But at the core of it, we need to be able to push a number of artifacts somewhere and understand how they're going to be executed. We know, for example, we've got the binary, a Web Assembly binary file, and then we need some supporting file. A good example of this is fermyon.com is powered by a CMS that we wrote called Bartholomew. For Bartholomew, we need the Web Assembly binaries that serve out the different parts of the site, and it's created with a microservice architecture. I think it's got at this point five different binary files that execute fermyon.com.

Then we need all of the blog posts and all the files and all the images and all the CSS, some of which are dynamic and some of which are static. And somehow we have to bundle all of these up. This is a great example of where Bytecode Alliance is a great entity to have in a burgeoning ecosystem. We need to have a standard way of pushing these bundles up to a cloud. And Fastly's Compute@Edge is very similar. We need a way to push their artifacts up to Compute@Edge with Fastly or any of these.

There's a working group called SIG Registries that convenes under Bytecode Alliance that's working on defining a package format and defining how we're going to push and pull packages, essentially where you think of in the Docker world, pushing and pulling from registries and packaging things up with a Dockerfile and creating an image file; the same kind of thinking is happening in Bytecode Alliance specific to Web Assembly. SIG Registries is a great place to get involved if that's the kind of thing that people are interested in. You can find out about it at bytecodealliance.org. That's one of the pieces of community building/ecosystem building that we've got to be engaged in.

Wesley Reisz: You started a company, Fermyon, and now what's the mission of Fermyon? Is it to be able to take those artifacts and then be able to deploy them onto a cloud footprint? What is Fermyon doing?

Matt Butcher: For us, we're really excited about the idea that we can create a cloud run time that can run in AWS, in Azure, in Google, in Digital Ocean that can execute these Web Assembly modules and that we can streamline that experience to make it frictionless. It's really kind of a two part thing. We want to make it easy for developers to build these kinds of applications and then make it easy for developers to deploy and then manage these applications over the long term.

When you think about the development cycle, oftentimes as we build these new kinds of systems, we introduce a lot of fairly heavy tooling. Virtual machines are still hard to build, even now, a decade and some into the ecosystem. And technologies like Packer have made it easier, but it's still kind of hard. The number one thing that Docker did amazingly well was create a format that made it easy for people to take their applications that already existed and package them up using a Dockerfile into an image, and we looked at that and said, "Could we make it simpler? Could we make the developer story easier than that?"

And the cool thing about Web Assembly is that all these languages are adding support into their compilers. So with Rust, you just add --target wasm32-wasi and it compiles the binary for you. We've really opted for that lightweight tooling.
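As a rough illustration of how little ceremony this involves: the sketch below is ordinary Rust that would compile unchanged for the wasm32-wasi target (the `greet` function, the output binary name, and the use of wasmtime are illustrative assumptions, not anything from the interview):

```rust
// Minimal sketch: plain Rust that needs no source changes to target Wasm.
//
//   rustup target add wasm32-wasi              # one-time toolchain setup
//   cargo build --release --target wasm32-wasi # produces a .wasm binary
//   wasmtime target/wasm32-wasi/release/app.wasm
fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}

fn main() {
    // The same source builds natively or, with the flag above, to Wasm.
    println!("{}", greet("Wasm"));
}
```

The point is the one Butcher makes: targeting Wasm is a compiler flag, not a separate packaging toolchain.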

Spin is our developer tool, and the Spin project is basically designed to assist in what we call the inner loop of development. This is a big Microsoft-y term, I think, inner and outer loop of development.

Wesley Reisz: Fast compile times.

Matt Butcher: What we really mean is when you as the individual developer are focused on your development cycle and you've blocked out the world and you're just wholly engaged in your code, you're in your inner loop, you're in flow. And so we wanted to build some tools that would help developers when they're in that mode to be able to very quickly and rapidly build Web Assembly based applications without having to think about the deployment time so much and without having to use a lot of external tools. So Spin is really the one tool that we think is useful there, and we've written VS code extension to streamline that.

And then on the cloud side, you got to run it somewhere, and we built the tool we call Fermyon or the Fermyon platform, to really execute there. And that's kind of a conglomeration of a number of open source projects with a nice dashboard on top of it that you can install into Digital Ocean or AWS or Azure or whatever you want and get it running there.

Wesley Reisz: And that runs a full Wasm binary? Earlier I talked functions as a service, does it run functions or does it run full Wasm binaries?

Matt Butcher: And this gets us back into the serverless topic, which we were talking about earlier, and serverless I think has always been a great idea. The core of this is can we make it possible so that the developer doesn't even have to think about what a server is?

Wesley Reisz: Exactly. The plumbing.

Matt Butcher: And functions as a service to me is just about the purest form of serverless that you can get where not only do you not have to think about the hardware or the operating system, but you don't even have to think about the web framework that you're running in, right? You're merely saying, "When a request comes into this endpoint, I'm going to handle it this way and I'm going to serve back this data." Within moments of starting your code, you're deep into the business logic and you're not worried about, "Okay, I'm going to stand up an HTTP server, it's got to listen on this port, here's the SSL configuration."

Wesley Reisz: No Daemon Sets, it's all part of the platform.

Matt Butcher: Yes. And as a developer, that to me is like, "Oh, that's what I want. No thousand lines of YAML config." serverless and functions as a service were looking like very promising models to us. So as we built out Spin, we decided that at least as the first primary model that we wanted to use, we wanted to use that particular model. Spin for example, it functions more like an event listener where you say, "Okay, on an HTTP request, here's the request object, do your thing and send back a response object." Or, "As a Redis listener, when a message comes in on this channel, here's the message, do your thing and then optionally send something back." And that model really is much closer to Azure functions and Lambda and technologies like that. We picked that because developers seem to really enjoy that. Developers say they really enjoy that model. We think it's a great compliment for Web Assembly. It really gets you thinking about writing microservices in terms of very, very small chunks of code and not in terms of HTTP servers that happen to have microservice infrastructure built in.

Wesley Reisz: Spin lets you write this inner loop, fast flow, event driven model where you can respond to the events that are going like the serverless model, and then you're able to package that into Wasm that can then be deployed with Fermyon cloud? Is that the idea?

Matt Butcher: Yes, and when you think about writing a typical HTTP application, even going back to say Rails, Rails and Django I think really defined how we think about HTTP applications, and you have got this concept of the routing table. And in the routing table you say, "When somebody hits /foo, then that executes myFoo module. If I hit /bar that executes myBar module." That's really the direction that we went with the programming model where when you hit fermyon.com/index, it executes the Web Assembly module that generates the index file and serves that out. When you hit /static/file.jpeg, it loads the file server and serves it back. And I think that model really kind of resonates with pretty much all modern web application and microservice developers, but all the writing in the back end is just a function. I really like that model because it just feels like you're getting right to the meat of what you actually care about within a moment of starting your application instead of a half hour or an hour later when you've written out all the scaffolding for it.

Wesley Reisz: What about State? You mentioned Redis before having Redis listeners, how do you manage State when you're working with Spin or with Fermyon cloud? How does that come into play?

Matt Butcher: That's a great architectural discussion for microservices as a whole, and we really have felt strongly that what we have observed coming from Deis and Microsoft and then on into Fermyon or Google, in the case of some of the other engineers who work on Fermyon, Google into Fermyon, we've seen the microservice pattern be successful repeatedly. And Statelessness has been a big virtue of the microservice model as far as the binary keeping state internally, but you got to put state full information somewhere.

Wesley Reisz: At some point.

Matt Butcher: The easy one is, "Well, you can put it in files," and WASI and Web Assembly introduced file support two years ago and that was good, but that's not really where you want to stop. With Spin, we began experimenting with adding some additional ones like Redis support and generic key-value storage, which is coming out and released very soon. Database support is coming out really soon and those kinds of things. Spin, by the way, is open source, so you can actually go see all these PRs in flight as we work on PostgreSQL support and stuff like that.

It's coming along, and the strategy we want to use is the same strategy that you used in Docker containers and other stateless microservice architectures, where state gets persisted in the right kind of data storage for whatever you're working on, be that a caching service or a relational database or a NoSQL database. We are hoping that as the Web Assembly component model and other similar standards solidify, we're going to see this kind of stuff not be a Spin-specific feature, but just the way that Web Assembly as a whole works, and different people using different architectures will be able to pull in the same kinds of components and get the same kind of feature set.
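A minimal sketch of that stateless-handler-plus-external-store strategy: the handler keeps nothing in memory between calls and pushes all state through a key-value interface supplied by the host. An in-memory map stands in for Redis or a database here, and every name is illustrative rather than Spin's actual API:

```rust
use std::collections::HashMap;

// The interface the handler codes against; in production the host would
// back this with Redis, a database, etc. (Illustrative, not Spin's API.)
trait KeyValue {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: &str);
}

// In-memory stand-in for a real store, used only for this sketch.
struct InMemoryStore(HashMap<String, String>);

impl KeyValue for InMemoryStore {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: &str) {
        self.0.insert(key.to_string(), value.to_string());
    }
}

// A stateless handler: everything it "remembers" lives in the store,
// so any instance on any node can serve the next request.
fn count_visit(store: &mut dyn KeyValue) -> u64 {
    let n = store
        .get("visits")
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(0)
        + 1;
    store.set("visits", &n.to_string());
    n
}

fn main() {
    let mut store = InMemoryStore(HashMap::new());
    assert_eq!(count_visit(&mut store), 1);
    assert_eq!(count_visit(&mut store), 2);
}
```

Swapping the in-memory struct for a networked store changes nothing in the handler, which is the whole point of externalizing state.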

Wesley Reisz: Yes, very cool. When we were talking just before we started recording, you mentioned that you wanted to talk a little bit about performance of Web Assembly and how it's changed. I remember I guess a year ago, maybe two years ago, I did a podcast with Linn Clark. We were talking about Fastly and running Web Assembly at the Edge, like we were talking about before, and if I remember right, I may be wrong, but if I remember right, it was like 3 ms was the overhead that was for the inline request compiled time, which I thought was impressive, but you said you're way lower than that now. What is the request level inline performance time of Web Assembly these days?

Matt Butcher: We're lower now. Fastly's lower now. As an eco, we've learned a lot in the last couple years about how to optimize and how to pre initialize and cache things ahead of time. 3ms even a year and a half ago would've been a very good startup time. Then we are pushing down toward a millisecond and now we are sub one millisecond.

And so again, let's characterize this in terms of these three waves of cloud computing, a virtual machine, which is a powerhouse. You start with the kernel and you've got the file system and you've got all the process table and everything starting up and initializing and then opening sockets and everything, that takes minutes to do. Then you get to containers. And containers on average take a dozen seconds to start up. You can push down into the low seconds range and if you get really aggressive and you're really not doing very much, you might be able to get into the hundred milliseconds or the several hundred milliseconds range.

One of the core features that we think this third wave of cloud compute needed, and one of our criteria coming in, was that it's got to be in the tens of milliseconds. That was a design goal coming out of the gate for us, and the fact that now we're seeing that push down below the millisecond marker, for being able to get from cold state to something executing, to that first instruction, having that under a millisecond is just phenomenal.

In many ways we've kind of learned lessons from the JVM and the CLR and lots and lots of other research that's been done in this area. And in other ways, some of it just comes about because, with both us and with Fastly and other cloud providers, distinctly from the browser scenario, we can preload code, compile it ahead of time into native code, and then be able to have it cached there and ready to go, because we know everything we need to know about what the architecture and what the system is going to look like when that first invocation hits, and that's why we can really start to drive times way, way down.

Occasionally you'll see a blog post of somebody saying, "Well, Web Assembly wasn't terribly fast when I ran it in the browser." And then those of us on the cloud side are saying, "Well, we can just make it blazingly fast." A lot of that difference is because the things that the runtime has to learn about the system at execution time in the browser, we know way ahead of time on the cloud, and so we can optimize for that. I wouldn't be surprised to see Fastly, Fermyon, and other companies pushing even lower until it really does start to appear to be at native or faster-than-native speeds.

Wesley Reisz: That's awesome. Again, I haven't really tracked Web Assembly in the last year and a half or so, but some of the other challenges were types and I think component approach to where you could share things. How has that advanced over the last year and a half? What's the state of that today?

Matt Butcher: Specifications often move in fits and starts, right? And W3C, by the way, the same standards body that does CSS, HTML and HTTP, this is the same standards body that works on Web Assembly. Types was one of the initial, 'How do we share type information?" And that morphed in and out of several other models. And ultimately what's emerged out of that is borrowing heavily from existing academic work on components. Web Assembly is now gaining a component model. What that means in practice is that when I compile a Web Assembly module, I can also build a file that says, "These are my exported functions and this is what they do and these are the types that they use." And types here aren't just like instant floats and strings. We can build up very elaborate struct like types where we say, "This is a shopping cart and a shopping cart has a count of items and an item looks like this."

And the component model for Web Assembly can articulate what those look like, but it also can do a couple of other really cool things. This is where I think we're going to see Web Assembly really break out. Developers will be able to do things in Web Assembly that they have not yet been able to do using other popular architectures, other popular paradigms. And this is that Web Assembly can articulate, "Okay, so when this module starts up, it needs to have something that looks like a key-value storage. Here's the interface that defines it. I need to be able to put a string-string pair and I need to be able to get a string and get back a string object, or I need a cache where it lives for X amount of time or else I get a cache miss." But it has no real strong feelings about, it doesn't have any feelings at all. It's binary, it has no real strong...

Wesley Reisz: Not yet. Give a time.

Matt Butcher: Anthropomorphizing code.

And then at startup time we can articulate, Fastly can say, "Well, we've got a cache-like thing and it'll handle these requests." And Fermyon can say, "Well we don't, but we can load a Docker container that has those cache-like characteristics and expose a driver through that." And suddenly applications can be sort of built up based on what's available in the environment. Now because Web Assembly is multi-language, what this means is effectively for the most part, we've been writing the same tools over and over again in JavaScript and Ruby and Python and Java. If we can compile all to the same binary format and we can expose the imports and exports for each thing, then suddenly language doesn't make so much of a difference. And so whereas in the past we've had to say, "Okay, here's what you can do in JavaScript and here's what you can do in Python," now we can say, "Well, here's what you can do."

Wesley Reisz: Reuse components.

Matt Butcher: And whether the key value store is written in Rust or C or Erlang or whatever, as long as it's compiled to Web Assembly, my JavaScript application can use it and my Python app can use it. And that's where I think we should see a big difference in the way we can start constructing applications by aggregating binaries instead of fetching a library and building it into our application.

Wesley Reisz: Yes, it's cool. Speaking of, language support was another thing that you wanted to talk about. There's a lot of changes, momentum and things that have been happening with languages themselves and support of Web Assembly like Switches, there's things with Node, we talked about Blazer for a minute. What's happening in the language space when it comes to Web Assembly?

Matt Butcher: To us, Web Assembly will not be a real viable technology until there is really good language support. On fermyon.com we actually track the status of the top 20 languages as determined by Red Monk and we watch very closely and we continually update our matrix of what the status is of Web Assembly in these languages. Rewind again back only a year or two and all the check boxes that are checked are basically C and Rust, right? Both great languages, both well-respected languages, both not usually the first languages a developer says, "Yes, this is my go-to language." Rust is gaining popularity of course, and we love Rust, but JavaScript wasn't on there. Python wasn't on there, Ruby wasn't on there. Java and C Sharp certainly weren't on there. What we've seen over only a year, year and a half is just language after language first announcing support and then rapidly delivering on it.

Earlier this year, I was ecstatic when I saw, in just the space of two weeks, Ruby and Python both announce that the CRuby and CPython runtimes were compilable to Web Assembly with WASI, which effectively meant that all of a sudden Spin, whose applications were kind of limited to Rust and C at the time, could suddenly do Python and Ruby applications. Go, the core project, is a little bit behind on Web Assembly support, but the community picked up the slack, and TinyGo can compile Go programs into Web Assembly plus WASI. Go came along right around, actually a little bit earlier than, Python and Ruby. But now what we're seeing, now being in the last couple of weeks, is the beginning of movement from the big enterprise languages. Microsoft has been putting a lot of work into Web Assembly in the browser over the past years with the Blazor framework, which essentially ran by compiling the CLR, the runtime for C# and those languages, into Web Assembly and then interpreting the DLLs.

But what they've been saying is that was just the first step, right? The better way to do it is to compile C#, F#, all the CLR supported languages directly into Web Assembly and be able to run them directly inside of a Web Assembly runtime, which means big performance boost, much smaller binary sizes and all of a sudden it's easy to start adding support for newly emerging specifications because it doesn't have to get routed through multiple layers of indirection.

Steve Sanderson, who I think is the lead PM for the .NET framework, has been showing off a couple of times since KubeCon in Valencia, now I think in four or five different places, where they are in supporting .NET to Web Assembly with WASI, and it's astounding. So often we've thought of languages like C# as being sort of reactive, looking around at what's happening elsewhere and reacting, but they're not. They are very forward-thinking engineers, and David Fowler's brilliant, and the stuff they're doing is awesome. Now they've earmarked Web Assembly as the future, as one of the things they really want to focus on. And I'm really excited; my understanding is the next version of .NET will have full support for compiling to native Web Assembly, and working drafts of that are out now.

Wesley Reisz: Yes, that's awesome. You mentioned that there's work happening with Java as well, so Java, the CLR, that's amazing.

Matt Butcher: Yep. Kotlin is also working on a native implementation. I think we'll see Java, Kotlin, and the .NET languages all coming, I think by the end of the year. I'm optimistic. I have to be, because I'm a startup founder, and if you're not optimistic, you won't survive. But I think they'll be coming by the end of the year. I think you'll really start to see the top 20 languages, probably 15-plus of them, supporting Web Assembly by the end of the year.

Wesley Reisz: That's awesome. Let's come back for a second to Fermyon. We're going to wrap up here, but I wanted you to walk through, there's an app that you talk about, Wagi, in one of your blog posts, and how you might go about using Spin and Fermyon Cloud. Could you walk through what it looks like to bootstrap an app? What would it look like for me if I wanted to go use Fermyon Cloud?

Matt Butcher: Spin's the tool you'd use there; Wagi is really just a description of how to write an application. You download Spin from our GitHub repository and you type spin new and then the type of application you want to write and the name. Say I want to create Hello World in Rust, it's spin new rust hello-world. And that command scaffolds out, it runs the cargo commands in the background and creates your whole application environment. When you open it from there, it's going to look like your regular old Rust application. The only thing that's really happening behind the scenes is wiring up all the pieces for the component model and for the compiler so that you don't have to think about that.

With spin new, you've got your Hello World app created instantly. You can edit it however you'd normally edit; I use VS Code. From there, you type spin build, and it'll build your binary for you. And again, largely it's invoking the Rust compiler in Rust's case, or the TinyGo compiler in Go's case, or whatever. And then spin deploy will push it out to Fermyon. So assuming you've got a Fermyon instance running somewhere, you can spin deploy and have it pushed out there. If you're doing local development, instead of typing spin deploy, you can type spin up and it'll create a local web server and run your application inside there, so the local development story is super easy. In total, we say you should be able to get your first Spin application up and running in two minutes or less.

Wesley Reisz: How do you target different endpoints when you deploy out to the cloud? Or do you not worry about it? Is that what you pay Fermyon for, for example?

Matt Butcher: Yes, you're building your routing table as you build the application. There's a TOML file in there called spin.toml where you say, "Okay, if they hit slash, then they load this module. If they hit /foo, they hit that module," and it supports all the normal things that routing tables support. But from there, when you push out to the Fermyon platform, the platform will provision your SSL certificate and set up a domain name for you. The Fermyon dashboard that comes as part of that platform will allow you to set up environment variables and things like that. So as the developer, you're really just thinking in terms of how you build your binary and what you want to do. And then once you deploy it, you can log into the Fermyon dashboard and start tweaking and doing the DevOps side, what we would call the outer loop of development.
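As a sketch, a minimal spin.toml along the lines Matt describes might look like the following. The field names approximate the Spin manifest format, but treat the exact keys, template names, and paths as illustrative rather than authoritative:

```toml
# Illustrative Spin manifest (spin.toml); keys approximate the real schema
spin_version = "1"
name = "hello-world"
version = "0.1.0"
trigger = { type = "http", base = "/" }

# "If they hit slash, then they load this module"
[[component]]
id = "hello"
source = "target/wasm32-wasi/release/hello.wasm"
[component.trigger]
route = "/"

# "If they hit /foo, they hit that module"
[[component]]
id = "foo"
source = "target/wasm32-wasi/release/foo.wasm"
[component.trigger]
route = "/foo"
```

Running spin up locally serves these routes, while spin deploy pushes the same manifest out to a Fermyon instance, which layers on the SSL certificate, domain name, and environment variables described above.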

Wesley Reisz: What's next for Fermyon?

Matt Butcher: We are working on our software as a service, because again, our goal is to make it possible for anybody to run Spin applications and get them up and running in two minutes or less, even when that means deploying them out somewhere where they've got a public address. So while right now, if you want to run Fermyon, you've got to go install it in your AWS cluster, your Google Cloud cluster, whatever, as we roll out this service later this year, it should be possible to get started just by typing spin deploy and have your application up and running inside of Fermyon.

Wesley Reisz: Well, very cool. Matt, thanks for taking the time to catch up and help us further understand what's happening in the Wasm community, and for telling us about Fermyon and Fermyon Cloud.

Matt Butcher: Thanks so much for having me.

Here is the original post:

Matt Butcher on Web Assembly as the 3rd Wave of Cloud Computing - InfoQ.com

Beeks bets on further growth as financial firms flock to cloud computing – Yahoo News UK

Posted: at 12:19 am

Cloud computing and connectivity provider Beeks Financial has signed two significant multi-year contracts that the company says will underpin revenue growth going forward.

The contracts, which are covered by non-disclosure agreements, are with global asset management firms and are expected to be worth a total of £1.8 million over three years. The announcement came as Glasgow-based Beeks posted an increase in turnover and underlying earnings for the year to June 30.

AIM-listed Beeks supplies technology that speeds up online trading in financial products, and also operates an international network of data centres. Its Proximity Cloud is a single-use platform that banks and brokers can use without being hosted at a Beeks data centre, while Exchange Cloud is designed specifically for global financial exchanges and electronic communication networks.

"The majority of financial services organisations around the world are exploring how to utilise the power of the cloud to support their ambitions," chief executive Gordon McArthur said. "This presents us with a considerable opportunity and, through our Private Cloud, Proximity Cloud and Exchange Cloud, we have the offering to address it."

READ MORE:Beeks on cloud nine with new products securing record sales

He added that the company will continue to invest in expansion following a £15m fundraising in April, of which more than £10m gross remains.

"Going for a higher amount allowed us to frontload capacity, so you'll see we've got a couple of million pounds' worth of stock sitting on the balance sheet which will allow us to deliver this year's number, and then again as some of these [new] deals land we will keep growing the investment both in product and infrastructure," Mr McArthur added.

The stockpile includes servers, networking gear and other IT equipment that has minimised the impact of supply chain disruptions on Beeks customers. From its new headquarters in Braehead, the company has also increased staffing levels to approximately 100 employees.

Beeks posted a 57 per cent increase in revenues, which rose to £18.3m during the year to June. Underlying earnings were 52 per cent higher at £6.3m.

After taking account of approximately £2m in deferred earn-out payments for the April 2020 acquisition of network monitoring specialist Velocimetrics, pre-tax profits fell to £660,000 from £1.25m the previous year.

Shares in Beeks closed 9.5p lower yesterday at 145.5p.

Visit link:

Beeks bets on further growth as financial firms flock to cloud computing - Yahoo News UK

The future of automotive computing: Cloud and edge – McKinsey

Posted: at 12:19 am

As the connected-car ecosystem evolves, it will affect multiple value chains, including those for automotive, telecommunications, software, and semiconductors. In this report, we explore some of the most important changes transforming the sector, especially the opportunities that may arise from the growth of 5G and edge computing. We also examine the value that semiconductor companies might capture in the years ahead if they are willing to take a new look at their products, their organizational and operational capabilities, and their go-to-market approaches.

Four well-known technology trends have emerged as key drivers of innovation in the automotive industry: autonomous driving, connectivity, electrification, and shared mobility, such as car-sharing services (Exhibit 1). Collectively, these are referred to as the ACES trends, and they will have a significant impact on computing and mobile-network requirements. Autonomous driving may have the greatest effect, since it necessitates higher onboard computing power to analyze massive amounts of sensor data in real time. Other autonomous technologies, over-the-air (OTA) updates, and the integration of third-party services will also require high-performance, intelligent connectivity within and outside of the car. Similarly, increasingly stringent vehicle safety requirements call for faster, more reliable mobile networks with very low latencies.

Exhibit 1

With ACES functions, industry players now have three main choices for workload location: onboard the vehicle, cloud, and edge (Exhibit 2).

Exhibit 2

To ensure that use cases meet the thresholds for technical feasibility, companies must decide where and how to balance workloads across the available computing resources (Exhibit 3). This could allow use cases to meet increasingly strict safety requirements and deliver a better user experience. Multiple factors may need to be considered for balancing workloads across onboard, edge, and cloud computing, but four may be particularly important. The first is safety, since workloads essential for passenger safety require extremely fast reaction times. Other considerations include latency, computing complexity, and requirements for data transfer, which depend on the type, volume, and heterogeneity of data.

Exhibit 3

Connected-car use cases today typically rely on either onboard computing or the cloud to process their workloads. For example, navigation systems can tolerate relatively high latency and may function better in the cloud. OTA updates are typically delivered via a cloud data center and downloaded via Wi-Fi when it is least disruptive, and infotainment content originates in the cloud and is buffered onboard to give users a better experience. By contrast, accident prevention workloads such as autonomous emergency-braking systems (AEBS) require very low latency and high levels of computing capability, which, today, may mean that they are best processed onboard the vehicle.
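The placement logic described in the last two paragraphs can be sketched as a toy decision function. This is not from the report; the thresholds are invented purely to make the trade-offs between onboard, edge, and cloud processing concrete:

```python
def place_workload(safety_critical: bool, max_latency_ms: float,
                   data_mb_per_s: float) -> str:
    """Toy placement rule for a connected-car workload.

    Mirrors the factors named in the text: safety first, then latency,
    then data-transfer volume (computing complexity is omitted for
    brevity). All thresholds are invented for illustration.
    """
    if safety_critical or max_latency_ms < 10:
        # e.g. autonomous emergency braking: reaction time dominates,
        # so today this is best processed onboard the vehicle
        return "onboard"
    if max_latency_ms < 100 or data_mb_per_s > 50:
        # latency-sensitive or data-heavy but not safety-critical,
        # e.g. smart traffic management: a candidate for the edge
        return "edge"
    # latency-tolerant workloads such as navigation or OTA updates
    return "cloud"

print(place_workload(True, 5.0, 100.0))    # emergency braking -> onboard
print(place_workload(False, 50.0, 80.0))   # traffic management -> edge
print(place_workload(False, 2000.0, 0.1))  # navigation -> cloud
```

A real placement decision would weigh far more factors, but the ordering here (safety, then latency, then data volume) follows the priorities the report lays out.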

Advances in computing and connectivity are expected to enable many new and advanced use cases (Exhibit 4). These developments could alter where workloads are located. Of particular significance, the rollout of 5G mobile networks could allow more edge processing. Given the importance of these interrelated technologies, we explored their characteristics in detail, focusing on automotive applications.

Exhibit 4

5G technology is expected to provide the bandwidth, low latency, reliability, and distributed capabilities that better address the needs of connected-car use cases. Its benefits to automotive applications fall into three main buckets:

These benefits could contribute to greater use of edge applications within the automotive sector. Workloads that are not safety-critical (infotainment and smart traffic management, for example) could start to shift to the edge from onboard systems or the cloud. Eventually, 5G connectivity could reduce latency to the point that certain safety-critical functions could begin to be augmented by the edge infrastructure, rather than relying solely on onboard systems.

Most automotive applications today tend to rely exclusively on one workload location. In the future, they may use some combination of edge computing with onboard or cloud processing that delivers higher performance. For instance, smart traffic management systems may improve onboard decision making by augmenting the vehicle's sensor data with external data (for example, other vehicles' telemetry data, real-time traffic monitoring, maps, and camera images). Data could be stored in multiple locations and then fused by the traffic management software. The final safety-related decision would be made onboard the vehicle. Ultimately, large amounts of real-time and non-real-time data may need to be managed across vehicles, the edge infrastructure, and the cloud to enable advanced use cases. In consequence, data exchanges between the edge and the cloud must be seamless.

The evolving automotive value chain will open many new opportunities for those within the industry and for external technology players. The total value created by connected-car use cases could reach more than $550 billion by 2030, up from about $64 billion in 2020 (Exhibit 5).

Exhibit 5

Increased connectivity opens up opportunities for players across the automotive value chain to improve their operations and customer services. Take predictive maintenance in cars as an example. Aftermarket maintenance and repair today predominantly follow a fixed-interval maintenance schedule or reactive maintenance and repair. There is little visibility into the volume of vehicles that need to be serviced in a particular period, leading to inefficiencies in service scheduling, replacement-parts ordering, and inventory, among other areas. Predictive maintenance using remote car diagnostics could improve the process by giving OEMs and dealers an opportunity to initiate and manage the maintenance process.

The pace of rollout of advanced connected-car use cases is highly contingent on the availability of 5G and edge computing. A variety of factors are converging to accelerate this. Demand is rising for these critical enablers, fueled by a proliferation of consumer and industry use cases. In the short term, value may be generated through enhancements to services already available with 4G, including navigation and routing, smart parking, centralized and adaptive traffic control, and monitoring of drivers, passengers, or packages.

We expect that greater 5G and edge availability may expand the list of viable use cases (technically and financially), boosting edge value exponentially. Looking to 2030, about 30 percent of our value estimate may be enabled by 5G and edge (from 5 percent in 2020), largely consistent with our cross-sectoral report on advanced connectivity.
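As a back-of-the-envelope check, treating the report's rounded figures as exact, the implied growth in 5G/edge-enabled value works out to roughly fiftyfold:

```python
# Figures from the report (rounded; treated as exact for illustration)
total_2020_bn = 64      # ~$64 billion of connected-car value in 2020
total_2030_bn = 550     # >$550 billion projected for 2030
edge_share_2020 = 0.05  # ~5 percent enabled by 5G/edge in 2020
edge_share_2030 = 0.30  # ~30 percent projected for 2030

edge_2020_bn = total_2020_bn * edge_share_2020  # about $3.2 billion
edge_2030_bn = total_2030_bn * edge_share_2030  # about $165 billion
growth = edge_2030_bn / edge_2020_bn            # roughly 50x

print(f"${edge_2020_bn:.1f}B -> ${edge_2030_bn:.1f}B (about {growth:.0f}x)")
```

In other words, almost all of the projected edge value is incremental, which is why the report treats 5G and edge availability as the gating factor for the advanced use cases.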

Value creation could be accelerated by traditional players moving into adjacencies and by new entrants from industries not traditionally in the automotive value chain, such as communications service providers (CSPs), hyperscalers, and software developers. Players such as Intel, Nvidia, and Taiwan Semiconductor Manufacturing Company are adding automotive-software capabilities, leading to greater synergies and vertical-integration benefits. In addition to accelerating value creation, new entrants may compete for a greater share of the total value.

Automotive-hardware value chains are expected to diverge based on the type of OEM. Traditional auto manufacturers, along with their value chains, are expected to see a continuation of well-established hardware development roles based on existing capabilities. Automobiles, components, devices, and chips for applications ranging from cars to the cloud may continue to be primarily manufactured by the companies that specialize in them. Nontraditional or up-and-coming automotive players could codevelop vehicle platforms with established car OEMs and use OEMs' services or contract manufacturers such as Magna Steyr for the traditional portions of the value chain.

Established players may seek to increase their share by expanding their core businesses, moving up the technology stack, or growing their value chain footprints. For instance, it is within the core business of semiconductor players to create advanced chipsets for automotive OEMs, but they could also capture additional value by providing onboard and edge software systems or by offering software-centric solutions to automotive OEMs. Similarly, to capture additional value, hyperscalers could create end-user services, such as infotainment apps for automotive OEMs or software platforms for contract manufacturers.

As players make strategic moves to improve their position in the market, we can expect two types of player ecosystems to form. In a closed ecosystem, membership is restricted and proprietary standards may be defined by a single player, as is the case with Volkswagen, or by a group of OEMs. Open ecosystems, which any company can join, generally espouse a democratized set of global standards and an evolution toward a common technology stack. In extreme examples, where common interfaces and a truly open standard exist, each player may stay in its lane and focus on its core competencies.

Hybrid ecosystems will also exist. Players following this model are expected to use a mix of open and closed elements on a system-by-system basis. For example, this might be applied to systems in which OEMs and suppliers of a value chain have particular expertise or core competency.

Exhibit 6 describes the advantages and disadvantages of each ecosystem model.

Exhibit 6

Companies in the emerging connected-car value chain develop offerings for five domains: roads and physical infrastructure, vehicles, network, edge, and cloud. For each domain, companies can provide software services, software platforms, or hardware (Exhibit 7).

Exhibit 7

As automotive connectivity advances, we expect a decoupling of hardware and software. This means that hardware and software can develop independently, each on its own timeline and life cycle. This trend may encourage OEMs and suppliers to define technology standards jointly and could hasten innovation cycles and time to market. Large multinational semiconductor companies have shown that development time can be reduced by up to 40 percent through decoupling and parallelization of hardware and software development. Furthermore, the target architecture that supports this decoupling features a strong middleware layer, providing another opportunity for value creation in the semiconductor sector. This middleware layer will likely be composed of at least two interlinked domain operating systems that handle the decoupling for their respective domains. Decoupling hardware and software, which is a key aspect of innovation in automotive, tilts the ability to differentiate offerings heavily in favor of software.

New opportunities. In the software layer, companies could obtain value in several different ways. With open ecosystems, participants will have broadly adopted interoperability standards with relatively common interfaces. In such cases, companies may remain within their traditional domains. For instance, semiconductor players may focus on producing chipsets for specific customers across the domains and stack layers, OEMs concentrate on car systems, and CSPs specialize in the connectivity layer and perhaps edge infrastructure. Similarly, hyperscalers may capture value in cloud/edge services.

In closed ecosystems, by contrast, companies may define proprietary standards and interfaces to ensure high levels of interoperability with the technologies of their members. For example, OEMs in a closed ecosystem may develop analytics, visualization capabilities, and edge or cloud applications exclusively for their own use, in addition to creating software services and platforms for vehicles. Sources of differentiation for vehicles could include infotainment features with plug-and-play capabilities, autonomous capabilities such as sensor fusion algorithms, and safety features.

While software is a key enabler for innovation, it introduces vulnerabilities that can have costly implications for OEMs, making cybersecurity a priority (see sidebar, The importance of cybersecurity, for more information). Combined, the 5G and edge infrastructure could potentially offer increased flexibility to manage security events related to prevention and response.

Hardware players could leverage their expertise to offer advanced software platforms and services. Nvidia, for instance, has entered the market for advanced driver-assistance systems (ADAS) and is complementing its system-on-a-chip AI design capabilities with a vast range of software offerings that cover the whole automated-driving stack, from OS and middleware to perception and trajectory planning.

Some companies are also moving into different stack layers. Take Huawei, which has traditionally been a network equipment provider, a producer of consumer-grade electrical and electronic (E&E) equipment, and a manufacturer of infrastructure for the edge and cloud. Currently, the company is targeting various vehicle stack layers, including base vehicle operating systems, E&E hardware, automotive-specific E&E, and software and EV platforms. In the future, Huawei may develop vehicles, monitoring sensors, human-machine interfaces, application layers, and software services and platforms for the edge and cloud domains.

Greater automotive connectivity will present semiconductor players and other companies along the automotive value chain with numerous opportunities. In all segments, they may benefit from becoming solution providers, rather than keeping a narrower focus on software, hardware, or other components. As they move ahead and attempt to capture value, companies may benefit from reexamining elements of their core strategy, including their capabilities and product portfolio.

The automotive semiconductor market is one of the most promising subsegments of the global semiconductor industry, along with the Internet of Things and data centers. Semiconductor companies that transform themselves from hardware players to solution providers may find it easier to differentiate their business from the competition's. For instance, they might win customers by developing application software optimized for their system architecture. Semiconductor companies could also find emerging opportunities in the orchestration layer, which may allow them to balance workloads between onboard, cloud, and edge computing.

As semiconductor companies review their current product offerings, they may find that they can expand their software presence and produce more purpose-specific chipssuch as microcontrollers for advanced driver-assistance, smart cockpit, and power-control systemsat scale by leveraging their experience in the automotive industry and in edge and cloud computing. Beyond software, semiconductor companies might find multiple opportunities, including those related to more advanced nodes with higher computing power and chipsets with higher efficiency.

To improve their capabilities related to purpose-specific chips, semiconductor players would benefit from a better understanding of the needs of OEMs and consumers, as well as the new requirements for specialized silicon. Semiconductor companies can capitalize on their edge and cloud capabilities by building strategic partnerships with hyperscalers and edge players that have a strong focus on automotive use cases.

Tier 1 suppliers could consider concentrating on capabilities that may allow them to become tier 0.5 system integrators with higher stack control points. In another big shift, they could leverage existing capabilities and assets to develop operating systems, ADAS, autonomous driving, and human-machine-interface software for new cars.

To produce the emerging offerings in the automotive-computing ecosystem, tier 1 players might consider recruiting full-stack employees who see the bigger picture and can design products better tuned to end-user expectations. They might also want to think about focusing on low-cost countries and high-volume growth markets with price-differentiated, customized, or lower-specification offerings that have already been tested in high-cost economies.

OEMs could take advantage of 5G and edge disruption by orienting business and partnership models toward as-a-service solutions. They could also leverage their existing assets and capabilities to build closed- or open-ecosystem applications, or focus on high-quality contract manufacturing. Key OEM high growth offerings could include as-a-service models pertaining to mobility, shared mobility, and batteries. OEMs, when seeking partnerships with other new and existing value chain players, need to keep two major things in mind: filling talent and capability gaps (for instance, in chip development) and effectively managing diverse portfolios.

CSPs must keep network investments in lockstep with developments in the automotive value chain to ensure sufficient 5G/edge service availability. To this end, they may need to form partnerships with automotive OEMs or hyperscalers that are entering the space. For best results, CSPs will ensure that their core connectivity assets can meet vehicle-to-everything (V2X) use case requirements and create a road map to support highly autonomous driving. Connectivity alone represents a small part of the overall value to CSPs, however, and companies will benefit from expanding their product portfolios to include edge-based infrastructure-as-a-service and platform-as-a-service. Evolving beyond the traditional connectivity core may necessitate organizational structures and operating models that support more agile working environments.

Hyperscalers could gain ground by moving quickly to partner with various value chain players to test and verify priority use cases across domains. They could also form partnerships with industry players to drive automotive-specific standards in their core cloud and emerging edge segments. To determine their full range of potential opportunities, as well as the most attractive ones, hyperscalers should first analyze their existing assets and capabilities, such as their existing cloud infrastructure and services. They would also benefit from aligning their cloud and edge product portfolios and from extending cloud-availability zones to cover leading locations for V2X use case rollouts and real-world testing. If hyperscalers want to increase the footprint of their cloud and edge offerings within the automotive value chain, they could consider a range of partnerships, such as those with OEMs to test and verify use cases.

The benefits of 5G and edge computing are real and fast approaching, but no single player can go it alone. There are opportunities already at scale today that are not clearly addressed in the technological road map of many automotive companies, and not everybody is capturing them.

Building partnerships and ecosystems for bringing a connected car to market and capturing value is crucial, and some semiconductor companies are already forging strong relationships with OEMs and others along the value chain. The ACES trends in the automotive industry are moving fast; semiconductor companies must move quickly to identify opportunities and refine their existing strategies. These efforts will not only help their bottom lines but could also allow tier 1s and OEMs to shorten the time to market for their products and services, which would accelerate the adoption of smart vehicles, and that benefits everyone.

See the rest here:

The future of automotive computing: Cloud and edge - McKinsey
