{"id":1120351,"date":"2023-12-25T06:33:09","date_gmt":"2023-12-25T11:33:09","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/year-in-review-2023-was-a-turning-point-for-microservices-the-new-stack\/"},"modified":"2023-12-25T06:33:09","modified_gmt":"2023-12-25T11:33:09","slug":"year-in-review-2023-was-a-turning-point-for-microservices-the-new-stack","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/cloud-computing\/year-in-review-2023-was-a-turning-point-for-microservices-the-new-stack\/","title":{"rendered":"Year-in-Review: 2023 Was a Turning Point for Microservices &#8211; The New Stack"},"content":{"rendered":"<p><p>    Maybe we are doing microservices all wrong?  <\/p>\n<p>    This was the main thesis of Towards Modern Development of Cloud Applications (PDF), a paper from a bunch of Googlers (led by Google software engineer Michael Whittaker) that was presented in June at HOTOS 23: Proceedings of the 19th Workshop on Hot Topics in Operating Systems.  <\/p>\n<p>    The problem, as Whittaker et al. pointed out, was that microservices largely have not been set up correctly, architecturally speaking. They conflate logical boundaries (how code is written) with physical boundaries (how code is deployed). And this is where the issues start.  <\/p>\n<p>    Instead, the Google engineers suggested another approach: build applications as logical monoliths, but hand them off to automated runtimes that decide where to run workloads, based on what the applications need and what is available.  <\/p>\n<p>    With this approach, they were able to lower latency by up to 15x and cost by up to 9x.  <\/p>\n<p>    \"If people would just start with organized modular code, we can make the deployment architecture an implementation detail,\" Kelsey Hightower commented on this work in October.  
<\/p>\n<p>    A few months earlier, the engineering team at Amazon Prime Video posted a blog post explaining that, at least in the case of video monitoring, a monolithic architecture produced superior performance to a microservices- and serverless-led approach.  <\/p>\n<p>    In fact, Amazon saved 90% in operational costs by moving off a microservices architecture.  <\/p>\n<p>    For a generation of engineers and architects raised on the superiority of microservices, the assertion is shocking indeed.  <\/p>\n<p>    \"This post is an absolute embarrassment for Amazon as a company. Complete inability to build internal alignment or coordinated communications,\" wrote analyst Donnie Berkholz, who recently started his own industry-analyst firm, Platify.  <\/p>\n<p>    \"What makes this story unique is that Amazon was the original poster child for service-oriented architectures,\" weighed in Ruby-on-Rails creator and Basecamp co-founder David Heinemeier Hansson. \"Now the real-world results of all this theory are finally in, and it's clear that in practice, microservices pose perhaps the biggest siren song for needlessly complicating your system. And serverless only makes it worse.\"  <\/p>\n<p>      The original Amazon video delivery system.    <\/p>\n<p>    The task of Amazon engineers was to monitor the thousands of video streams that Prime delivered to customers. Originally this work was done by a set of distributed components orchestrated by AWS Step Functions, a serverless orchestration service, and AWS Lambda, a serverless compute service.  <\/p>\n<p>    In theory, the use of serverless would allow the team to scale each service independently. It turned out, however, that at least for how the team implemented the components, they hit a hard scaling limit at only 5% of the expected load. 
The costs of scaling up to monitor thousands of video streams would also be unduly high, due to the need to send data across multiple components.  <\/p>\n<p>    Initially, the team tried to optimize individual components, but this did not bring about significant improvements. So, the team moved all the components into a single process, hosting them on Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Container Service (Amazon ECS).  <\/p>\n<p>    Microservices and serverless components are tools that do work at high scale, but the choice of whether to use them over a monolith has to be made on a case-by-case basis, the Amazon team concluded.  <\/p>\n<p>    Arguably, the term microservices was coined by Peter Rodgers in 2005, though he called it micro web services. He gave a name to an idea that many were already thinking about, especially as web services and service-oriented architecture (SOA) were gaining traction at the time.  <\/p>\n<p>    \"The main driver behind micro web services at the time was to break up single large monolithic designs into multiple independent components\/processes, thereby making the codebase more granular and manageable,\" explained software engineer Amanda Bennett in a blog post.  <\/p>\n<p>    The concept took hold, especially with cloud native computing, over the years that followed, and has only recently started receiving criticism in some quarters.  <\/p>\n<p>      Software engineer Alexander Kainz contributed to TNS a great comparison of monoliths and microservices.    <\/p>\n<p>    In their paper, the Google engineers list a number of shortcomings of the microservices approach.  <\/p>\n<p>    When The New Stack first covered the Amazon news, many quickly pointed out to us that the architecture the video folks used was not exactly a monolithic architecture either.  
<\/p>\n<\/p>\n<p>    \"This definitely isn't a microservices-to-monolith story,\" remarked Adrian Cockcroft, the former vice president of cloud architecture strategy at AWS, now an advisor for Nubank, in an interview with The New Stack. \"It's a Step Functions-to-microservices story. And I think one of the problems is the wrong labeling.\"  <\/p>\n<p>    He pointed out that in many applications, especially internal ones, the cost of development exceeds the runtime costs. In these cases, Step Functions make a lot of sense to save dev time, but can become costly for heavy workloads.  <\/p>\n<p>    \"If you know you're going to eventually do it at some scale,\" said Cockcroft, \"you may build it differently in the first place. So the question is, do you know how to do the thing, and do you know the scale you're going to run it at?\"  <\/p>\n<p>    The Google paper tackles this issue by making life easier for the developer while letting the runtime infrastructure figure out the most cost-effective way to run these applications.  <\/p>\n<p>    \"By delegating all execution responsibilities to the runtime, our solution is able to provide the same benefits as microservices but with much higher performance and reduced costs,\" the Google researchers wrote.  <\/p>\n<p>    This year has seen a lot of basic architectural reconsideration, and microservices are not the only ideal being questioned.  <\/p>\n<p>    Cloud computing, for instance, has also come under scrutiny.  <\/p>\n<p>    In June, 37signals, which runs both Basecamp and the Hey email application, procured a fleet of Dell servers and left the cloud, bucking a decade-long tradition of moving operations off-prem for vaguely defined greater efficiencies.  
<\/p>\n<p>    \"This is the central deceit of the cloud marketing, that it's all going to be so much easier that you hardly need anyone to operate it,\" David Heinemeier Hansson explained in a blog post. \"I've never seen it. Not at 37signals, not from anyone else running large internet applications. The cloud has some advantages, but it's typically not in a reduced operations headcount.\"  <\/p>\n<p>    Of course, DHH is a race car driver, so naturally he wants to dig into the bare metal. But there are others willing to back this bet. Later this year, Oxide Computers launched their new systems, hoping to serve others with a similar sentiment: running cloud computing workloads, but more cost-effectively, in their own data centers.  <\/p>\n<p>    And this sentiment seems to be getting at least more consideration now that the cloud bills are coming due. FinOps became a noticeable thing in 2023, as more organizations turned to companies like KubeCost to control their cloud spend. And how many people were taken aback by the news that a Datadog customer received a $65 million bill for cloud monitoring?  <\/p>\n<p>    Arguably, a $65 million observability bill might be worth it for an outfit that generates billions in revenue. But as chief architects take a harder look at engineering decisions made in the last decade, they may decide to make a few adjustments. And microservices will not be an exception.  <\/p>\n<p>    TNS cloud native correspondent Scott M. Fulton III contributed to this report.  <\/p>\n<p>Read this article:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/thenewstack.io\/year-in-review-was-2023-a-turning-point-for-microservices\/\" title=\"Year-in-Review: 2023 Was a Turning Point for Microservices - The New Stack\">Year-in-Review: 2023 Was a Turning Point for Microservices - The New Stack<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Maybe we are doing microservices all wrong? This was the main thesis of Towards Modern Development of Cloud Applications (PDF), a paper from a bunch of Googlers (led by Google software engineer Michael Whittaker) that was presented in June at HOTOS 23: Proceedings of the 19th Workshop on Hot Topics in Operating Systems <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/cloud-computing\/year-in-review-2023-was-a-turning-point-for-microservices-the-new-stack\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[257743],"tags":[],"class_list":["post-1120351","post","type-post","status-publish","format-standard","hentry","category-cloud-computing"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1120351"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=112035
1"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1120351\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1120351"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1120351"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1120351"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}