Can public clouds fix the developer experience in the HPC domain? – Forbes

Posted: February 2, 2021 at 7:20 pm


Countless startup fortunes have been made over the past decade by new technologies aimed at the software developer experience, in the wake of Marc Andreessen's observation that "software is eating the world."

From publicly traded Atlassian ($54B market cap), to notable acquisitions like Microsoft's $7.5B all-stock purchase of GitHub, to a slew of private unicorns whose technologies optimize how software is developed and managed, the idea that developers are the lifeblood of product innovation and revenue growth in every industry has become a universal truth, and a very lucrative technology category.

There are so many cultish new mantras for the "right way" to do software that have crept into mainstream business jargon that it's hard to keep up, from "agile" to "move fast and break things," to developer processes like continuous integration/continuous deployment (CI/CD) and DevOps.

Anything fast-growth companies can do to attract, hire and accelerate the productivity of developers has become a universally accepted guiding principle of doing business in the Internet era, with no signs of slowing down.

But while this developer experience has been a key focus in the mainstream business world, somehow, in the research and science domain, engineers today are still largely mired in an old-world slog when it comes to accessing their computational resources. They literally line up to get their turn to run computing jobs on their specialized high-performance computer clusters. These are expensive PhD headcounts. Waiting in line.

In this world where supercomputers and massive Linux clusters (aka high-performance computing, or HPC) are the norm for everything from quantum physics, to nuclear fission, to propulsion, to aerodynamics, these science engineers sit on hold to run algorithmically complex simulations while mainstream developers press a button in Amazon Web Services to deploy a new server instantaneously in the cloud.

All of the mega cloud service providers (AWS, Azure, Google Cloud) are licking their chops to capture this HPC market, which Intersect360 Research expects to reach $55 billion by 2024. Only an estimated 20 percent of HPC workloads presently run on the cloud, while more than 85 percent of companies overall will eventually have most workloads running on the cloud. Just not yet. That's a huge lag in cloud adoption for HPC.

Right in the middle of the action of capturing this HPC cloud market, and raising the developer experience of its engineers, is a San Francisco-based company called Rescale, which recently closed a $50 million Series C funding round. Rescale brings specialized HPC hardware and software to the cloud in pre-configured templates.

Rescale co-founder and CEO Joris Poort

Its co-founders, Joris Poort and Adam McKenzie, are former aerospace engineers at Boeing who designed complex physics simulations for wing design on the Boeing 787. Their experiences led them to realize just how broken the computing model was in the digital R&D domain. Necessity being the mother of invention, they went on to start Rescale based on the realization that eventually most HPC workloads would run not on hardware bought and maintained in private data centers, but increasingly on public cloud infrastructure, the same way mainstream developers work in the enterprise.

The Rescale platform is the scientific community's first cloud platform optimized for algorithmically complex workloads, including simulation and artificial intelligence, with integrations for more than 600 of the world's most popular HPC software applications and more than 80 specialized hardware architectures.

Rescale's Web-interface dashboard.

Rescale allows any science engineer to run any workload, on any major public cloud, including AWS, Google Cloud, IBM, Microsoft Azure, Oracle and more.

The company wants to bring the same developer ergonomics to digital R&D that their counterparts have enjoyed in the enterprise for nearly a decade. So far, Rescale has attracted more than 300 customers, including Boom Supersonic, Nissan, and other large users whose massive computational requirements drive their simulations and product designs.

"To truly empower R&D teams advancing the state of the art in science, HPC workloads should run not only in private data centers, but also on public cloud infrastructure that offers elastic compute," said Nagraj Kashyap, Global Head of M12, Microsoft's venture fund. "Rescale's customers get that, ultimately speeding up simulation and design cycles by orders of magnitude."

Among the investors in Rescale ($100 million in total funding) are Samsung and NVIDIA. It's not just the cloud service providers that are licking their chops at this lucrative HPC industry. There are untold billions to be made in the sale of specialized hardware architectures that power the artificial intelligence-driven simulations behind so much product discovery in the science domain, where products aren't physically created until they have been digitally represented and tested against every possible variable.
