Solving the data conundrum with HPC and AI – ITProPortal

Posted: December 15, 2021 at 9:58 am

Supercomputing has come a long way since its beginnings in the 1960s. Initially, many supercomputers were based on mainframes, but their cost and complexity were significant barriers to entry for many institutions. The idea of using multiple low-cost PCs over a network to provide a cost-effective form of parallel computing led research institutions down the path of high-performance computing (HPC) clusters, starting with Beowulf clusters in the 1990s.

Beowulf clusters are very much the predecessors of today's HPC clusters. The fundamentals of the Beowulf architecture are still relevant to modern-day HPC deployments; however, the multiple desktop PCs have been replaced with purpose-built, high-density server platforms. Networking has improved significantly with high-bandwidth, low-latency InfiniBand (or, as a nod to the past, increasingly Ethernet), and high-performance parallel filesystems such as Spectrum Scale, Lustre and BeeGFS have been developed to allow the storage to keep up with the compute. The development of excellent, often open-source, tools for managing high-performance distributed computing has also made adoption a lot easier.

More recently, we have witnessed the advancement of HPC from the original CPU-based clusters to systems that do the bulk of their processing on Graphics Processing Units (GPUs), resulting in the growth of GPU-accelerated computing.

While HPC was scaling up with more compute resources, data was growing at a far faster pace. Since the start of the 2010s, there has been an explosion in unstructured data from sources such as web chats, cameras, sensors and video communications, presenting big data challenges for storage, processing and transfer. Newer technology paradigms such as big data, parallel computing, cloud computing, the Internet of Things (IoT) and artificial intelligence (AI) came into the mainstream to cope with the problems caused by this data onslaught.

What these paradigms all have in common is that they can be parallelized to a high degree. HPC's GPU-based parallel computing has been a real game-changer for AI, because it can process all this data in a short amount of time. As workloads have grown, so too have GPU parallel computing and machine learning. Image analysis is a good example of how the power of GPU computing can support an AI project: an imaging deep learning model that would take 72 hours to process on a single GPU takes only around 20 minutes on an HPC cluster with 64 GPUs.
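To make the scaling idea concrete, below is a minimal sketch of data-parallel training of the kind such a speed-up relies on. It assumes PyTorch and its DistributedDataParallel wrapper (the article does not name a framework); the model, dataset and hyperparameters are placeholders, and the script would be launched with one process per GPU using a tool such as torchrun.

```python
# Hypothetical sketch: data-parallel training with one process per GPU.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    dist.init_process_group(backend="nccl")          # join the multi-GPU job
    local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda()         # placeholder for a real imaging model
    model = DDP(model, device_ids=[local_rank])

    # Placeholder dataset; a DistributedSampler gives each GPU its own shard.
    data = TensorDataset(torch.randn(4096, 1024), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(3):
        sampler.set_epoch(epoch)                     # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            opt.zero_grad()
            loss_fn(model(x), y).backward()          # gradients are averaged across all GPUs
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process trains on its own shard of the data and gradients are synchronized every step, which is what allows adding GPUs to cut the wall-clock training time.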

Beowulf is still relevant to AI workloads. Storage, networking and processing all matter when making AI projects work at scale, and this is where AI can make use of the large-scale, parallel environments that GPU-equipped HPC infrastructure provides to process workloads quickly. Training an AI model takes far more time than testing one. The importance of coupling AI with HPC is that it significantly speeds up the training stage and boosts the accuracy and reliability of AI models, whilst keeping the training time to a minimum.

The right software is needed to support the HPC and AI combination. There are traditional products and applications already being used to run AI workloads from within HPC environments, as many share the same requirements for aggregating large pools of resources and managing them. However, everything from the underlying hardware to the schedulers used, the Message Passing Interface (MPI) and even how software is packaged is shifting towards more flexible models, and a rise in hybrid environments is a trend that we expect to continue.
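Since the paragraph mentions MPI, here is a minimal sketch of the kind of collective communication it provides, assuming the mpi4py bindings (one of several possible interfaces); the per-rank value is an arbitrary placeholder.

```python
# Hypothetical sketch: an MPI all-reduce, the same collective pattern used to
# combine per-node partial results in HPC codes and to average gradients in
# distributed training. Launch with, e.g., "mpirun -n 4 python this_script.py".
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id within the job
size = comm.Get_size()      # total number of processes in the job

local_value = rank + 1      # placeholder for a per-process partial result
total = comm.allreduce(local_value, op=MPI.SUM)   # every rank receives the sum

print(f"rank {rank} of {size}: global sum = {total}")
```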

As traditional use cases for HPC applications are so well established, changes often happen relatively slowly; updates for many HPC applications are only necessary every six to twelve months. AI development, on the other hand, is happening so fast that updates and new applications, tools and libraries are released almost daily.

If you employed the same update strategy to manage your AI as you do for your HPC platforms, you would get left behind. That is why a solution like NVIDIA's DGX containerized platform allows you to quickly and easily keep up to date with rapid developments through NVIDIA GPU Cloud (NGC), an online catalog of AI and HPC tools packaged in easy-to-consume containers.

It is becoming standard practice within the HPC community to use containerized platforms to manage software environments, which is particularly beneficial for AI deployment. Containerization has accelerated support for AI workloads on HPC clusters.

AI models can also be used to predict the outcome of a simulation without having to run the full, resource-intensive simulation. By using an AI model in this way, the input variables or design points of interest can be narrowed down to a candidate list quickly and at much lower cost. These candidates can then be run through the full simulation to verify the AI model's predictions.
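As an illustration of this surrogate-screening loop, here is a minimal sketch assuming a scikit-learn Gaussian-process surrogate; run_simulation() and the two-dimensional design space are hypothetical stand-ins for a real, expensive HPC simulation.

```python
# Hypothetical sketch: train a cheap surrogate on a few simulation runs,
# screen many candidate design points, verify only the best few.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_simulation(x):
    """Stand-in for the expensive HPC simulation (hypothetical)."""
    return float(np.sin(x[0]) * np.cos(x[1]) + 0.1 * x[0])

rng = np.random.default_rng(0)

# 1. Run the expensive simulation on a small set of design points.
X_train = rng.uniform(-3, 3, size=(30, 2))
y_train = np.array([run_simulation(x) for x in X_train])

# 2. Fit a cheap surrogate model to those results.
surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# 3. Score a large pool of candidate design points with the surrogate.
X_candidates = rng.uniform(-3, 3, size=(10_000, 2))
predicted = surrogate.predict(X_candidates)

# 4. Verify only the most promising candidates with the real simulation
#    (here assuming a larger predicted outcome is better).
shortlist = X_candidates[np.argsort(predicted)[-5:]]
verified = [(x, run_simulation(x)) for x in shortlist]
print(verified)
```

Only the handful of shortlisted points go back to the expensive simulation, which is where the cost saving described above comes from.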

Quantum molecular simulation (QMS), chip design and drug discovery are areas where this technique is increasingly being applied. IBM recently launched a product that does exactly this, known as IBM Bayesian Optimization Accelerator (BOA).

When deciding whether an AI project needs HPC, start with a few simple questions: How big is my problem? How fast do I want my results back? How much data do I have to process? How many users are sharing the resource?

HPC techniques will help the management of an AI project if the existing dataset is substantial, or if contention issues arise from multiple users sharing the infrastructure. If you are at the point where even packing four GPUs into a workstation is becoming a bottleneck, it is worth consulting an HPC integrator with experience in scaling up infrastructure for these types of workloads.

Some organizations are already running AI workloads on a large machine, or on multiple machines with GPUs, and their AI infrastructure may look more like HPC infrastructure than they realize. There are HPC techniques, software and other practices that can really help to manage that infrastructure. The hardware looks quite similar, but there are some clever ways of installing and managing it that are specifically geared towards AI modeling.

Storage is very often overlooked when organizations build infrastructure for AI workloads, and you may not be getting the full ROI on your AI infrastructure if your compute sits idle waiting on data from storage. It is important to seek the best advice for sizing and deploying the right storage solution for your cluster.

Big data doesn't necessarily need to be that big; data becomes "big" when it reaches the point of being unmanageable for an organization. When you can't get out of it what you want, it has become too big for you. HPC can provide the compute power to deal with the large amounts of data in AI workloads.

It is an exciting time for both HPC and AI, as each technology is incrementally adapting to the other. The challenges are getting bigger every day, with newer and more distinct problems that need faster solutions: countering cyber-attacks, discovering new vaccines, detecting enemy missiles and so on.

It will be interesting to see what happens next in terms of fully containerized environments on HPC clusters, built on technologies such as Singularity and Kubernetes.

Schedulers today initiate jobs and wait until they finish, which may not be an ideal scenario for AI environments. Newer schedulers monitor real-time performance and execute jobs based on priority and runtime, and will increasingly work alongside containerization technologies and environments such as Kubernetes to orchestrate the resources needed.

Storage will become increasingly important in supporting large deployments, as vast volumes of data need to be stored, classified, labeled, cleansed and moved around quickly. Flash storage and fast networking become vital to your project, alongside storage software that can scale with demand.

Both HPC and AI will continue to have an impact on organizations and on each other, and their symbiotic relationship will only grow stronger as traditional HPC users and AI infrastructure modelers realize each other's full potential.

Vibin Vijay, AI Product Specialist, OCF
