{"id":1028791,"date":"2024-07-11T02:46:53","date_gmt":"2024-07-11T06:46:53","guid":{"rendered":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/this-enormous-computer-chip-beat-the-worlds-top-supercomputer-at-molecular-modeling-singularity-hub.php"},"modified":"2024-07-11T02:46:53","modified_gmt":"2024-07-11T06:46:53","slug":"this-enormous-computer-chip-beat-the-worlds-top-supercomputer-at-molecular-modeling-singularity-hub","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/singularity\/this-enormous-computer-chip-beat-the-worlds-top-supercomputer-at-molecular-modeling-singularity-hub.php","title":{"rendered":"This Enormous Computer Chip Beat the World&#8217;s Top Supercomputer at Molecular Modeling &#8211; Singularity Hub"},"content":{"rendered":"<p>Computer chips are a hot commodity. Nvidia is now one of the most valuable companies in the world, and the Taiwanese manufacturer of Nvidia's chips, TSMC, has been called a geopolitical force. It should come as no surprise, then, that a growing number of hardware startups and established companies are looking to take a jewel or two from the crown.<\/p>\n<p>Of these, Cerebras is one of the weirdest. The company makes computer chips the size of tortillas, bristling with just under a million processors, each linked to its own local memory. The processors are small but lightning quick, as they don't shuttle information to and from shared memory located far away. And the connections between processors, which in most supercomputers require linking separate chips across room-sized machines, are quick too.<\/p>\n<p>This means the chips are stellar for specific tasks. Recent preprint studies in two of these (one simulating molecules, the other training and running large language models) show the wafer-scale advantage can be formidable. 
The chips outperformed Frontier, the world's top supercomputer, in the former. They also showed a stripped-down AI model could use a third of the usual energy without sacrificing performance.<\/p>\n<p>The materials we make things with are crucial drivers of technology. They usher in new possibilities by breaking old limits in strength or heat resistance. Take fusion power. If researchers can make it work, the technology promises to be a new, clean source of energy. But liberating that energy requires materials that can withstand extreme conditions.<\/p>\n<p>Scientists use supercomputers to model how the metals lining fusion reactors might deal with the heat. These simulations zoom in on individual atoms and use the laws of physics to guide their motions and interactions at grand scales. Today's supercomputers can model materials containing billions or even trillions of atoms with high precision.<\/p>\n<p>But while the scale and quality of these simulations have progressed a lot over the years, their speed has stalled. Due to the way supercomputers are designed, they can only model so many interactions per second, and making the machines bigger only compounds the problem. This means the total length of molecular simulations has a hard practical limit.<\/p>\n<p>Cerebras partnered with Sandia, Lawrence Livermore, and Los Alamos National Laboratories to see if a wafer-scale chip could speed things up.<\/p>\n<p>The team assigned a single simulated atom to each processor. So they could quickly exchange information about their position, motion, and energy, the processors modeling atoms that would be physically close in the real world were neighbors on the chip too. Depending on their properties at any given time, atoms could hop between processors as they moved about. 
<\/p>\n<p>The team modeled 800,000 atoms in three materials (copper, tungsten, and tantalum) that might be useful in fusion reactors. The results were pretty stunning, with simulations of tantalum yielding a 179-fold speedup over the Frontier supercomputer. That means the chip could crunch a year's worth of work on a supercomputer into a few days and significantly extend the length of simulations from microseconds to milliseconds. It was also vastly more efficient at the task.<\/p>\n<p>\"I have been working in atomistic simulation of materials for more than 20 years. During that time, I have participated in massive improvements in both the size and accuracy of the simulations. However, despite all this, we have been unable to increase the actual simulation rate. The wall-clock time required to run simulations has barely budged in the last 15 years,\" Aidan Thompson of Sandia National Laboratories said in a statement. \"With the Cerebras Wafer-Scale Engine, we can all of a sudden drive at hypersonic speeds.\"<\/p>\n<p>Although the chip increases modeling speed, it can't compete on scale. The number of simulated atoms is limited to the number of processors on the chip. Next steps include assigning multiple atoms to each processor and using new wafer-scale supercomputers that link 64 Cerebras systems together. The team estimates these machines could model as many as 40 million tantalum atoms at speeds similar to those in the study.<\/p>\n<p>While simulating the physical world could be a core competency for wafer-scale chips, they've always been focused on artificial intelligence. The latest AI models have grown exponentially, meaning the energy and cost of training and running them has exploded. Wafer-scale chips may be able to make AI more efficient. 
<\/p>\n<p>In a separate study, researchers from Neural Magic and Cerebras worked to shrink the size of Meta's 7-billion-parameter Llama language model. To do this, they made what's called a sparse AI model, where many of the algorithm's parameters are set to zero. In theory, this means they can be skipped, making the algorithm smaller, faster, and more efficient. But today's leading AI chips, called graphics processing units (GPUs), read algorithms in chunks, meaning they can't skip every zeroed-out parameter.<\/p>\n<p>Because memory is distributed across a wafer-scale chip, it can read every parameter and skip zeroes wherever they occur. Even so, extremely sparse models don't usually perform as well as dense models. But here, the team found a way to recover lost performance with a little extra training. Their model maintained performance even with 70 percent of the parameters zeroed out. Running on a Cerebras chip, it sipped a meager 30 percent of the energy and ran in a third of the time of the full-sized model.<\/p>\n<p>While all this is impressive, Cerebras is still niche. Nvidia's more conventional chips remain firmly in control of the market. At least for now, that appears unlikely to change. Companies have invested heavily in expertise and infrastructure built around Nvidia.<\/p>\n<p>But wafer-scale may continue to prove itself in niche, but still crucial, applications in research. And the approach may become more common overall. The ability to make wafer-scale chips is only now being perfected. In a hint at what's to come for the field as a whole, the biggest chipmaker in the world, TSMC, recently said it's building out its wafer-scale capabilities. This could make the chips more common and capable.<\/p>\n<p>For their part, the team behind the molecular modeling work say wafer-scale's influence could be more dramatic. 
Like GPUs before them, adding wafer-scale chips to the supercomputing mix could yield some formidable machines in the future.<\/p>\n<p>\"Future work will focus on extending the strong-scaling efficiency demonstrated here to facility-level deployments, potentially leading to an even greater paradigm shift in the Top500 supercomputer list than that introduced by the GPU revolution,\" the team wrote in their paper.<\/p>\n<p>Image Credit: Cerebras<\/p>\n<p>See the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/singularityhub.com\/2024\/07\/09\/this-enormous-computer-chip-beat-the-worlds-top-supercomputer-at-molecular-modeling\/\" title=\"This Enormous Computer Chip Beat the Worlds Top Supercomputer at Molecular Modeling - Singularity Hub\">This Enormous Computer Chip Beat the Worlds Top Supercomputer at Molecular Modeling - Singularity Hub<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Computer chips are a hot commodity. Nvidia is now one of the most valuable companies in the world, and the Taiwanese manufacturer of Nvidia's chips, TSMC, has been called a geopolitical force. 
It should come as no surprise, then, that a growing number of hardware startups and established companies are looking to take a jewel or two from the crown <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/singularity\/this-enormous-computer-chip-beat-the-worlds-top-supercomputer-at-molecular-modeling-singularity-hub.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[431648],"tags":[],"class_list":["post-1028791","post","type-post","status-publish","format-standard","hentry","category-singularity"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1028791"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1028791"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1028791\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=1028791"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1028791"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-bl
og\/wp-json\/wp\/v2\/tags?post=1028791"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}