{"id":1120301,"date":"2023-12-22T19:55:03","date_gmt":"2023-12-23T00:55:03","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/ibm-demonstrates-useful-quantum-computing-within-133-qubit-heron-announces-entry-into-quantum-centric-toms-hardware\/"},"modified":"2023-12-22T19:55:03","modified_gmt":"2023-12-23T00:55:03","slug":"ibm-demonstrates-useful-quantum-computing-within-133-qubit-heron-announces-entry-into-quantum-centric-toms-hardware","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/quantum-computing\/ibm-demonstrates-useful-quantum-computing-within-133-qubit-heron-announces-entry-into-quantum-centric-toms-hardware\/","title":{"rendered":"IBM demonstrates useful Quantum computing within 133-qubit Heron, announces entry into Quantum-centric &#8230; &#8211; Tom&#8217;s Hardware"},"content":{"rendered":"<p><p>    At its Quantum Summit 2023, IBM took the stage    with an interesting spirit: one of almost awe at having things    go their way. But the quantum of today  the one thats    changing IBMs roadmap so deeply on the back of breakthrough    upon breakthrough  was hard enough to consolidate. As IBM sees    it, the future of     quantum computing will hardly be more permissive. IBM    announced cutting-edge devices at the event, including the    133-qubit Heron Quantum Processing Unit (QPU), which is the    company's first utility-scale quantum processor, and the    self-contained Quantum System Two, a quantum-specific    supercomputing architecture. And further improvements to the    cutting-edge devices are ultimately required.  
<\/p>\n<p>    Each breakthrough that afterward becomes obsolete is another accelerating bump against what we might call quantum's \"plateau of understanding.\" We've already crested this plateau with semiconductors, so much so that the latest CPUs and GPUs are reaching practical, fundamental design limits where quantum effects start ruining our math. Conquering the plateau means that utility and understanding are now sufficient for research and development to be somewhat self-sustaining - at least for a Moore's-law-esque while.  <\/p>\n<p>    IBM's Quantum Summit serves as a bookend of sorts for the company's cultural and operational execution, and its 2023 edition showcased an energized company that feels like it's opening the doors towards a \"quantum-centric supercomputing era.\" That vision is built on the company's new Quantum Processing Unit, Heron, which showcases scalable quantum utility at a 133-qubit count and already offers things beyond what any feasible classical system could ever do. Breakthroughs and a revised understanding of its own roadmap have led IBM to present its quantum vision in two different roadmaps, prioritizing scalability in tandem with useful, minimum-quality products rather than monolithic, hard-to-validate, high-complexity ones.  <\/p>\n<p>    IBM's announced new plateau for quantum computing packs in two particular breakthroughs that occurred in 2023. One breakthrough relates to a groundbreaking noise-reduction algorithm (Zero Noise Extrapolation, or ZNE), which we covered back in July - basically a technique through which you can compensate for noise. For instance, if you know a pitcher tends to throw more to the left, you can compensate for that up to a point. There will always be a moment where you correct too much or cede ground to other disruptions (such as the opponent exploiting the overexposed right side of the court). 
This is where the concept of qubit quality comes into play - the higher the quality of your qubits, the more predictable both their results and their disruptions, the better you know their operational constraints, and the more useful work you can extract from them.  <\/p>\n<p>    The other breakthrough relates to an algorithmic improvement of epic proportions, first pushed to arXiv on August 15th, 2023. Titled \"High-threshold and low-overhead fault-tolerant quantum memory,\" the paper showcases algorithmic ways to reduce qubit needs for certain quantum calculations by a factor of ten. When what used to cost 1,000 qubits and a complex logic gate architecture sees a tenfold cost reduction, it's likely you'd prefer to end up with 133-qubit-sized chips - chips that crush problems previously meant for 1,000-qubit machines.  <\/p>\n<p>    Enter IBM's Heron Quantum Processing Unit (QPU) and the era of useful, quantum-centric supercomputing.  <\/p>\n<\/p>\n<p>    The two-part breakthroughs of error correction (through the ZNE technique) and algorithmic performance (alongside qubit gate architecture improvements) allow IBM to now consider reaching 1 billion operationally useful quantum gates by 2033. It's an amazing coincidence (one born of research effort and human ingenuity) that we only need to keep 133 qubits relatively happy within their own environment to extract useful quantum computing from them - computing that we wouldn't be able to get classically anywhere else.  <\/p>\n<p>    The Development and Innovation roadmaps showcase how IBM is thinking about its superconducting qubits: as we've already learned to do with semiconductors, mapping out the hardware-level improvements alongside the scalability-level ones. 
Because as we've seen through our supercomputing efforts, there's no such thing as a truly monolithic approach: every piece of supercomputing is (necessarily) efficiently distributed across thousands of individual accelerators. Your CPU performs better by knitting and orchestrating several different cores, registers, and execution units. Even Cerebras' Wafer Scale Engine scales further beyond its wafer-level computing unit. No accelerator so far - no unit of computation - has proven powerful enough that we don't need to unlock more of its power by increasing its area or computing density. Our brains and learning ability seem to provide the only known exception.  <\/p>\n<p>    IBM's modular approach, and its focus on introducing more robust intra-QPU and inter-QPU communication for this year's Heron, shows it's aware of the tightrope it's walking between quality and scalability. The thousands of hardware and scientist hours spent developing the tunable couplers - one of Heron's signature design elements, allowing parallel execution across different QPUs - are another sign. Pushing one lever harder means other systems have to be able to keep up; IBM also plans on steadily improving its internal and external coupling technology (already developed with scalability in mind for Heron) throughout further iterations, such as Flamingo's four planned versions, which still only scale up to 156 qubits per QPU.  <\/p>\n<p>    Considering how you're solving scalability problems and the qubit quality x density x ease-of-testing equation, the ticks - the density increases that don't sacrifice quality and are feasible from a testing and productization standpoint - may be harder to unlock. But if one side of development is scalability, the other relates to the quality of whatever you're actually scaling - in this case, IBM's superconducting qubits themselves. 
Heron itself saw a substantial rearrangement of its internal qubit architecture to improve gate design, accessibility, and quantum processing volumes - not unlike an Intel tock. The planned iterative improvements to Flamingo's design seem to confirm this.  <\/p>\n<p>    There's a sweet spot for the quantum computing algorithms of today: it seems that algorithms that fit roughly within a 60-gate depth are complex enough to allow for useful quantum computing. Perhaps thinking about Intel's NetBurst architecture with its Pentium 4 CPUs is appropriate here: too deep an instruction pipeline becomes counterproductive after a point. Branch mispredictions are terrible across computing, be it classical or quantum. And quantum computing - as we still have it in our Noisy Intermediate-Scale Quantum (NISQ) era - is vulnerable to a more varied disturbance field than semiconductors (there are world overclocking records where we chill our processors to sub-zero temperatures and pump them with above-standard volts, after all). But perhaps that comparable quantum vulnerability is understandable, given how we're essentially manipulating the essential units of existence - atoms and even subatomic particles - into becoming useful to us.  <\/p>\n<p>    Useful quantum computing doesn't simply correlate with an increasing number of available in-package qubits (witness announcements of 1,000-qubit products based on neutral-atom technology, for instance). But useful quantum computing is always stretched thin throughout its limits, and if it isn't bumping against one fundamental limit (qubit count), it's bumping against another (instability at higher qubit counts), or contending with issues of entanglement coherence and longevity, entanglement distance and capability, correctness of the results, and still other elements. 
Some of these scalability issues can be visualized within the same framework of efficient data transit between different distributed computing units, such as cores in a given CPU architecture - issues that can themselves be solved in a number of ways, such as hardware-based information processing and routing techniques (AMD's Infinity Fabric comes to mind, as does Nvidia's NVLink).  <\/p>\n<p>    The fact that quantum computing is already useful at the 133-qubit scale is also part of the reason why IBM keeps prioritizing quantum computing challenges around useful algorithms occupying a 100-by-100 grid. That quantum is already useful beyond classical, even in gate grids comparatively small next to what we can achieve with transistors, points to the scale of the transition - to how different these two computational worlds are.  <\/p>\n<p>    Then there are also the matters of error mitigation and error correction, of extracting ground-truth-level answers to the questions we want our quantum computer to solve. There are also limitations in how we utilize quantum interference to collapse a quantum computation at just the right moment - the moment at which we know we will obtain the result we want, or at least something close enough to correct that we can then offset any noise (non-useful computational results, or the spread of values between the correct answer and the not-yet-culled wrong ones) through a clever, groundbreaking algorithm.  <\/p>\n<p>    The above are just some of the elements currently limiting how useful qubits can truly be and how those qubits can be manipulated into useful, algorithm-running computation units. This is usually referred to as a qubit's quality, and we can see how it both does and doesn't relate to the sheer number of qubits available. 
But since many useful computations can already be achieved with 133-qubit-wide Quantum Processing Units (there's a reason IBM settled on a mere 6-qubit increase from Eagle to Heron, and scales up only to 156 qubits with Flamingo), the company is setting out to keep this optimal qubit width through a number of years of continuous redesigns. IBM will focus on making correct results easier to extract from Heron-sized QPUs by increasing the coherence, stability, and accuracy of these 133 qubits while surmounting the arguably harder challenge of distributed, highly-parallel quantum computing. It's a one-two punch again, one that comes from the increased speed at which IBM is climbing ever-higher stretches of the quantum computing plateau.  <\/p>\n<\/p>\n<p>    But there is an admission that it's a barrier IBM still wants to punch through - it's much better to pair 200 units of a 156-qubit QPU (Flamingo) than 200 of a 127-qubit one such as Eagle, so long as efficiency and accuracy remain high. Oliver Dial says that Condor, \"the 1,000-qubit product\", is locally running - up to a point. It was meant to be the thousand-qubit processor, and was as much a part of the roadmap for this year's Quantum Summit as the actual focus, Heron - but it's ultimately not a direction the company thinks is currently feasible.  <\/p>\n<p>    IBM did manage to yield all 1,000 Josephson junctions within its experimental Condor chip - the thousand-qubit halo product that will never see the light of day as a product. It's running within the labs, and IBM can show that Condor yielded computationally useful qubits. One issue is that at that qubit count, testing such a device becomes immensely expensive and time-consuming. 
At a basic level, it's harder and more costly to guarantee the quality of a thousand qubits and their increasingly complex possibility field of interactions and interconnections than to assure the same requirements in a 133-qubit Heron. Even IBM only means to test around a quarter of the in-lab Condor QPU's area, confirming that the qubit connections are working.  <\/p>\n<p>    But Heron? Heron is made for quick verification that it's working to spec - that it's providing accurate results, or at least computationally useful results that can then be corrected through ZNE and other techniques. That means you can get useful work out of it already, while it is also a much better time-to-market product in virtually all areas that matter. Heron is what IBM considers the basic unit of quantum computation - good enough and stable enough to outpace classical systems in specific workloads. But that is quantum computing, and that is its niche.  <\/p>\n<\/p>\n<p>    Heron is IBM's entrance into the mass-access era of Quantum Processing Units. Next year's Flamingo builds further into the inter-QPU coupling architecture so that further parallelization can be achieved. The idea is to scale at a base, post-classical utility level and maintain that as a minimum quality baseline. Only at that point will IBM scale density and unlock the corresponding jump in computing capability - when it can be achieved in a similarly productive way, with scalability that preserves quantum usefulness.  <\/p>\n<p>    There's simply never been the need to churn out hundreds of QPUs yet - the utility wasn't there. The Canaries, Falcons, and Eagles of IBM's past roadmap were never meant to usher in an age of scaled manufacturing. They were prototypes, scientific instruments, explorations; proofs of concept on the road towards useful quantum computing. 
We didn't know where usefulness would start to appear. But now we do - because we've reached it.  <\/p>\n<p>    Heron is the design IBM feels best answers that newly-created need for a quantum computing chip that is actually at the forefront of human computing capability - one that can offer what no classical computing system can (in some specific areas). One that can slice through specific-but-deeper layers of our Universe. That's what IBM means when it calls this new stage the quantum-centric supercomputing one.  <\/p>\n<p>    Classical systems will never cease to be necessary: both in themselves and in the way they structure our current reality, systems, and society. They also function as a layer that allows quantum computing itself to happen, be it by carrying and storing its intermediate results or by knitting together the final informational state - mapping out the correct answer quantum computing provides, one quality step at a time. The quantum-centric bit merely refers to how quantum computing will be the core contributor to developments in fields such as materials science, more advanced physics, chemistry, superconductivity, and basically every domain where our classical systems were already presenting a duller and duller edge with which to improve our understanding of their limits.  <\/p>\n<\/p>\n<p>    However, with IBM's approach and its choice of transmon superconducting qubits, a certain difficulty lies in commercializing local installations. Quantum System Two, as the company is naming its new almost-wholesale quantum computing system, has been shown working with different QPU installations (both Heron and Eagle). 
When asked whether scaling Quantum System Two and similar self-contained products would be a bottleneck for technological adoption, IBM's CTO Oliver Dial said that it was definitely a difficult problem to solve, but that he was confident in the company's ability to reduce costs and complexity further in time, considering how successful IBM had already proven in that regard. For now, it's easier for IBM's quantum usefulness to be unlocked at a distance - through the cloud and its quantum computing framework, Qiskit - than it is to achieve it by running local installations.  <\/p>\n<p>    Qiskit is the preferred medium through which users can actually deploy IBM's quantum computing products in research efforts - just like you could rent X Nvidia A100s of processing power through Amazon Web Services or even a simple Xbox Series X console through Microsoft's xCloud service. On the day of IBM's Quantum Summit, that freedom also meant access to the useful quantum circuits within IBM-deployed Heron QPUs. And it's much easier to scale access by keeping the QPUs at home and serving them through the cloud than by delivering a box of supercooled transmon qubits ready to be plugged in and played with.  <\/p>\n<\/p>\n<p>    That's one devil in IBM's superconducting-qubit approach - not many players have the will, funding, or expertise to put a supercooled chamber into local operation and build the required infrastructure around it. These are complex mechanisms housing kilometers of wiring - another focus of IBM's development and tinkering, culminating in last year's flexible ribbon solution, which drastically simplified connections to and from QPUs.  <\/p>\n<p>    Quantum computing is a uniquely complex problem, and democratized access to hundreds or thousands of mass-produced Herons in IBM's refrigerator-laden fields will ultimately only require, well, a stable internet connection. 
Logistics are what they are, and IBM's Quantum Summit also took the necessary steps to address some needs within its Qiskit Runtime platform by introducing its official 1.0 version. Food for thought: the era of useful quantum computing seems to coincide with the beginning of the era of quantum computing as a service. That was fast.  <\/p>\n<p>    The era of useful, mass-producible, mass-access quantum computing is what IBM is promising. But now there's the matter of scale. And there's the matter of how cost-effective it is to install a Quantum System Two or Five or Ten compared to another qubit approach - be it topological approaches to quantum computing, oxygen-vacancy-based qubits, ion traps, or others that are an entire architecture away from IBM's approach, such as fluxonium qubits. It's likely that a number of qubit technologies will still make it into the mass-production stage - and even then, we can rest assured that all along the road of human ingenuity lie failed experiments, like Intel's recently-axed Itanium or AMD's out-of-time approach to x86 computing in Bulldozer.  <\/p>\n<p>    It's hard to see where the future of quantum takes us, and it's hard to say whether it looks exactly like IBM's roadmap - the same roadmap whose running changes we also discussed here. Yet all roadmaps are a permanently-drying painting, both for IBM itself and the technology space at large. Breakthroughs seem to be happening daily on each side of the fence, and it's a fact of science that the earlier we are in asking questions, the more potential lies ahead. 
The promising qubit technologies of today will have to answer hard questions on performance, usefulness, ease and cost of manipulation, quality, and scalability - in ways that now need to be at least as good as what IBM is proposing with its transmon-based superconducting qubits, its Herons, its scalable Flamingos, and its (still unproven, but hinted-at) ability to eventually mass-produce useful numbers of useful Quantum Processing Units such as Heron. All of that even as we remain in this noisy, intermediate-scale quantum (NISQ) era.  <\/p>\n<p>    It's no wonder that Oliver Dial looked and talked so energetically during our interview: IBM has already achieved quantum usefulness and has started to answer the two most important questions - quality and scalability - through its Development and Innovation roadmaps. And it did so through the collaboration of an incredible team of scientists, delivering results years earlier than expected, Dial happily conceded. In 2023, IBM unlocked useful quantum computing within a 127-qubit Quantum Processing Unit, Eagle, and walked through the process of perfecting it into the revamped Heron chip. That's an incredible feat in and of itself, and it's what allows us to even discuss issues of scalability at this point. It's the reason a roadmap has to shift to accommodate it - and in this quantum computing world, that's a great follow-up problem to have.  <\/p>\n<p>    Perhaps the best question now is: how many things can we improve with a useful Heron QPU? How many locked doors have sprung ajar?  
<\/p>\n<\/p>\n<p>Visit link:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/quantum-computing\/ibm-demonstrates-useful-quantum-computing-within-133-qubit-heron-announces-entry-into-quantum-centric-supercomputing-era\" title=\"IBM demonstrates useful Quantum computing within 133-qubit Heron, announces entry into Quantum-centric ... - Tom's Hardware\">IBM demonstrates useful Quantum computing within 133-qubit Heron, announces entry into Quantum-centric ... - Tom's Hardware<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> At its Quantum Summit 2023, IBM took the stage with an interesting spirit: one of almost awe at having things go their way.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/quantum-computing\/ibm-demonstrates-useful-quantum-computing-within-133-qubit-heron-announces-entry-into-quantum-centric-toms-hardware\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[257742],"tags":[],"class_list":["post-1120301","post","type-post","status-publish","format-standard","hentry","category-quantum-computing"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1120301"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2
\/comments?post=1120301"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1120301\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1120301"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1120301"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1120301"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}