
Category Archives: Quantum Computing

Wall Street's top analysts say these are their favorite stocks right now – CNBC

Posted: August 23, 2022 at 12:10 am

Uncertainty was a key theme in the past week as the summer rally seemed to run out of steam.

As tempting as it is to follow the day-to-day movements of the market, investors would be better served to think long term and pick their stocks accordingly.

Here are five stocks chosen by Wall Street's top pros, according to TipRanks, a platform that ranks analysts based on their performance.

Computer technology firm IonQ (IONQ) has progressed significantly through the second quarter of this year, according to a recent research report from Needham analyst Quinn Bolton.

Important contracts, reinforced full-year guidance, and other key developments marked the second quarter. (See IonQ Earnings Date & Reports on TipRanks). Earlier this year, IonQ also launched its 32-qubit quantum computer, Aria.

Bolton notes that the company's strong balance sheet "should enable them to reach broad quantum advantage and become a positive cash flow generator without having to raise additional capital." Given the current market conditions and high cost of borrowing, this is good news.

The analyst also believes that the 32-qubit Aria will help IonQ achieve consistent system scaling and revenue bookings. Encouraged by the competitive edge the company's trapped-ion approach to quantum computing provides, Bolton believes that IonQ stands to benefit from the quantum industry's increasing popularity and the growing investments being made to boost it.

Bolton reiterated a buy rating on IonQ with a price target of $9.

Bolton holds the No. 1 position among more than 8,000 analysts tracked on TipRanks. He has a 73% success rate on his ratings, generating an average return of 45.2%.

Cyxtera (CYXT) is a provider of data center colocation and interconnection services for service providers, enterprises and government institutions. The company, like most of its peers in the tech sector, has been suffering from a challenging macro environment.

Moreover, in its recent second-quarter report, Cyxtera lowered its full-year 2022 guidance after factoring in foreign exchange headwinds, macroeconomic setbacks, delays in the implementation of its new Northern California data center and unfavorable timing for certain cost recoveries. (See Cyxtera Blogger Opinions & Sentiment on TipRanks).

However, RBC Capital analyst Jonathan Atkin pointed out a few drivers of the company's growth, which suggest that CYXT stock can be a compelling buy for the longer term.

The most important secular growth driver, according to Atkin, is the growing demand for data and connectivity as new technology and associated applications start rolling out. Additionally, the analyst also mentioned "rapid growth in IT outsourcing, data usage, and cloud and hybrid growth as enterprises realize digital transformation goals" as other positive factors.

Although current market conditions and operational environment prompted Atkin to decrease his price target to $14 from $16, he reiterated a buy rating on Cyxtera.

Atkin is currently at No. 11 among approximately 8,000 analysts tracked on the platform. Moreover, 78% of his ratings have been profitable, garnering 15.8% returns per rating on average.

Next on our list is the largest microchip manufacturer in the U.S., GlobalFoundries (GFS). The company recently beat its second-quarter goals, amid concerns of a demand slowdown in consumer-exposed end markets like smartphones and PCs.

Reiterating a buy rating, Deutsche Bank analyst Ross Seymore explained that its increasing long-term agreement pipeline, focus on expanding its single-source business, growth in profitable unit volume, and meaningfully lower capital risk are expected to lift investor confidence in the stock. (See GlobalFoundries Stock Investors sentiments on TipRanks).

The analyst also raised the price target to $65 from $60 after attending the Analyst Day event held by GlobalFoundries following the Q2 print. Seymore was encouraged by "the company's ability to weather a macro/sector-specific slowdown while delivering continued increases in profitability driven by ASP growth, new single-sourced DWINs, and disciplined cost & OpEx management."

Seymore's track record gives us a solid reason to trust his research and opinion. At No. 4 among more than 8,000 analysts followed on TipRanks, the analyst has a success rate of 80% on his ratings, generating average returns of 25.9%.

Retail chain Walmart's (WMT) recently released quarterly results reflected the resilience that consumers showed amid precarious market conditions. Not only that, operational improvements, continuous scaling of alternative income streams, and an innovative growth strategy are helping Walmart stay afloat.

Following the print, Baird analyst Peter Benedict reinforced a buy rating on the WMT stock and kept the price target at $140. (See Walmart Hedge Fund Trading Activity on TipRanks).

Benedict notes that Walmart's progress in optimizing inventory is a positive. "Looking ahead, additional pricing actions planned for 3Q should help WMT further right-size inventory levels/mix across 2H," the analyst wrote.

Moreover, Benedict also acknowledged the current leadership's efforts to keep Walmart ahead of others in the constantly evolving retail landscape. "CEO Doug McMillon's bold strategy to reshape WMT into a more nimble, fully integrated omni-channel retailer has generated real momentum across the business at a time when many traditional retailers are losing relevancy with consumers," the analyst said.

Benedict holds the No. 77 position among around 8,000 analysts tracked on the platform. Moreover, his ratings have been successful 71% of the time, generating average returns of 16.1%.

Continuing our focus on the retail sector, leading home improvement chain Home Depot (HD) is another company that is on the buy list of Peter Benedict. The company also delivered upbeat second-quarter results alongside its peer Walmart.

Benedict believes that management's unchanged outlook for the second half of this year suggests the company expects some protection from any significant shift in price-driven demand through the rest of the year. (See Home Depot Stock Chart, Price History & Graphs on TipRanks).

The analyst is also confident that the company's strategic investments will bear fruit. "While HD has been realizing benefits from several of its strategic investments (front-end redesign/in-store navigation, merchandising resets, online assortment expansion, faster fulfillment options), momentum should continue to build as HD leverages its ecosystem of capabilities to deliver a seamless (and more personalized) shopping experience," said Benedict.

Reiterating a buy rating on Home Depot and raising the price target to $360 from $335, Benedict anticipates that the strategic investments made by the company last year will bolster its leadership position in the market and lead to share gains.


India now home to 3K deeptech startups that raised $2.7bn in 2021 – Punjab News Express

Posted: at 12:10 am

NEW DELHI: India is witnessing a boom in deeptech startups in niche areas like cybersecurity, quantum computing, AI and semiconductors, and the country is now home to more than 3,000 such startups, which raised $2.7 billion in 2021, a 1.6-times increase over 2020, a new Nasscom report said on Monday.

The country added over 210 deeptech startups in 2021 alone, with Bengaluru and Delhi-NCR leading the way, according to the Nasscom-Zinnov report.

"The Indian deeptech ecosystem has also fortified the job creation with over 4, 000 people being employed across 14 potential deeptech unicorns and is expected to increase by 2X in headcount by 2026, " said Debjani Ghosh, President, Nasscom.

The country is now home to 500 inventive deeptech startups, creating a workforce across technologies such as drones, robotics, 3D printing and AI, with the potential to develop new intellectual properties backed by scientific advances and fundamental research.

The deeptech ecosystem has grown at a staggering CAGR of 53 per cent over the last 10 years, on par with Indian tech startups overall.

Nearly 70-75 per cent of deeptech startups have at least 15 per cent of their workforce skilled in deep technologies, the report mentioned.

"Strategic partnership with the government, academia, global investors, streamlined corporate collaboration and dedicated test-bed programmes can create a massive impact on India's deeptech story, " said Ghosh.

Although in a nascent stage compared to the US, Europe, Israel and China, the Indian deeptech ecosystem is expanding fast.

The industry is witnessing more start-ups emerging to solve global mega challenges of clean tech, zero hunger, smart cities and climate actions, the report said.

In 2021, over 270 unique startups raised $2.7 billion across 319 deals, with AI and big data and analytics being the top technologies raising equity investments.

Seed-stage startups witnessed a 2.3-times growth in equity investments in 2021 compared to 2020, raising a total of $186 million.

Among verticals, supply chain management (SCM) and logistics were the most funded sectors in 2021, with deeptech startups raising funding across use cases like drone delivery, autonomous delivery bots, cold chain monitoring and fleet management, the report noted.


No One Gets Quantum Computing, Least Of All America’s National Institute of Standards and Technology – PC Perspective

Posted: August 6, 2022 at 7:47 pm

The only good news about America's National Institute of Standards and Technology's new Supersingular Isogeny Key Encapsulation (SIKE), designed to be unbreakable by a quantum computer, is that it was subjected to extra testing before it became one of their four new quantum encryption algorithms. As it turns out, two Belgians named Wouter Castryck and Thomas Decru were able to break the Microsoft SIKE in under five minutes using an Intel Xeon CPU E5-2630v2 at 2.60GHz.

Indeed, they did it with a single core, which makes sense for security researchers well aware of the risks of running multithreaded; though why they stuck with a 22nm Ivy Bridge processor almost 10 years old is certainly a question. What makes even less sense is that encryption designed to resist quantum computing could be cracked by a traditional piece of silicon before the heat death of the universe.

This particular piece of quantum encryption has four parameter sets, called SIKEp434, SIKEp503, SIKEp610 and SIKEp751. The $50,000 bounty winners were able to crack the SIKEp434 parameter set in about 62 minutes. They cracked two related instances, $IKEp182 and $IKEp217, in about 4 minutes and 6 minutes respectively. Three other post-quantum encryption standards were proposed along with this one, so there is some hope that they will remain useful, for now at least.

If you would like to read more about quantum computing and encryption, as well as Richelot isogenies and abelian surfaces, read on at The Register.


One of the biggest names in quantum computing could have just cracked open the multibillion-dollar market with a new breakthrough – Fortune

Posted: at 7:47 pm

Quantinuum, the quantum computing company spun out from Honeywell, said this week that it had made a breakthrough in the technology that should help accelerate commercial adoption of quantum computers.

It has to do with real-time correction of errors.

One of the biggest issues with using quantum computers for any practical purpose is that the circuits in a quantum computer are highly susceptible to all kinds of electromagnetic interference, which causes errors in its calculations. These calculation errors must be corrected, either by using software, often after a calculation has run, or by using other physical parts of the quantum circuitry to check for and correct the errors in real time. So far, while scientists have theorized ways for doing this kind of real-time error correction, few of the methods had been demonstrated in practice on a real quantum computer.

The theoretically game-changing potential of quantum computers stems from their ability to harness the strange properties of quantum mechanics. These machines may also speed up the time it takes to run some calculations that can be done today on supercomputers, but which take hours or days. In order to achieve those results, though, ironing out the calculation errors is of utmost importance. In 2019, Google demonstrated that a quantum computer could perform one esoteric calculation in 200 seconds that it estimated would have taken a traditional supercomputer more than 10,000 years to compute. In the future, scientists think quantum computers will help make the production of fertilizer much more efficient and sustainable as well as create new kinds of space-age materials.

That's why it could be such a big deal that Quantinuum just said it has demonstrated two methods for doing real-time error correction of the calculations a quantum computer runs.

Tony Uttley, Quantinuum's chief operations officer, says the error-correction demonstration is an important proof point that the company is on track to deliver a quantum advantage for some real-world commercial applications in the next 18 to 24 months. That means businesses will be able to run some calculations, possibly for financial risk or logistics routing, significantly faster, and perhaps with better results, by using quantum computers for at least part of the calculation than they could by just using standard computer hardware. "This lends tremendous credibility to our road map," Uttley said.

There's a lot of money in Quantinuum's road map. This past February, the firm's majority shareholder, Honeywell, foresaw revenue in Quantinuum's future of $2 billion by 2026. That future could have just drawn nearer.

Uttley says that today there is a wide disparity in the amount of money different companies, even direct competitors in the same industry, are investing in quantum computing expertise and pilot projects. The reason, he says, is that there are widely varying beliefs about how soon quantum computers will be able to run key business processes faster or better than existing methods on standard computers. Some people think it will happen in the next two years. Others think these nascent machines will only start to realize their business potential a decade from now. Uttley says he hopes this week's error-correction breakthrough will help tip more of Quantinuum's potential customers into the two-year camp.

A $2 billion market opportunity

Honeywell's projection of at least $2 billion in revenue from quantum computing by 2026 was a revision, a year earlier than it had previously forecast. The error-correction breakthrough ought to give Honeywell more confidence in that projection. Quantinuum is one of the most prominent players in the emerging quantum computer industry, with Honeywell having made a bold and so far successful bet on one particular way of creating a quantum computer. That method is based on using powerful electromagnets to trap and manipulate ions. Others, such as IBM, Google, and Rigetti Computing, have created quantum computers using superconducting materials. Microsoft has been trying to create a variation of this superconducting-based quantum computer but using a slightly different technology that would be less prone to errors. Still others are creating quantum computers using lasers and photons. And some companies, such as Intel, have been working on quantum computers where the circuits are built using more conventional semiconductors.

The ability to perform real-time error correction could be a big advantage for Quantinuum and its trapped-ion-based quantum computers as it competes for a commercial edge over rival quantum computer companies. But Uttley points out that besides selling access to its own trapped-ion quantum computers through the cloud, Quantinuum also helps customers run algorithms on IBM's superconducting quantum computers. (IBM is also an investor in Quantinuum.)

Different kinds of algorithms and calculations may be better suited to one kind of quantum computer over another. Trapped ions tend to remain in a quantum state for relatively long periods of time, with the record being an hour. Superconducting circuits, on the other hand, tend to stay in a quantum state for a millisecond or less. But this also means that it takes much longer for a trapped-ion quantum computer to run a calculation than for a superconducting one, Uttley says. He envisions a future of hybrid computing where different parts of an algorithm are run on different machines in the cloud: partly on a traditional computer, partly on a trapped-ion quantum computer, and partly on a superconducting quantum computer.

In a standard computer, information is represented in binary form, either a 0 or a 1, called a bit. Quantum computers use the principles of quantum mechanics to form their circuits, with each unit of the circuit called a qubit. Qubits can represent both 0 and 1 simultaneously. This means that each additional qubit involved in performing calculations doubles the power of a quantum computer. This doubling of power for every additional qubit is one reason that quantum computers will, in theory, be far more powerful than even today's largest supercomputers. But this is only true if the issue of error correction can be successfully tackled and if scientists can figure out how to successfully link enough qubits together to exceed the power of existing standard high-performance computing clusters.
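
To make that doubling concrete, here is a minimal Python sketch (our illustration, not from the article) of how much classical memory a full simulation of an n-qubit state would need, assuming one 16-byte complex amplitude per basis state:

```python
# A minimal sketch illustrating why each added qubit doubles the state space:
# an n-qubit register is described by 2**n complex amplitudes, so simulating
# it classically requires exponentially more memory.

def statevector_size(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory (in bytes) to store a full n-qubit state vector.

    Assumes one complex128 amplitude (16 bytes) per basis state.
    """
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 20, 30, 40, 50):
    gib = statevector_size(n) / 2**30
    print(f"{n:>2} qubits -> {2**n:>20,} amplitudes ({gib:,.1f} GiB)")
```

By 50 qubits the state vector alone would occupy roughly 16 PiB of memory, which is why even modest qubit counts outstrip classical simulation.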

Quantinuum demonstrated two different error-correction methods: one called the five-qubit code and the other called the Steane code. Both methods use multiple physical qubits to represent one logical part of the circuit, with some of those qubits actually performing the calculation and the others checking and correcting errors in the calculation. As the name suggests, the five-qubit code uses five qubits, while the Steane code uses seven. Uttley says that Quantinuum found the Steane code worked significantly better than the five-qubit code.

That may mean it will become the dominant form of error correction, at least for trapped-ion quantum computers, going forward.
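
For readers curious what "multiple physical qubits representing one logical part of the circuit" looks like on paper, the sketch below (our illustration, not Quantinuum's implementation) writes down the Steane code's six stabilizer generators, whose supports come from the rows of the classical [7,4,3] Hamming parity-check matrix, and verifies that they mutually commute, the basic consistency requirement for measuring them as error checks:

```python
# A minimal sketch of the Steane [[7,1,3]] code's stabilizer generators,
# verified to mutually commute via the binary-symplectic rule: Paulis
# P = (x_P, z_P) and Q = (x_Q, z_Q) commute iff x_P.z_Q + z_P.x_Q is even.

from itertools import combinations

# Rows of the [7,4,3] Hamming parity-check matrix; the Steane code uses the
# same support for its X-type and Z-type stabilizer generators.
HAMMING_ROWS = [
    (0, 0, 0, 1, 1, 1, 1),
    (0, 1, 1, 0, 0, 1, 1),
    (1, 0, 1, 0, 1, 0, 1),
]

ZERO = (0,) * 7
# Each stabilizer is a pair (x_support, z_support) over 7 physical qubits.
stabilizers = [(row, ZERO) for row in HAMMING_ROWS] + \
              [(ZERO, row) for row in HAMMING_ROWS]

def commute(p, q) -> bool:
    """Symplectic inner product: even overlap means the Paulis commute."""
    (xp, zp), (xq, zq) = p, q
    overlap = sum(a & b for a, b in zip(xp, zq)) + \
              sum(a & b for a, b in zip(zp, xq))
    return overlap % 2 == 0

assert all(commute(p, q) for p, q in combinations(stabilizers, 2))
print("All 6 Steane stabilizer generators commute.")
```

Six independent checks on seven physical qubits leave 7 - 6 = 1 logical qubit, and measuring the checks reveals errors without disturbing the encoded information.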


D-Wave and DPCM Complete Their Business Combination – Quantum Computing Report

Posted: at 7:47 pm

The companies announced that their SPAC merger has been approved and that D-Wave will become a public company listed on the New York Stock Exchange (NYSE) under the ticker symbols QBTS for the common stock and QBTS WS for the warrants. Members of the company's management will ring the opening bell of the NYSE when trading starts on Monday, August 8. The transaction was first announced in February of this year, and a shareholder vote to approve it occurred earlier this week. Shareholders of DPCM Capital's Class A Common Stock had the right to redeem their shares for a pro rata portion of the funds in the company's trust account. The shareholders elected to redeem about 29 million of these shares out of the 37.5 million total, requiring a total payment of $291 million for the redemptions, so those funds will not be available to D-Wave for working capital. Additional information about the completion of this business combination is available in a press release that can be seen here, and also in the Form 8-K the companies have filed with the Securities and Exchange Commission (SEC) here.

August 5, 2022


Explosive growth of faculty, courses and research signal new era for Computer Science at Yale – Yale University

Posted: at 7:47 pm

With numerous new courses, new faculty members, and a wider range of research fields, Computer Science (CS) at Yale is better positioned than ever to take on emerging challenges, and to meet the needs of students, interdisciplinary research on campus, and industry.

The CS department has recently hired nine tenure track faculty members and four teaching track lecturers to its ranks. These hires are in addition to an earlier round of 11 new tenure track faculty members and two lecturers hired in the last few years. The boost in hiring accomplishes a number of long-term goals, including expanding the department's areas of expertise. Also, as Computer Science has emerged as the second-most popular major (just behind economics) at Yale, it will go a long way toward meeting students' curriculum needs.

"Our new faculty members were chosen for the excellence of their research, as well as for their fields that they represent, all of which have been in high demand by both our students and faculty on campus as well as the industry," said Zhong Shao, the Thomas L. Kempner Professor of Computer Science and department chair. "The range of their expertise addresses some of the most critical challenges that we face today."

SEAS Dean Jeffrey Brock said the new faculty will be critical to realizing the ambitious goals set out in SEAS' Strategic Vision, particularly in the areas of artificial intelligence and robotics, while building in key areas like cybersecurity and distributed computing.

"This exciting cohort of new faculty stands to transform our CS department," Brock said. "During our recruiting season, they sensed Yale's momentum in CS and in engineering, ultimately turning down excellent offers at other top schools to join our faculty. Their presence will allow Yale CS to expand their course offerings, as well as to establish critical mass in core and cutting-edge research areas."

Many of the new faculty members, like Fan Zhang, cited the department's "fast growth in recent years." Others said that they were drawn by the collaborative environment at Yale, especially considering that Yale is ranked at or near the top in numerous research areas. Daniel Rakita, for instance, said he's looking forward to working with the Yale Medical School to see how his lab's robotics research can assist in hospital or home care settings, as well as working with the Wu Tsai Institute on Brain-Machine Interface technologies.

"Many people I spoke with indicated that there are no boundaries between departments at Yale, and interdisciplinary research is not just encouraged here, but is a 'way of life,'" Rakita said. Many of the new faculty have already engaged with key academic leaders around the campus, from medicine, to economics, to quantum computing.

As part of this boost in hiring, the department strategically targeted certain research areas, including artificial intelligence, trustworthy computing, robotics, quantum computing, and modeling.

The nine new tenure-track faculty hires and their areas of research are below.

[We spoke to these new faculty members about their research, their motivations, potential collaborations, and much more. Click here to learn more about each of our latest faculty]

The four new teaching-track lecturer hires and their areas of research are:

This hiring season marks the first since the changes in structure that made SEAS more independent, granting more faculty lines for growth.

"Our independence and ability to be opportunistic were key elements in our ability to realize this transformational growth of Computer Science at Yale," Brock said. "As CS plays such a critical role in an increasingly broad range of disciplines, the size and breadth of CS is critical to our strategy for SEAS. I'm thrilled to be able to take the first step in realizing that vision for a SEAS that is well integrated within its host University and aligned with its mission."

SEAS became independent from the Faculty of Arts and Sciences in July of 2022.

A curriculum to meet the needs of students and industry

Expanding the department's curriculum has also been in the planning stages for a while, a goal made possible by the recent hires of new faculty and lecturers. Shao said there was a concerted effort to meet high demand in areas such as artificial intelligence, blockchain, machine learning, introductory programming, and CS courses for non-majors.

"This has been on the to-do list for the department for many years, but we just didn't have the manpower," Shao said. "And finally, with the new faculty hires, we can actually offer these courses."

Ben Fisch, for instance, will be teaching a new course on blockchains for both graduate students and advanced undergraduates in computer science. Tesca Fitzgerald will introduce a new graduate-level seminar on Interactive Robot Learning. And Katerina Sotiraki will teach classes in theoretical and applied cryptography, at both the undergraduate and graduate level. These are just a few of the new courses that will be available.

Responding to industry needs, the department has also added courses focused on what's known as full-stack web programming, that is, the set of skills needed to develop the interface as well as the coding behind a complete web application. One of the department's most popular courses, on software engineering, will now be offered in both semesters of the year instead of one. Both, Shao said, are specifically aimed at the needs of industry and students.

"As new challenges emerge, Computer Science at Yale will continue to adapt," Shao said. "We're excited about the future of our department, and these new additions to our faculty and our curriculum are going to be a major part of it."


CXL Brings Datacenter-sized Computing with 3.0 Standard, Thinks Ahead to 4.0 – HPCwire

Posted: at 7:47 pm

A new version of a standard backed by major cloud providers and chip companies could change the way some of the world's largest datacenters and fastest supercomputers are built.

The CXL Consortium on Tuesday announced a new specification called CXL 3.0, also known as Compute Express Link 3.0, that eliminates more chokepoints that slow down computation in enterprise computing and datacenters.

The new spec provides a communication link between chips, memory and storage in systems, and it is two times faster than its predecessor, CXL 2.0.

CXL 3.0 also has improvements for more fine-grained pooling and sharing of computing resources for applications such as artificial intelligence.

CXL 3.0 is all about improving bandwidth and capacity, and can better provision and manage computing, memory and storage resources, said Kurt Lender, the co-chair of the CXL marketing work group (and senior ecosystem manager at Intel), in an interview with HPCwire.

Hardware and cloud providers are coalescing around CXL, which has steamrolled other competing interconnects. This week, OpenCAPI, an IBM-backed interconnect standard, merged with the CXL Consortium, following in the footsteps of Gen-Z, which did the same in 2020.

The consortium released the first CXL 1.0 specification in 2019 and quickly followed it up with CXL 2.0, which supported PCIe 5.0, found in a handful of chips such as Intel's Sapphire Rapids and Nvidia's Hopper GPU.

The CXL 3.0 spec is based on PCIe 6.0, which was finalized in January. CXL 3.0 has a data transfer speed of up to 64 gigatransfers per second, the same as PCIe 6.0.
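
As a sanity check on what 64 GT/s buys, the arithmetic below (our back-of-the-envelope figure; it ignores FLIT framing and FEC overhead, so real throughput is slightly lower) converts the per-lane transfer rate into raw one-direction link bandwidth:

```python
# Rough raw-bandwidth arithmetic for a 64 GT/s link, assuming one bit per
# transfer per lane and ignoring protocol framing overhead.

def raw_bandwidth_gbytes(gigatransfers_per_s: float, lanes: int) -> float:
    """Raw one-direction bandwidth in GB/s: bits per second divided by 8."""
    return gigatransfers_per_s * lanes / 8

for lanes in (1, 4, 8, 16):
    print(f"x{lanes:<2}: {raw_bandwidth_gbytes(64, lanes):6.1f} GB/s per direction")
# x16: 128.0 GB/s per direction -- double the 32 GT/s rate of PCIe 5.0.
```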

The CXL interconnect can link up chips, storage and memory that are near and far from each other, and that allows system providers to build datacenters as one giant system, said Nathan Brookwood, principal analyst at Insight 64.

CXL's ability to support the expansion of memory, storage and processing in a disaggregated infrastructure gives the protocol a step up over rival standards, Brookwood said.

Datacenter infrastructures are moving to a decoupled structure to meet the growing processing and bandwidth needs for AI and graphics applications, which require large pools of memory and storage. AI and scientific computing systems also require processors beyond just CPUs, and organizations are installing AI boxes, and in some cases, quantum computers, for more horsepower.

CXL 3.0 improves bandwidth and capacity with better switching and fabric technologies, the CXL Consortium's Lender said.

"CXL 1.1 was sort of in the node, then with 2.0, you can expand a little bit more into the datacenter. And now you can actually go across racks, you can do decomposable or composable systems, with the fabric technology that we've brought with CXL 3.0," Lender said.

At the rack level, one can make CPU or memory drawers as separate systems, and improvements in CXL 3.0 provide more flexibility and options in switching resources compared to previous CXL specifications.

Typically, servers have a CPU, memory and I/O, and can be limited in physical expansion. In a disaggregated infrastructure, one can run a cable to a separate memory tray through the CXL protocol without relying on the popular DDR bus.

"You can decompose or compose your datacenter as you like it. You have the capability of moving resources from one node to another, and don't have to do as much overprovisioning as we do today, especially with memory," Lender said, adding, "it's a matter of you can grow systems and sort of interconnect them now through this fabric and through CXL."

The CXL 3.0 protocol uses the electricals of the PCI-Express 6.0 protocol, along with its protocols for I/O and memory. Some improvements include support for new processors and endpoints that can take advantage of the new bandwidth. CXL 2.0 had single-level switching, while 3.0 has multi-level switching, which enables larger fabrics at the cost of additional latency.

"You can actually start looking at memory like storage: you could have hot memory and cold memory, and so on. You can have different tiering, and applications can take advantage of that," Lender said.

The protocol also accounts for the ever-changing infrastructure of datacenters, providing more flexibility on how system administrators want to aggregate and disaggregate processing units, memory and storage. The new protocol opens more channels and resources for new types of chips that include SmartNICs, FPGAs and IPUs that may require access to more memory and storage resources in datacenters.

"HPC composable systems, you're not bound by a box. HPC loves clusters today. And [with CXL 3.0] now you can do coherent clusters and low latency. The growth and flexibility of those nodes is expanding rapidly," Lender said.

The CXL 3.0 protocol can support up to 4,096 nodes, and has a new concept of memory sharing between different nodes. That is an improvement from a static setup in older CXL protocols, where memory could be sliced and attached to different hosts, but could not be shared once allocated.

"Now we have sharing, where multiple hosts can actually share a segment of memory. Now you can actually look at quick, efficient data movement between hosts if necessary, or if you have an AI-type application that you want to hand data from one CPU or one host to another," Lender said.

The new feature allows peer-to-peer connection between nodes and endpoints in a single domain. That sets up a wall in which traffic can be isolated to move only between nodes connected to each other. That allows for faster accelerator-to-accelerator or device-to-device data transfer, which is key in building out a coherent system.

"If you think about some of the applications and then some of the GPUs and different accelerators, they want to pass information quickly, and now they have to go through the CPU. With CXL 3.0, they don't have to go through the CPU this way, but the CPU is coherent, aware of what's going on," Lender said.

The pooling and allocation of memory resources is managed by software called the Fabric Manager. The software can sit anywhere in the system or hosts to control and allocate memory, but it could ultimately impact software developers.

"If you get to the tiering level, and when you start getting all the different latencies in the switching, that's where there will have to be some application awareness and tuning of applications. I think we certainly have that capability today," Lender said.
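
To illustrate the kind of bookkeeping a fabric manager performs, here is a deliberately simplified toy model. The class and method names are hypothetical, invented for this sketch; the real Fabric Manager interfaces are defined by the CXL specification, not shown here:

```python
# A hypothetical toy model (names and API are ours, not the CXL spec's) of how
# a fabric manager might carve a shared memory pool into segments, assign them
# to hosts, and reclaim them for reassignment.

from dataclasses import dataclass, field

@dataclass
class MemoryPool:
    capacity_gib: int
    allocations: dict = field(default_factory=dict)  # host -> GiB granted

    def allocate(self, host: str, gib: int) -> bool:
        """Grant `gib` of pooled memory to `host` if capacity remains."""
        used = sum(self.allocations.values())
        if used + gib > self.capacity_gib:
            return False
        self.allocations[host] = self.allocations.get(host, 0) + gib
        return True

    def release(self, host: str) -> None:
        """Return a host's segment to the pool for reassignment."""
        self.allocations.pop(host, None)

pool = MemoryPool(capacity_gib=1024)
assert pool.allocate("node-a", 512)
assert pool.allocate("node-b", 384)
assert not pool.allocate("node-c", 256)   # would over-commit the pool
pool.release("node-a")                    # node-a's memory returns to the pool
assert pool.allocate("node-c", 256)       # reassignment, no overprovisioning
print(pool.allocations)                   # {'node-b': 384, 'node-c': 256}
```

The point of the sketch is the last three lines: capacity freed by one host can be reassigned to another, which is exactly the overprovisioning relief Lender describes.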

It could be two to four years before companies start releasing CXL 3.0 products, and CPUs will need to be aware of CXL 3.0, Lender said. Intel built support for CXL 1.1 into its Sapphire Rapids chip, which is expected to start shipping in volume later this year. The CXL 3.0 protocol is backward compatible with older versions of the interconnect standard.

CXL products based on earlier protocols are slowly trickling into the market. SK Hynix this week introduced its first DDR5 DRAM-based CXL memory samples and will start manufacturing CXL memory modules in volume next year. Samsung also introduced CXL DRAM earlier this year.

While products based on the CXL 1.1 and 2.0 protocols are on a two-to-three-year product release cycle, CXL 3.0 products could take a little longer, as the spec takes on a more complex computing environment.

"CXL 3.0 could actually be a little slower because of some of the Fabric Manager, the software work. They're not simple systems; when you start getting into fabrics, people are going to want to do proofs of concept and prove out the technology first. It's going to probably be a three-to-four-year timeframe," Lender said.

Some companies already started work on CXL 3.0 verification IP six to nine months ago and are fine-tuning the tools to the final specification, Lender said.

The CXL Consortium has a board meeting in October to discuss next steps, which could also involve CXL 4.0. The standards organization for PCIe, the PCI Special Interest Group, last month announced it was planning PCIe 7.0, which increases the data transfer speed to 128 gigatransfers per second, double that of PCIe 6.0.

Lender was cautious about how PCIe 7.0 could potentially fit into a next-generation CXL 4.0. CXL has its own set of I/O, memory and cache protocols.

"CXL sits on the electricals of PCIe, so I can't commit or absolutely guarantee that [CXL 4.0] will run on 7.0. But that's the intent, to use the electricals," Lender said.

In that case, one of the tenets of CXL 4.0 will be to double the bandwidth by going to PCIe 7.0; beyond that, everything else will be a matter of doing more fabric or different tunings, Lender said.

CXL has been on an accelerated pace, with three specification releases since its formation in 2019. There was confusion in the industry about the best high-speed, coherent I/O bus, but the focus has now coalesced around CXL.

"Now we have the fabric. There are pieces of Gen-Z and OpenCAPI that aren't even in CXL 3.0, so will we incorporate those? Sure, we'll look at doing that kind of work moving forward," Lender said.


IonQ to Participate in Third Annual North American Conference on Trapped Ions – HPCwire

Posted: August 2, 2022 at 2:44 pm

COLLEGE PARK, Md., Aug. 2, 2022 – IonQ, an industry leader in quantum computing, today announced its participation in the third annual North American Conference on Trapped Ions (NACTI). The event will take place at Duke University on August 1-4, 2022, and brings together dozens of the world's leading quantum scientists and researchers to discuss the latest advancements in the field of quantum computing.

Participating for the third time at this event, IonQ co-founder and CTO Jungsang Kim will speak on the latest IonQ Aria performance updates, IonQ Forte gate results, and the importance of industry-wide benchmarks based on a collection of real-world algorithms, such as algorithmic qubits (#AQ), that can better represent any quantum computer's performance and utility.

Other topics on the agenda for NACTI include: quantum scaling and architectures, including networking; fabrication and development of new traps; increasing accessibility; control hardware and software for trapped ions; new qub(d)its and gates; quantum computing and simulation employing ion trapping techniques; looking beyond atomic ions; precision measurements and clocks; among others.

To learn more about IonQ Aria, including details on its performance and technical prowess, click here.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's current-generation quantum computer, IonQ Forte, is the latest in a line of cutting-edge systems, including IonQ Aria, a system that boasts industry-leading 20 algorithmic qubits. Along with record performance, IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit http://www.ionq.com.

Source: IonQ


Phasecraft receives two research grants as part of the Commercialising Quantum Technologies Challenge at UK Research and Innovation – PR Web

Posted: at 2:44 pm

"Were excited to be working with world experts on telecommunications networks at BT, and extending our ongoing partnership with Rigetti, to apply quantum algorithms to optimisation problems," says Phasecraft co-founder, Ashley Montanaro.

BRISTOL, England (PRWEB) August 02, 2022

Today Phasecraft, the quantum algorithms company, announced that it has jointly received two research grants from UK Research and Innovation (UKRI) as part of the Commercialising Quantum Technologies Challenge delivered by Innovate UK.

In collaboration with BT and Rigetti, Phasecraft will lead a grant-funded project focused on the development of near-term quantum computing for solving hard optimisation problems and constraint satisfaction problems. Computational problems in an array of fields including network design, electronic design automation, logistics, and scheduling are characterised by needing to find a solution among exponentially many potential solutions. Such problems are, therefore, exceptionally challenging, yet their applications and commercial potential are vast.

"Phasecraft's goal is to significantly reduce the timescale for quantum advantage in several critical areas," says Phasecraft co-founder Ashley Montanaro. "We're excited to be working with world experts on telecommunications networks at BT, and extending our ongoing partnership with Rigetti, to apply quantum algorithms to optimisation problems. This project will build on our expertise in key underlying technologies, enabling us to determine whether near-term quantum computing could outperform classical methods in this application domain."
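
To see why such problems are exceptionally challenging, consider MaxCut, a textbook optimisation problem of the kind near-term quantum algorithms target. The brute-force sketch below (our classical illustration, not Phasecraft's method) must check all 2^n partitions of an n-node graph, a count that doubles with every node added:

```python
# Brute-force MaxCut: enumerate every two-colouring of the nodes and keep the
# one that cuts the most edges. The search space is 2**n assignments, which is
# what makes such problems intractable at scale.

from itertools import product

def max_cut_brute_force(n_nodes, edges):
    """Try all 2**n two-colourings; return the best cut size and assignment."""
    best_cut, best_assignment = -1, None
    for assignment in product((0, 1), repeat=n_nodes):
        cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
        if cut > best_cut:
            best_cut, best_assignment = cut, assignment
    return best_cut, best_assignment

# A 5-node ring: being an odd cycle, the optimal cut severs 4 of its 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_cut_brute_force(5, edges))  # (4, ...) after checking 2**5 = 32 cases
```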

The second grant awarded to Phasecraft supports the development of near-term quantum computing to simulate currently intractable problems in materials modelling for photovoltaics. Leading this project in collaboration with UCL and Oxford PV, a leading company pioneering the commercialisation of perovskite photovoltaics, Phasecraft will use the award to develop a modelling capability tailored to the real-world needs of the photovoltaics industry.

"Phasecraft has already proven that quantum computers have the potential to revolutionise materials modelling, even before fully scalable, fault-tolerant quantum computers become available," says Phasecraft co-founder Toby Cubitt. "The results we have obtained for battery materials are hugely encouraging and show how our work can really make the difference in critically important areas. We know that photovoltaics has a crucial role to play in the transition to green energy, and we are hugely excited to be the ones making quantum computing part of the green revolution."

Phasecraft's team brings together many of the world's leading quantum scientists and engineers, partnering with the world's leading developers of quantum hardware. The team's research has led to fundamental breakthroughs in quantum science, and Phasecraft is the market leader in quantum IP.

To learn more about our scientific research, business partnerships, career opportunities, and fellowships, please visit phasecraft.io.

About Phasecraft

Phasecraft is the quantum algorithms company. We're building the mathematical foundations for quantum computing applications that solve real-world problems.

Our team brings together many of the world's leading quantum scientists, including founders Toby Cubitt, Ashley Montanaro, and John Morton, and quantum consultant Andrew Childs.

Through our partnerships with Google, IBM, and Rigetti, we enjoy unprecedented access to today's best quantum computers, which provides us with unique opportunities to develop foundational IP, inform the development of next-generation quantum hardware, and accelerate commercialization of high-value breakthroughs.

We are always looking for talented research scientists and partners interested in joining us on the front lines of quantum computing. To learn more about our scientific research, business partnerships, career opportunities, and fellowships, please visit phasecraft.io.


The Story of IQIM: Institute for Quantum Information and Matter – Caltech Magazine – Caltech

Posted: at 2:44 pm

Then, in 2000, Preskill and Kimble received a grant from the National Science Foundation, which they used to form the Institute for Quantum Information (IQI) that same year.

"NSF got a surge of funding for a program they called Information Technology Research, which included a lot of practical things but also sort of a lunatic fringe of blue-sky research. And that's what we were part of," Preskill told AIP. "We had an amazing group of young people in the early 2000s who came through, many of whom are leaders of research in quantum information now, like Patrick Hayden, and Guifré Vidal, and Frank Verstraete, and quite a few others."

Vidal (postdoc '01–'05), now a senior staff research scientist at Google, recalled those early days as a Caltech postdoc during a Heritage Project interview: "John had the vision ... to hire interesting young people for [IQI], then apply a hands-off approach. He's not the type of person who needs to control everything and everyone."

Dave Bacon (BS '97), a former IQI postdoc, remembered IQI as a leading hub for quantum computing research:

"John literally started inviting everybody in the field to come visit. It was like all of quantum computing was flowing through that place, and I was in the main place we'd have the group meetings," he said in a Heritage Project interview. "It felt like everybody would come in and give a talk right outside my office. It was perfect."

Liang Jiang (BS '04), a former IQI postdoc and current professor at the University of Chicago, told Zierler during a Heritage Project interview that weekly meetings were so full of discussion and questions that Preskill had to impose a time limit: you could only talk for one minute, because some group members would get so excited about the results that they would talk at length about their research.

By 2011, advances in quantum computing hardware, such as superconducting circuits and qubits (the quantum mechanical analogue of a classical bit), gave Preskill and Kimble the impetus to apply for more NSF funding to broaden the IQI's scope to include experimental work. They received that funding and, in 2011, changed the institute's name to the Institute for Quantum Information and Matter (IQIM), for which Preskill serves as the Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Institute for Quantum Science and Technology.

Spiros Michalakis, staff researcher and manager of outreach at IQIM, described this name change in a recent Heritage Project interview as a visionary move, one that is still paying off: "We attach 'M', matter, and it really mattered, because we started to have conversations about how you can implement certain things and how you can convert some of the theories into experiments. I didn't know many physicists or many people who were part of physics or even mathematical physics who were not, basically, in one way or another, associated with IQIM. If you look at the roster, even now, for the second iteration of IQIM, the second cycle we have, there's a pretty cool medley of people."

As a sign of quantum computing's progression at Caltech and beyond, the Institute partnered with Amazon to build the AWS Center for Quantum Computing, which opened on campus last year. The goal of the collaboration is to create quantum computers and related technologies that have the potential to revolutionize data security, machine learning, medicine development, sustainability practices, and more.

"It is wonderful to see many of the graduate students and postdocs from the early days of IQIM come back to campus as senior research scientists at the AWS Center for Quantum Computing," Michalakis says. "IQIM brought together theorists and experimentalists with a vision toward a transformative future for all. Amazingly, we are reaping the benefits of that vision already, as the era of quantum information science and engineering unfolds before our eyes at an unprecedented pace. What an exciting time to be alive."
