Cosmic Leap: NASA Swift Satellite and AI Unravel the Distance of the Farthest Gamma-Ray Bursts – UNLV NewsCenter

The advent of AI has been hailed by many as a societal game-changer, as it opens a universe of possibilities to improve nearly every aspect of our lives.

Astronomers are now using AI, quite literally, to measure the expansion of our universe.

Two recent studies led by Maria Dainotti, a visiting professor with UNLV's Nevada Center for Astrophysics and assistant professor at the National Astronomical Observatory of Japan (NAOJ), incorporated multiple machine learning models to add a new level of precision to distance measurements for gamma-ray bursts (GRBs), the most luminous and violent explosions in the universe.

In just a few seconds, GRBs release the same amount of energy our sun releases in its entire lifetime. Because they are so bright, GRBs can be observed at multiple distances, including at the edge of the visible universe, and aid astronomers in their quest to chase the oldest and most distant stars. But, due to the limits of current technology, only a small percentage of known GRBs have all of the observational characteristics needed to aid astronomers in calculating how far away they occurred.

Dainotti and her teams combined GRB data from NASA's Neil Gehrels Swift Observatory with multiple machine learning models to overcome the limitations of current observational technology and, more precisely, estimate the proximity of GRBs for which the distance is unknown. Because GRBs can be observed both far away and at relatively close distances, knowing where they occurred can help scientists understand how stars evolve over time and how many GRBs can occur in a given space and time.

"This research pushes forward the frontier in both gamma-ray astronomy and machine learning," said Dainotti. "Follow-up research and innovation will help us achieve even more reliable results and enable us to answer some of the most pressing cosmological questions, including the earliest processes of our universe and how it has evolved over time."

In one study, Dainotti and Aditya Narendra, a final-year doctoral student at Poland's Jagiellonian University, used several machine learning methods to precisely measure the distance of GRBs observed by the space-based Swift UltraViolet/Optical Telescope (UVOT) and ground-based telescopes, including the Subaru Telescope. The measurements were based solely on other, non-distance-related GRB properties. The research was published May 23 in the Astrophysical Journal Letters.

"The outcome of this study is so precise that we can determine, using the predicted distance, the number of GRBs in a given volume and time (called the rate), which is very close to the actual observed estimates," said Narendra.

Another study led by Dainotti and international collaborators successfully measured GRB distances with machine learning, using data from NASA's Swift X-Ray Telescope (XRT) on afterglows from what are known as long GRBs. GRBs are believed to occur in different ways. Long GRBs happen when a massive star reaches the end of its life and explodes in a spectacular supernova. Another type, known as short GRBs, happens when the remnants of dead stars, such as neutron stars, merge gravitationally and collide with each other.

Dainotti says the novelty of this approach comes from using several machine-learning methods together to improve their collective predictive power. This method, called Superlearner, assigns each algorithm a weight whose values range from 0 to 1, with each weight corresponding to the predictive power of that singular method.

"The advantage of the Superlearner is that the final prediction is always more performant than the singular models," said Dainotti. "Superlearner is also used to discard the algorithms which are the least predictive."
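The weighting scheme described above can be illustrated with a toy ensemble. This is a simplified sketch in plain Python, not the authors' actual implementation: each model receives a weight between 0 and 1 reflecting its predictive performance (here, inverse mean squared error on held-out data), and the final prediction is the weighted sum.

```python
# Toy illustration of a Superlearner-style weighted ensemble.
# Each "model" is a simple function; weights reflect accuracy on held-out data.

def model_a(x):
    return 2.0 * x          # a fairly accurate model

def model_b(x):
    return 2.5 * x          # a weaker model

def inverse_error_weights(models, xs, ys):
    """Weight each model by 1/MSE, normalized so the weights sum to 1."""
    inv_errors = []
    for m in models:
        mse = sum((m(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        inv_errors.append(1.0 / mse)
    total = sum(inv_errors)
    return [w / total for w in inv_errors]

def superlearner_predict(models, weights, x):
    return sum(w * m(x) for m, w in zip(models, weights))

# Held-out data generated by roughly y = 2x, so model_a should dominate.
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]

models = [model_a, model_b]
weights = inverse_error_weights(models, xs, ys)
assert abs(sum(weights) - 1.0) < 1e-9
assert weights[0] > weights[1]   # the better model gets the larger weight
```

In a real Superlearner the weights would be fit by cross-validation over many regressors, and models whose weight collapses toward zero are effectively discarded, as the article describes.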

This study, which was published Feb. 26 in The Astrophysical Journal Supplement Series, reliably estimates the distance of 154 long GRBs for which the distance is unknown and significantly boosts the population of known distances among this type of burst.

A third study, published Feb. 21 in the Astrophysical Journal Letters and led by Stanford University astrophysicist Vahé Petrosian and Dainotti, used Swift X-ray data to answer puzzling questions by showing that the GRB rate, at least at small relative distances, does not follow the rate of star formation.

"This opens the possibility that long GRBs at small distances may be generated not by a collapse of massive stars but rather by the fusion of very dense objects like neutron stars," said Petrosian.

With support from NASA's Swift Observatory Guest Investigator program (Cycle 19), Dainotti and her colleagues are now working to make the machine learning tools publicly available through an interactive web application.


How Machine Learning Revolutionizes Automation Security with AI-Powered Defense – Automation.com

Summary

Machine learning is sometimes considered a subset of overarching AI. But in the context of digital security, it may be better understood as a driving force, the fuel powering the engine.

The terms AI and machine learning are often used interchangeably by professionals outside the technology, managed IT and cybersecurity trades. But, truth be told, they are separate and distinct tools that can be coupled to power digital defense systems and frustrate hackers.

Artificial intelligence has emerged as an almost ubiquitous part of modern life. We experience its presence in everyday household robots and the familiar Alexa voice that always seems to be listening. Practical uses of AI mimic human behavior and take it one step further. In cybersecurity, it can deliver 24/7 monitoring, eliminating the need for a weary flesh-and-blood guardian to stand a post.

Machine learning is sometimes considered a subset of overarching AI. But in the context of digital security, it may be better understood as a driving force, the fuel powering the engine. Using programmable algorithms, it recognizes sometimes subtle patterns. This proves useful when deployed to follow the way employees and other legitimate network users navigate systems. Although discussions of AI and machine learning can feel redundant to some degree, together they are a powerful one-two punch in terms of automating security decisions.

Integrating AI calls for a comprehensive understanding of mathematics, logical reasoning, cognitive sciences, and a working knowledge of business networks. The professionals who implement AI for security purposes must also possess high-level expertise and protection planning skills. Used as a problem-solving tool, AI can provide real-time alerts and take pre-programmed actions. But it cannot effectively stem the tide of bad actors without support. Enter machine learning.

In this context, machine learning emphasizes software solutions driven by data analysis. Unlike human information processing limitations, machine learning can handle massive swaths of data. What machine learning learns, for lack of a better word, translates into actionable security intel for the overarching AI umbrella.
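The baseline-and-deviation idea behind this kind of behavioral analysis can be sketched in a few lines. This is a minimal stdlib-Python illustration with made-up numbers, not a production security tool: learn what "normal" user activity looks like, then flag activity that deviates sharply from it.

```python
import statistics

# Toy behavioral baseline: daily file-access counts for one user.
# "Training" learns the normal range; scoring flags sharp deviations.

def fit_baseline(history):
    """Return the mean and sample standard deviation of past activity."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(count, mean, stdev, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from normal."""
    return abs(count - mean) > threshold * stdev

history = [22, 25, 19, 24, 21, 23, 20, 26]   # typical days
mean, stdev = fit_baseline(history)

assert not is_anomalous(27, mean, stdev)     # a slightly busy day: no alert
assert is_anomalous(400, mean, stdev)        # exfiltration-like spike: alert
```

Real systems model many signals at once (logins, geography, time of day) with far richer statistical models, but the principle is the same: the alert fires on deviation from learned behavior, not on a hand-written rule.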

Some people think about machine learning as a subcategory of AI, which it is. Others comprehend it in a functional way, i.e., as two sides of the same coin. But for cybersecurity experts determined to deter, detect, and repel threat actors, machine learning is the gasoline that powers AI engines.

It's now essential to leverage machine learning capabilities to develop a so-called intelligent computer that can defend itself, to some degree. Although the relationship between AI and machine learning is diverse and complex, an expert can integrate them into a cybersecurity posture with relative ease. It's simply a matter of repetition and the following steps.

When properly orchestrated and refined to detect user patterns and subtle anomalies, the AI-machine learning relationship helps cybersecurity professionals keep valuable and sensitive digital assets away from prying eyes and greedy digital hands.

First and foremost, it's crucial to put AI and machine learning benefits in context. Studies consistently conclude that more than 80% of all cybersecurity failures are caused by human error. Using automated technologies removes many mistake-prone employees and other network users from the equation. Along with minimizing risk, onboarding these automated next-generation technologies delivers the following benefits.

Improved cybersecurity efficiency. According to the 2023 Global Security Operations Center Study, cybersecurity professionals spend one-third of their workday chasing down false positives. This waste of time negatively impacts their ability to respond to legitimate threats, leaving a business at higher-than-necessary risk. The strategic application of AI and machine learning can be deployed to recognize harmless anomalies and alert a CISO or vCISO only when authentic threats are present.

Increased threat hunting capabilities. Without proactive, automated security measures like MDR (managed detection and response), organizations are too often following an outdated break-and-fix model. Hackers breach systems or deposit malware, and then the IT department spends the remainder of their day, or week, trying to purge the threat and repair the damage. Cybersecurity experts have widely adopted the philosophy that the best defense is a good offense. A thoughtful AI-machine learning strategy can engage in threat hunting without ever needing a coffee break.

Cure business network vulnerabilities. Vulnerability management approaches generally employ technologies that provide proactive automation. They close cybersecurity gaps and cure inherent vulnerabilities by identifying these weaknesses and alerting human decision-makers. Unlike scheduling a routine annual risk assessment, these cutting-edge technologies deliver ongoing analytics and constant vigilance.

Resolve the cybersecurity skills gap. It's something of an open secret that there are not enough trained, certified cybersecurity experts to fill corporate positions. That's one of the reasons why industry leaders tend to outsource managed IT and cybersecurity to third-party firms. Outsourcing helps to onboard the high-level knowledge and skills required to protect valuable digital assets and sensitive information. Without enough cybersecurity experts to safeguard businesses, automation allows the resources available to companies to drill down and identify true threats. Without these advanced technologies bolstering network security, the number of debilitating cyberattacks would likely grow exponentially.

The type of predictive analytics and swift decision-making capabilities this two-pronged approach delivers has seemingly endless industry applications. Banking and financial sector organizations can not only use AI and machine learning to repel hackers but also ferret out fraud. Healthcare organizations have a unique opportunity to exceed Health Insurance Portability and Accountability Act (HIPAA) requirements due to the advanced personal identity record protections these technologies afford. Companies conducting business in the global marketplace can also get a leg up in meeting the EU's General Data Protection Regulation (GDPR), designed to further informational privacy.

Perhaps the greatest benefit organizations garner from AI and machine learning security automation is the ability to detect, respond to, and expel threat actors and malicious applications. Managed IT cybersecurity experts can help companies close the skills gap by integrating these and other advanced security strategies.

John Funk is a Creative Consultant at SevenAtoms. A lifelong writer and storyteller, he has a passion for tech and cybersecurity. When he's not enjoying craft beer or playing Dungeons & Dragons, John can often be found spending time with his cats.



Natively Trained Italian LLM by Fastweb to Leverage AWS GenAI and Machine Learning Capabilities – The Fast Mode

Fastweb has announced it will leverage Amazon Web Services (AWS) generative AI and machine learning services to make its LLM, natively trained in Italian, available to third parties. This builds on Fastweb's work with AWS to help accelerate the digital transformation of Italian businesses and public sector organizations.

Fastweb is constructing a comprehensive Italian language dataset by combining public sources and licensed data from publishers and media outlets. Using this data, Fastweb has fine-tuned the Mistral 7B model using Amazon SageMaker, achieving performance improvements of 20-50% on Italian language benchmarks.

The new models will be made available on Hugging Face, allowing customers to deploy them via Amazon SageMaker. In the future, Fastweb plans to run its model on Amazon Bedrock using Custom Model Import, so it can easily build and scale new generative AI solutions for its customers using a broad set of capabilities available on Amazon Bedrock.

Walter Renna, CEO, Fastweb

"While current AI models primarily rely on English data, a nuanced understanding of Italian culture can be harnessed by training on carefully chosen, high-quality Italian datasets. This strategic initiative will help propel digital transformation for Italian organizations using technologies at the forefront of innovation. By making these models and applications available not only at a national level but also at a global level through AWS's comprehensive portfolio of generative AI services, we're able to more easily build and scale our own generative AI offering, bringing new innovations to market faster."

Fabio Cerone, General Manager, Telco Industry, EMEA, AWS

"We are committed to democratizing access to generative AI technology and applications for customers all over the world. The availability of LLMs natively trained on more languages is a critical piece in realizing that mission. Fastweb's effort to create an Italian-language LLM and generative AI is an important step in making the transformative power of generative AI more accessible to Italian businesses and government agencies. Through this work, Fastweb will make it easier for Italian customers to use generative AI to help resolve business challenges more quickly, tackle operational hurdles, and accelerate growth via digital transformation."


The 2025 Millionaire’s Club: 3 Quantum Computing Stocks to Buy Now – InvestorPlace

The current rage is about artificial intelligence, but the advancement of the AI field relies on a few key elements, including quantum computing.

Source: Faces Portrait / Shutterstock.com

If you're on the hunt for quantum computing stocks to buy, you're in the right place. For the past several years, artificial intelligence (AI) has taken the front stage, not just in the tech field, but also in the stock market. AI advancement has been tremendous, allowing businesses, both large and small, to automate some of their processes. Some of the largest companies have ramped up their investing in AI teams and divisions, amounting to billions of dollars in additional capital just to keep up with others in the field.

However, there is a newcomer to the field that is independent of AI but will complement it in the future: quantum computing. But what is it exactly? Quantum computing uses specialized algorithms and hardware, applying quantum mechanics to solve complex problems that typical computers would either take too long to solve or cannot solve at all. Although the world of quantum computing and AI is incredibly complex, we have simplified investing in the field by narrowing it down to three companies that are at the forefront of the industry, while still being diversified.

Source: Piotr Swat / Shutterstock.com

Nvidia (NASDAQ:NVDA) is an American-based, international leader in the design and manufacture of graphics processing units. Although the company's main focus currently is on the AI market, it also has a division that focuses on the quantum computing industry. The stock currently trades at about $924, with a price target of $1,100. This price target is almost $200 more than the current trading price of the stock, signifying significant upside potential for Nvidia.

The company accelerates quantum computing centers around the world with its proprietary CUDA-Q platform. The platform also ties quantum computing into AI, allowing the system to solve new and countless problems much faster than before.

The stock currently trades at 36.53x forward earnings. This is about 20% lower than the stock's own five-year average forward price-to-earnings (P/E) ratio of 46.14x. Thus, considering what the stock usually trades for, it might be relatively undervalued and at a great point to scoop up some shares.

The company that goes hand in hand with the internet is Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL). The American company first started as a search engine in the late '90s with the main goal of creating the perfect search engine. Fast forward 25 years, and you now have a multi-trillion-dollar international company with departments in tech, consumer electronics, data, AI, e-commerce and quantum computing. The company's stock currently trades at about $177 but is on track to rise to an average of $195, with a high of $225, in the next 12 months.

In recent years, it has set out to build the best quantum computing for otherwise impossible problems with the introduction of XPRIZE Quantum Applications and Quantum AI. The program is designed to advance the field of algorithms relating to quantum computing with real-world applications.

As such, the company is in a rapid growth phase, with EPS forecast to soar from $5.80 last year to over $7.84 by 2025. This makes it a great pick for any investor.

Intel (NYSE:INTC) has specialized in semiconductors since its founding in Mountain View, California, in 1968. The company is the world's largest manufacturer of semiconductors and CPUs and has been since its founding. Intel's stock is at about $32, and the average price target is $39.63, with a low of $17 and a high of $68. This would mean an upside potential of almost 24%, on average.
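The upside figure quoted for Intel follows from simple arithmetic; a quick sanity check using the article's own numbers:

```python
# Upside potential = (price target - current price) / current price.
def upside_pct(current, target):
    return (target - current) / current * 100

# Figures quoted in the article for Intel (INTC).
current_price = 32.00
avg_target = 39.63

assert round(upside_pct(current_price, avg_target), 1) == 23.8   # "almost 24%"
```

The same formula reproduces the other targets mentioned in the piece, e.g. a $924 stock with a $1,100 target carries roughly 19% upside.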

The company has invested heavily in quantum computing in the past several years and is currently putting its expertise to good use, creating hot silicon spin qubits. Qubits are essentially small computing devices that perform differently than typical transistors while also operating at high temperatures.

The company is working diligently on applying the qubits into quantum computing chips that can be used to advance countless fields, while also working with AI systems. All this work is not for nothing. The company is translating this into earnings growth with EPS expected to rise from $1.01 at the end of this year to $1.80 by the end of 2025. As such, this stock should be on any investors watch list.

On the date of publication, Ian Hartana and Vayun Chugh did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Chandler Capital is the work of Ian Hartana and Vayun Chugh. Ian Hartana and Vayun Chugh are both self-taught investors whose work has been featured in Seeking Alpha. Their research primarily revolves around GARP stocks with a long-term investment perspective encompassing diverse sectors such as technology, energy, and healthcare.


3 Quantum Computing Stocks to Buy That Could Be Millionaire-Makers: May – InvestorPlace

Source: Bartlomiej K. Wroblewski / Shutterstock.com

Don't miss out on this exceptional chance to invest in quantum computing stocks that could be millionaire-makers while their valuations remain low. These innovative tech companies are developing cutting-edge quantum computing systems with the potential to generate massive returns for investors who get in early.

The quantum computing stocks featured below are poised to commercialize their technology across multiple industries. Quantum computing promises to transform various sectors of our world, from financial services to medical research. Also, it may enable groundbreaking advances and discoveries that aren't possible with traditional classical computing.

The three quantum computing stocks to buy outlined in this article represent the best opportunities investors have to compound their wealth to seven figures. We've only just started to see the potential of this industry and understand the implications of this new tech.

So, here are three quantum computing stocks for investors who want to earn a potential seven-figure sum.

Source: zakiahza / Shutterstock.com

Hewlett Packard Enterprise (NYSE:HPE) focuses on IT and quantum computing through its Intelligent Edge segment. The company has demonstrated significant achievements in quantum computing research.

HPEs Intelligent Edge segment provides solutions that bring computation closer to the data source. Integrating quantum computing capabilities with Intelligent Edge technologies can offer unique advantages, such as real-time data processing and enhanced decision-making capabilities at the networks edge.

Most recently, the Intelligent Edge segment reported revenue of $902 million, an increase of 9% year-over-year. This segment continues to grow, driven by strong demand for edge computing solutions. The company also achieved an EPS of $0.48, which surpassed the consensus estimate of $0.45. This compares to an EPS of $0.63 in the same quarter of the previous year.

HPE is a well-known brand akin to a more modern version of IBM (NYSE:IBM). It could be a good pick for those who like to stay with the blue-chip options while also having the potential to mint new millionaires.

IonQ (NYSE:IONQ) is a leader in developing trapped-ion quantum computers and making significant strides in the field. The company collaborates with major cloud platforms.

IonQs primary technology involves trapped-ion quantum computers, which utilize ions trapped in electromagnetic fields as qubits. This technology is known for its high-fidelity operations and stability.

Recently, IonQ achieved a milestone of 35 algorithmic qubits with its IonQ Forte system, a year ahead of schedule. This achievement allows the system to handle more sophisticated and more extensive quantum circuits. IonQs growth and technological advancements have been recognized in various industry lists, such as Fast Companys 2023 Next Big Things in Tech List and Deloittes 2023 Technology Fast 500 List.

With a market cap of just $1.79 billion, it remains a small-cap quantum computing stock that could hold significant upside potential for investors. Its developments so far have been promising, and it could prove to be a company that will make early investors rich.

Pure-play quantum computing company Rigetti Computing (NASDAQ:RGTI) is known for its vertically integrated approach. This includes designing and manufacturing quantum processors.

Rigetti has achieved a significant milestone with its 128-qubit chip, which promises to advance quantum computing capabilities and enable new applications. This development is a key part of Rigettis roadmap to scale up quantum systems and improve performance metrics.

Also, in Q1 2024, Rigetti reported a 99.3% median 2-qubit gate fidelity on its 9-qubit Ankaa-class processor. This high level of fidelity is crucial for reliable quantum computations and positions Rigetti well against competitors.

The market cap of RGTI is a fraction of IONQ's, at just under $200 million at the time of writing. Its progress is similarly impressive, so it could hold significant upside and potentially mint a new generation of millionaires with a large enough investment.

On the date of publication, Matthew Farley did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Matthew started writing coverage of the financial markets during the crypto boom of 2017 and was also a team member of several fintech startups. He then started writing about Australian and U.S. equities for various publications. His work has appeared in MarketBeat, FXStreet, Cryptoslate, Seeking Alpha, and the New Scientist magazine, among others.


Amazon taps Finland’s IQM for its first EU quantum computing service – TNW

IQM Garnet, a 20-qubit quantum processing unit (QPU), is now available via Amazon Web Services (AWS), making it the first quantum computer accessible via the AWS cloud in the European Union.

Finnish quantum hardware startup IQM is based outside of Helsinki. AWS already has collaborations in place with IonQ, Oxford Quantum Circuits, QuEra, and Rigetti for its quantum cloud service, known as Braket, but this will be the first AWS quantum processor hosted within the EU.

This also means that it is the first time Amazon's quantum services will be accessible to end users in its AWS Europe (Stockholm) Region. It is also the first time IQM's quantum computers will be available in an on-demand structure via the cloud, and with AWS pay-as-you-go pricing.

"We are very honoured to be part of the Amazon network and work together with a global tech company," Jan Goetz, co-CEO and co-founder at IQM, told TNW. "For IQM, this is a great opportunity to scale our offering globally and collaborate with leading end-users around the world."

Goetz further added that the joint offering was a great step forward for cloud quantum computing, and would enable cloud users to test novel types of algorithms and use-cases to develop their business.


As most of our readers will probably know, today's noisy and error-prone quantum computers cannot really do all that much yet. However, the technology is currently advancing incredibly fast. Learning to work with it will not happen overnight.

As such, a whole business model has sprung up around getting organisations and corporations quantum-ready, so that they won't be caught off guard when quantum utility arrives. Today's smaller qubit systems are also training grounds for software developers, many of whom are working on solving the issue of error correction. "In the context of cloud, IQM Garnet is mostly used by quantum algorithm engineers to develop IP around quantum compilers, algorithms, and error correction schemes," Max Haeberlein, Head of Cloud at IQM, told TNW. "IQM Garnet offers a highly homogenous layout and has cutting-edge fidelities, allowing users to effectively expand algorithms to the full size of the chip."
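To make the kind of small-circuit experimentation described above slightly more concrete, here is a toy two-qubit state-vector simulation in plain Python. This is an illustrative sketch, not the IQM or Braket API: it builds the entangled Bell state that most introductory quantum programs start from.

```python
import math

# State vector over the basis |00>, |01>, |10>, |11> (first qubit is the
# most significant bit). We start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def hadamard_on_first(s):
    """Apply a Hadamard gate to the first qubit: |0> -> (|0>+|1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot_first_controls_second(s):
    """CNOT gate: flip the second qubit whenever the first qubit is |1>."""
    return [s[0], s[1], s[3], s[2]]

state = cnot_first_controls_second(hadamard_on_first(state))

# Result is the Bell state (|00> + |11>)/sqrt(2): measuring one qubit
# fully determines the other.
assert abs(state[0] - 1 / math.sqrt(2)) < 1e-9
assert abs(state[3] - 1 / math.sqrt(2)) < 1e-9
assert abs(state[1]) < 1e-9 and abs(state[2]) < 1e-9
```

On a cloud QPU such as Garnet, the same two-gate circuit would be expressed in a quantum SDK and dispatched to real hardware; simulating by hand like this stops being feasible long before 54 or 150 qubits, which is the point Haeberlein makes below about classical emulation limits.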

At the same time, Haeberlein said, the company offers IQM Garnet at affordable rates, which is especially important for the growing quantum algorithm startup scene.

IQM, founded in 2018, is Europe's leading quantum hardware developer in superconducting qubits. At the beginning of next year, the company plans to add a high-fidelity 54-qubit IQM Radiance quantum computer to its portfolio.

This, according to Haeberlein, will enable users to extend quantum algorithms beyond the point where they can still be classically emulated by supercomputers. "In 2026, we release IQM Radiance with 150 qubits, where we will see the first commercial algorithm applications of scale in the domain of finance, automotive, life sciences, and chemicals," he adds.


The power of Quantum Computing – The Cryptonomist

One of the exponential technologies that is not yet getting its fair share of love from the general public and media is Quantum computing. In the past few years, I had the privilege of spending time discussing it with people from CERN and the Fermi Lab, but my conversation with Scott Crowder, Vice President IBM Quantum Adoption and Business Development, had the right mix of theory and real-life examples, which will make anyone understand the potential of this field of research and its business applications. AI will keep its hype for a good while, as we see from its pervasive presence in every corner of the internet. Quantum can be the next big thing. This is our dialogue.

Who are you and what do you do for a living?

My name is Scott Crowder, and I run IBM's Quantum efforts to boost its adoption, together with our partners and industry clients. Our goal is to build a useful Quantum computing infrastructure and to help the world make a Quantum-safe transition, in the next ten years or so. I am an engineer by training and had worked on semiconductors in the past, before taking on the role of CTO for IBM Systems. With Quantum, it's the first time where we have a "use first" attitude, where we try things with partners, and we teach and learn with our clients before we scale up projects. It's interesting and it's fun.

What are the three killer use cases for Quantum, for what we know now?

Firstly, simulating nature, like materials science (new materials) or chemistry (for example, better battery chemistry, to mention something that is very hot right now). We do physics simulations or try to understand how complex proteins would behave. These are operations that entail higher computing power than today's computers can deliver.

Secondly, we try to find patterns out of complex data. For example, a classification of a piece of data as fraud or not. If there is some structure in the data before us, Quantum computing is much better than classical computers to give meaning to it and even pick up things like false positives. This is extremely useful, if we want to make sense of the world.

Lastly, I would say, portfolio optimization, finding efficiencies, and distribution optimization. There are direct and huge applications here, for multiple industries. Think of the mobility or logistics markets, for example. This third use case is slightly farther out from us, in terms of time to market, when compared to the first two.
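The portfolio-optimization use case Crowder mentions is, at its core, a combinatorial search. A tiny classical brute-force baseline (stdlib Python, with purely hypothetical asset figures) shows why the problem explodes as portfolios grow, and hence why quantum approaches are attractive:

```python
from itertools import combinations

# Hypothetical assets: (name, expected_return, risk). Goal: pick the subset
# with the highest total return whose total risk stays within budget.
# Brute force examines every subset: 2^n candidates, which is why large
# portfolios quickly become intractable for exact classical search.
assets = [("A", 5.0, 2.0), ("B", 7.0, 4.0), ("C", 3.0, 1.0), ("D", 6.0, 5.0)]
RISK_BUDGET = 7.0

best_names, best_return = (), 0.0
for r in range(1, len(assets) + 1):
    for combo in combinations(assets, r):
        risk = sum(a[2] for a in combo)
        ret = sum(a[1] for a in combo)
        if risk <= RISK_BUDGET and ret > best_return:
            best_names, best_return = tuple(a[0] for a in combo), ret

assert best_names == ("A", "B", "C")   # return 15.0 at exactly risk 7.0
assert best_return == 15.0
```

Four assets need only 15 subset checks; a few hundred assets would need more subsets than atoms in the universe, which is the scale at which quantum and heuristic methods come into play.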

Where are we really, when it comes to Quantum adoption in the real world?

To simplify it: Quantum is better at doing what it does best, namely simulations. For sure, to do it at scale, larger systems are needed. So, we are looking at 2030 and beyond. What we are doing now is, let's say, algorithmic explorations. We work with a mix of partners: heavy industry conglomerates, banking, pharma, transportation, and startups. And, obviously, universities and research institutions.

Big Tech is also into Quantum, even though the talk of the town is AI. Intel, Microsoft, Google, AWS: all have investments and programs in Quantum, with different approaches to it.

What is the future business model of Quantum? How are you going to sell it?

It's hard to say right now. We must make some assumptions. It's probably going to continue to be, in the medium term, a cloud service, where partners have access to the Quantum capabilities we have built, via API calls, and they can interact with our experts, who help with the prototyping and the training. Basically, it's going to be the same as a standard cloud business model. There will be ad hoc projects for sure, where the stakes are high and we can unlock tremendous economic value. In a way, the approach is more like how we weave CPUs and GPUs into a compute fabric, and not via a single application per se, like a ChatGPT for Quantum.

What would you say is the number one risk associated with Quantum?

Cybersecurity is for sure the number one risk. Future, more powerful Quantum computers will at some point crack the current asymmetric cryptography, which protects public and private information (for example, mobile data, payments, and medical records). The math for that already exists. There are Quantum-safe cryptography solutions, but a full ecosystem of security providers and code will need to change, to account for the Quantum shift, and to make sure we have a Quantum-safe era.
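The math he refers to is Shor's algorithm, whose quantum speedup targets one specific subroutine: finding the period of modular exponentiation, from which the factors of an RSA modulus follow. The sketch below is a toy classical illustration of that step (with deliberately tiny numbers), not Shor's algorithm itself; the brute-force loop is the part a quantum computer would replace with a polynomial-time subroutine.

```python
# Toy illustration of the classical step that Shor's algorithm speeds up:
# find the period r of f(x) = a^x mod N. Once r is known (and even),
# factors of N often follow from gcd(a^(r/2) +/- 1, N).
from math import gcd

N, a = 15, 7          # N = 3 * 5; 'a' must be coprime to N

# Brute-force period finding -- exponential in the bit length of N;
# this is exactly where the quantum speedup lives.
r = 1
while pow(a, r, N) != 1:
    r += 1

p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(r, p, q)  # -> 4 3 5
```

For realistically sized moduli (2048-bit N), this classical loop is hopeless, which is why RSA is safe today and why a fault-tolerant quantum computer changes the picture.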

Where can we find you and learn more about Quantum?

A simple search for anything related to IBM Quantum will do. I am also active on social media, like LinkedIn. IBM writes a lot of articles on Quantum. We need to talk about it publicly, and have people understand this is real, and it has great potential to bring tremendous value to society and business, across all industries. You may think this is science fiction, as it's going to hit us in the next decade, but it is a new way of approaching complex problems. It could help other applications and use cases as well, like AI, and this is why it's the right moment to talk Quantum.

See the original post here:
The power of Quantum Computing - The Cryptonomist

Here Come the Qubits? What You Should Know About the Onset of Quantum Computing – IPWatchdog.com

While artificial intelligence (AI) may occupy the limelight in media, stock markets, large and small corporations, not to mention among political figures, futurists and modernists know that the mainstreaming of quantum computing will enable the next real technology paradigm shift.

From its beginnings in the speculative musings of physicist Paul Benioff in 1980 to the groundbreaking algorithms of mathematician Peter Shor in 1994, quantum computing has long promised a transformation. However, it was not until Google's establishment of a quantum hardware lab in 2014 that the theoretical promises began to materialize into practical applications. This marked the onset of a new era, where quantum experimentation became increasingly accessible, with IBM democratizing access to prototype processors and Google claiming quantum supremacy over classical supercomputers in 2019.

What is quantum computing?

It is a technology for performing computations much faster than classical computing by using quantum-mechanical phenomena. Indeed, quantum computing can theoretically provide exponential performance improvements for some applications and potentially enable completely new territories of computing. It has applications beyond computing, including communications and sensing.

How does quantum computing work?

While digital computers store and process information using bits, which can be either 0 or 1, quantum computers use qubits (quantum bits) that differ from these traditional bits. A qubit can be realized in many physical systems, such as the spin of an electron or the polarization of a photon, and unlike traditional bits it can also exist in superposition states, be subjected to incompatible measurements (or interference), and even be entangled with other quantum bits, rendering qubits much more powerful.
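Superposition can be made concrete with a few lines of linear algebra. This is an illustrative classical simulation of a single qubit (not tied to any hardware or quantum SDK): a Hadamard gate turns the definite state |0> into an equal superposition, and the Born rule gives the measurement probabilities.

```python
import numpy as np

# Computational basis states |0> and |1> as 2-component vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The Hadamard gate puts a definite state into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0             # |psi> = (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2   # Born rule: probability of each outcome

print(probs)  # -> [0.5 0.5]: equal chance of reading 0 or 1
```

The classical simulation needs 2^n amplitudes for n qubits, which is precisely why simulating large quantum systems overwhelms conventional computers.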

What has delayed the obsolescence of traditional computers and blocked the dominance of quantum computers?

To build a quantum computer or other quantum information technologies, we need to produce quantum objects that can act as qubits and be harnessed and controlled in physical systems. Therein lies the challenge, but scientists are quietly making progress.

While the theoretical potential of quantum computing was identified decades ago, it has only begun to be realized in recent years. An accelerating, high-stakes arms race is afoot in the private and public sectors to build quantum processors and circuits capable of solving exponentially complex problems, and a growing number of working systems are in progress. Quantum computing will likely lead to a paradigm shift as it unlocks advancements in several scientific fields.

What has the government done about it?

In December 2018, the United States adopted the National Quantum Initiative Act, giving the country its first plan for advancing quantum technology and quantum computing. The National Quantum Initiative, or NQI, provided an umbrella under which government agencies could develop and operate programs for improving the climate for quantum science and technology in the U.S., coordinated by the National Quantum Coordination Office, or NQCO. Agencies include the National Institute of Standards and Technology or NIST, the National Science Foundation or NSF, and the Department of Energy or DOE. These agencies have combined to establish the Quantum Economic Development Consortium, or QED-C, a consortium of industrial, academic, and governmental entities. Five years later, Congress and the President adopted a rare bipartisan bill to reauthorize the NQIA, with needed funding and support, to further accelerate quantum research and development for the economic and national security of the United States.

Most recently, on April 10, 2024, United States Senator Marsha Blackburn (R-TN) and Representative Elise Stefanik (R-NY) introduced the Defense Quantum Acceleration Act, which would, among other provisions, establish a quantum advisor and a new center of excellence. Quantum computing technology has become a strategic priority within national defense initiatives. For example, quantum-encrypted information cannot be secretly intercepted, because attempting to measure a quantum property changes it. Similarly, in the domain of navigation, while global positioning systems or GPS can be spoofed, quantum sensors can securely relay information about location. And quantum computers have the capability to process information far faster, and to handle far more complex problems, than traditional computers.

It's still early days, but the quantum realm is heating up and rapidly evolving. While quantum computers currently face challenges such as size limitations, maintenance complexities, and error susceptibility compared to classical computers, experts envision a near-term future in which quantum computing outperforms classical computing for specific tasks.

What is the potential impact of quantum technology on the U.S. economy?

Digital computers have been pivotal in information processing, but quantum computers offer a paradigm shift. With the capacity to tackle intricate statistical problems beyond current computational boundaries, quantum computing is a game changer. McKinsey projects it to contribute nearly $2.0 trillion in value by 2035. The industries most likely to see the earliest economic impact from quantum computing include automotive, chemicals, financial services, and life sciences.

A McKinsey study published in April 2024 also delves into various facets of the investment landscape within the Quantum Technology (Q.T.) sector:

Technological advancements in quantum computing have accelerated in recent years, enabling solutions to exceedingly complex problems beyond the capabilities of today's most powerful classical computers. Such advancements could revolutionize sectors such as chemicals, life sciences, finance, and mobility. Quantum computers present new frontiers for personalized medicine, allowing for more accurate diagnostics and targeted treatment options. In life sciences, quantum computing could accelerate drug discovery, enable personalized medicine through genomic targeting, and transform pharmaceutical research and development. In financial services, it could optimize portfolio management and risk assessment, potentially creating $622 billion in value.

Agricultural advancements enabled by quantum computing could enhance crop optimization and resource efficiency, addressing food security and climate concerns. In the automotive sector, quantum computing offers avenues for optimizing R&D, supply chain management, and production processes, reducing costs, and enhancing efficiency. Similarly, quantum computing holds promise in revolutionizing chemical catalyst design, facilitating sustainable production processes, and mitigating environmental impacts.

Where is intellectual property being created in quantum technology? Nearly 5,000 patents were granted in the area in 2022, the last period for which data is available, approximately 1% more than in 2021. By January 2024, the United States had authorized and issued an aggregate of nearly 16,000 patents in the area of quantum technology (37% of the global total), Japan had over 8,600 (~20%), Germany just over 7,000, and China almost 7,000, with France close behind. More notable perhaps are the numbers of patent applications filed globally, with the United States and China neck-and-neck at 30,099 and 28,593 as of January 2024. Strangely, and it's worth thinking about why, granted patents decreased for the global top 10 players in 2021 and 2022.

The European Union has the highest number and concentration of Q.T. talent, per OECD data through 2021, with 113,000 graduates in QT-relevant fields, followed by India at 91,000, China at 64,000, and the United States at 55,000. The number of universities with Q.T. programs increased 8.3% to 195, while those offering master's degrees in Q.T. increased by 10% to 55.

What are the legal considerations implicated by commercial quantum technology?

Despite the endless possibilities, legal considerations loom with the rise of commercial quantum computing. In order to embrace the potential changes brought by quantum computing, legal experts must grasp its foundational principles, capabilities, and ramifications to maneuver through regulatory landscapes, safeguard intellectual property rights, and resolve disputes.

Cybersecurity: Data is protected by cryptography and the use of algorithms. With exponentially higher computing power on the horizon, the advent of commercial quantum computing will require cryptography that quantum computers cannot break. From when quantum computing becomes available to hackers until quantum-resistant cryptography achieves ubiquity, how will we keep our networks and data safe from cyber-criminals? Can quantum-resistant cryptography protect against this obvious risk?
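One long-studied quantum-resistant construction is the Lamport one-time signature, whose security rests only on the one-wayness of a hash function, a primitive that Shor's algorithm does not break (Grover's algorithm gives at most a quadratic speedup against it). The sketch below is illustrative only, not a production scheme; the standardized post-quantum algorithms (such as NIST's selections) use considerably more elaborate constructions.

```python
# Minimal Lamport one-time signature sketch (illustrative, not production).
# Security assumption: SHA-256 preimage resistance. Each key pair may
# sign exactly ONE message.
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # Two random 32-byte secrets per message-digest bit; the public key
    # is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(x), H(y)) for x, y in sk]
    return sk, pk

def _bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret per digest bit, chosen by the bit's value.
    return [sk[i][bit] for i, bit in enumerate(_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(_bits(msg)))

sk, pk = keygen()
sig = sign(b"hello", sk)
print(verify(b"hello", sig, pk))     # -> True
print(verify(b"tampered", sig, pk))  # -> False
```

The trade-offs (large keys and signatures, one-time use) are exactly why the practical migration requires a whole ecosystem shift, as the article notes, rather than a drop-in swap.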

Privacy: Commercial enterprises will need to adopt procurement policies and implement security protocols that enable compliance with the General Data Protection Regulation (GDPR) in Europe, China's Personal Information Protection Law, and similar legislation in the United States, such as the California Consumer Privacy Act and its progeny. Companies that form the nucleus of our infrastructure for telecommunications, energy, water, waste, health, banking, and other essential services will need extra protection. The consequences of failure are immeasurable. How will we protect the terabytes of additional personal information that quantum computers can collect, transmit, store, analyze, monetize, and use? Existing regulations do not contemplate the gargantuan amount of personal data that will be collected, and new, sensible policies will need to be contemplated and created before the technology arrives.

Competition: In the first, second, and third industrial revolutions, we saw first-movers acquire dominant market positions. The public responded by developing legislation to allow the government to break up private enterprises. How will we protect the marketplace from being dominated by a first mover in commercial quantum computing to ensure that healthy competition continues to exist?

Blockchains and smart contracts: The proliferation of quantum computing capabilities should enable greater use of distributed ledgers or blockchains to automate supply chains and commercial and financial transactions. How will they be enabled and protected? Who will be responsible if they are compromised or lost?

Cloud computing: The cloud will be disrupted. Conventional, slower computers will become obsolete when quantum computers enter the data center. Who will have access to quantum cloud computing, and when? The quantum divide could replace the digital divide.

Artificial intelligence: What will happen if quantum computing enables quantum computers to use A.I. to make decisions about people and their lives? Who will be responsible if the computer makes an error, discriminates on some algorithmic bias (e.g., profiling), or makes decisions against sound public policies?

Legal system: Quantum computing will profoundly disrupt the legal system, as it brings large-scale efficiencies and speed to processes, surpassing the capabilities of human intelligence, including that of the very best lawyers. Eventually, as quantum computing is miniaturized and placed on handheld devices, we approach a singularity and a paradigm shift so profound that our entire legal system may be turned on its head.

Quantum computing embodies a future with possibilities akin to the pioneering spirit of space exploration. While classical computers retain prominence for many tasks, quantum computing offers unparalleled potential to tackle complex problems on an unprecedented scale, heralding a new era of innovation and discovery that fills us with hope and optimism. However, to fully capitalize on the potential of this tremendous technology, these kinds of legal concerns must be effectively addressed.

Original post:
Here Come the Qubits? What You Should Know About the Onset of Quantum Computing - IPWatchdog.com

Aramco signs agreement with Pasqal to deploy first quantum computer in the Kingdom of Saudi Arabia – Aramco

Aramco, one of the worlds leading integrated energy and chemicals companies, has signed an agreement with Pasqal, a global leader in neutral atom quantum computing, to install the first quantum computer in the Kingdom of Saudi Arabia.

The agreement will see Pasqal install, maintain, and operate a 200-qubit quantum computer, which is scheduled for deployment in the second half of 2025.

Ahmad Al-Khowaiter, Aramco EVP of Technology & Innovation, said: Aramco is delighted to partner with Pasqal to bring cutting-edge, high-performance quantum computing capabilities to the Kingdom. In a rapidly evolving digital landscape, we believe it is crucial to seize opportunities presented by new, impactful technologies and we aim to pioneer the use of quantum computing in the energy sector. Our agreement with Pasqal allows us to harness the expertise of a leading player in this field, as we continue to build state-of-the-art solutions into our business. It is also further evidence of our contribution to the growth of the digital economy in Saudi Arabia.

Georges-Olivier Reymond, Pasqal CEO & Co-founder, said: The era of quantum computing is here. No longer confined to theory, it's transitioning to real-world applications, empowering organisations to solve previously intractable problems at scale. Since launching Pasqal in 2019, we have directed our efforts towards concrete quantum computing algorithms immediately applicable to customer use cases. Through this agreement, we'll be at the forefront of accelerating commercial adoption of this transformative technology in Saudi Arabia. This isn't just any quantum computer; it will be the most powerful tool deployed for industrial usages, unlocking a new era of innovation for businesses and society.

The quantum computer will initially use an approach called analog mode. Within the following year, the system will be upgraded to a more advanced hybrid analog-digital mode, which is more powerful and able to solve even more complex problems.

Pasqal and Aramco intend to leverage the quantum computer to identify new use cases, and have an ambitious vision to establish a powerhouse for quantum research within Saudi Arabia. This would involve leading academic institutions, with the aim of fostering breakthroughs in quantum algorithm development, a crucial step for unlocking the true potential of quantum computing.

The agreement also accelerates Pasqal's activity in Saudi Arabia, having established an office in the Kingdom in 2023, and follows the signing of a Memorandum of Understanding between the companies in 2022 to collaborate on quantum computing capabilities and applications in the energy sector. In 2023, Aramco's Wa'ed Ventures also participated in Pasqal's Series B fundraising round.

See the rest here:
Aramco signs agreement with Pasqal to deploy first quantum computer in the Kingdom of Saudi Arabia - Aramco

Quix checks off another condition to build universal quantum computer – Bits&Chips

Researchers using Quix Quantum's technology have successfully demonstrated the on-chip generation of so-called Greenberger-Horne-Zeilinger (GHZ) states, a critical component for the advancement of photonic quantum computing. The Dutch startup, which focuses on photonics-based quantum computing, hails the result as a breakthrough that validates the company's roadmap toward building a scalable universal quantum computer.

The creation of GHZ states is necessary for photonic quantum computers. In a matter-based quantum computer, qubits are stationary, typically positioned on a specialized chip. By contrast, a photonic quantum computer uses flying qubits of light to process and transmit information. This information is constantly passed from one state to another through a process called quantum teleportation. GHZ states, entanglements across three photonic qubits, are the crucial resource enabling the computer to maintain this information.
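For readers who want to see what a GHZ state looks like concretely, here is a small numpy sketch of the three-qubit state vector, an illustrative classical calculation unrelated to Quix's photonic hardware. Measuring any subset of the qubits leaves the rest perfectly correlated, which is the resource the article describes.

```python
import numpy as np

# Three-qubit GHZ state: (|000> + |111>) / sqrt(2), stored as an
# 8-component state vector; index k encodes the qubits' bitstring.
ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

probs = np.abs(ghz) ** 2
print(probs[0b000], probs[0b111])  # only 000 and 111 ever occur

# Entanglement witness: trace out qubits 1-2 and check that qubit 0's
# reduced state is maximally mixed (purity 0.5 instead of 1.0).
psi = ghz.reshape(2, 4)        # split: qubit 0 vs qubits 1-2
rho0 = psi @ psi.conj().T      # reduced density matrix of qubit 0
purity = np.trace(rho0 @ rho0).real
print(purity)  # -> 0.5: no single-qubit description exists on its own
```

A purity below 1 for every single-qubit marginal is precisely what it means for the state to be entangled rather than a product of independent qubits.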

This milestone demonstrates the capability of photonic quantum computers to generate multi-photon entanglement in a way that advances the roadmap toward large-scale quantum computation. The generation of GHZ states is evidence of the transformative potential of Quix Quantums photonic quantum computing technology, commented CEO Stefan Hengesbach of Quix.

Quix's next challenge is now making many of these devices. When comparing one GHZ state to a million GHZ states, think of it as the spark needed to create a blazing fire. The more GHZ states a photonic quantum computer contains, the more powerful it becomes, added Chief Scientist Jelmer Renema.

More here:
Quix checks off another condition to build universal quantum computer - Bits&Chips

How Nvidia co-founder plans to turn Hudson Valley into a tech powerhouse greater than Silicon Valley – New York Post

A co-founder of chip maker Nvidia is bankrolling a futuristic quantum computer system at Rensselaer Polytechnic Institute and wants to turn New York's Hudson Valley into a tech powerhouse.

Curtis Priem, 64, donated more than $75 million so that the Albany-area college could obtain the IBM-made computer, the first such device on a university campus anywhere in the world, the Wall Street Journal reported.

The former tech executive and RPI alum said his goal is to establish the area around the school, based in Troy, into a hub of talent and business as quantum computing becomes more mainstream in the years ahead.

We've renamed Hudson Valley as Quantum Valley, Priem told the Journal. It's up to New York whether they want to become Silicon State, not just a valley.

The burgeoning technology uses subatomic quantum bits, or qubits, to process data much faster than conventional binary computers. The devices are expected to play a key role in the development of advanced AI systems.

Priem will reportedly fund the whopping $15 million per year required to rent the computer, which is kept in a building that used to be a chapel on RPI's campus.

RPI President Martin Schmidt told the newspaper that the school will begin integrating the device into its curriculum and ensure it is accessible to the student body.

Representatives for IBM and RPI did not immediately return The Posts request for comment.

An electrical engineer by trade, Priem co-founded Nvidia alongside its current CEO Jensen Huang and Chris Malachowsky in 1993. He served as the company's chief technology officer until retiring in 2003.

Priem sold most of his stock in retirement and used the money to start a charitable foundation.

He serves as vice chair of the board at RPI and has reportedly donated hundreds of millions of dollars to the university.

Nvidia has surged in value as various tech firms rely on its computer chips to fuel the race to develop artificial intelligence.

The company's stock has surged 95% to nearly $942 per share since January alone. Nvidia's market cap exceeds $2.3 trillion, making it the world's third-most valuable company behind Microsoft and Apple.

In November 2023, Forbes estimated that Priem would be one of the world's richest people, with a personal fortune of $70 billion, if he hadn't sold off most of his Nvidia shares.

Go here to see the original:
How Nvidia co-founder plans to turn Hudson Valley into a tech powerhouse greater than Silicon Valley - New York Post

ISC 2024 A Few Quantum Gems and Slides from a Packed QC Agenda – HPCwire

If you were looking for quantum computing content, ISC 2024 was a good place to be last week; there were around 20 quantum computing-related sessions. QC even earned a slide in Kathy Yelick's opening keynote, Beyond Exascale. Many of the quantum sessions (and, of course, others) were video-recorded, and ISC has now made them freely accessible.

Not all were recorded, however. For example, what sounded like a tantalizing BOF panel, Toward Hardware Agnostic Standards in Hybrid HPC/Quantum Computing, featuring Bill Gropp (NCSA, University of Illinois), Philippe Deniel (Commissariat a l'Energie Atomique (CEA)), Mitsuhisa Sato (RIKEN), Travis Humble (ORNL), Venkatesh Kannan (Ireland's High Performance Centre), and Kristel Michielsen (Julich Supercomputing Center), was not. I was sorry to miss that.

Regardless, there's a wealth of material online, and it's worth looking through the ISC 2024 inventory for subjects, speakers, and companies of interest (registration may be required). Compiled below are a few QC soundbites from ISC.

Yelick, vice chancellor for research at the University of California, covered a lot of ground in her keynote examining the tension and opportunities emerging from the clash of traditional FP64 HPC and mixed-precision AI and how the commercial supply line of advanced chips is changing. Quantum computing earned a much smaller slice.

I really just have this one slide about quantum. There's been some really exciting progress, if you have been following this, in things like error correction over the last year, with really significant improvements in terms of the ability to build error-corrected quantum systems. On the other hand, I would say we don't yet have an integrated circuit kind of transistor model yet, right. We've got a bunch of transistors, [i.e.] we've got a whole bunch of different kinds of qubits that you can build, [and] there's still some debate [over them].

In fact, one of the latest big error correction results was actually not for the superconducting qubits, which is what a lot of the early startups were in, but for AMO (atomic, molecular, optical) physics. So this is really looking at the fact that we're not yet at a place where we can rely on this for the next generation of computing, which is not to say that we should be ignoring it. I'm really interested to see how [quantum computing evolves and] also thinking about how much classical computing we're going to need with quantum, because that's also going to be a big challenge with quantum. [It's] very exciting, but it's not replacing the general-purpose kind of computing that we do for science and engineering.

Not sure if that's a glass half-full or half-empty perspective. Actually, many of the remaining sessions tackled the questions she posed, including the best way to implement hybrid HPC-quantum systems, error correction and error mitigation, and the jostling among competing qubit types.

It was easy to sympathize (sort of) with speakers presenting at the Quantum Computing Status of Technologies session, moderated by Valeria Bartsch of Fraunhofer CFL. The speakers came from companies developing different qubit modalities and, naturally, at least a small portion of their brief talks touted their company's technology.

She asked, Here's another [submitted question]: What is the most promising quantum computing technology that your company is not developing yourself? I love that one. And everybody has to answer it now. You can think for a few seconds.

Very broadly speaking, neutral atom, trapped ion, and superconducting are perhaps the most advanced qubit modalities currently, and each speaker presented a bit of background on their company's technology and progress. Trapped ions boast long coherence times but somewhat slower switching speeds. Superconducting qubits are fast, and perhaps easier to scale, but error-prone. Neutral atoms also have long coherence times but have so far been used mostly for analog computing, though efforts are moving quickly to implement gate-based computing. To Hayes' point, Majorana (topological) qubits would be inherently resistant to error.

Not officially part of the ISC program, Hyperion delivered its mid-year HPC market update online just before the conference. The full HPCwire coverage is here, and Hyperion said it planned to make its recorded presentation and slides available on its website. Chief Quantum Analyst Bob Sorensen provided a brief QC snapshot during the update, predicting the WW QC market will surpass $1 billion in 2025.

Sorensen noted, So this is a quick chart (above) that just shows the combination of the last four estimates that we made. You can see, starting in 2019, all the way up to this 2023 estimate that reaches the $1.5 billion in 2026 I talked about earlier. Now my concern here is always that it's dangerous to project out too far. So we do tend to limit the forecast to these kinds of short ranges, simply because a nascent sector like quantum has so much potential but at the same time has some significant technical hurdles to overcome, [which] means that there can be an inflection point, most likely though in the upward direction.

He also pointed out that a new use case, a new breakthrough in modality or algorithms, any kind of significant driver that brings more interest in and performance to quantum, can significantly change the trajectory here on the upside.

Sorensen said, Just to give you a sense of how these vendors that we spoke to looked at algorithms: we see the big three are still the big three in mod-sim, optimization, and AI, with some interest in cybersecurity aspects, post-quantum encryption kinds of research and such, as well as Monte Carlo processes taking advantage of quantum capability to generate provable random numbers to support the Monte Carlo processing.

Interesting here is that we're seeing a lot more other (17%). This is the first time we've seen that. We think it is [not so much] about new algorithms, but perhaps hybrid mod-sim, optimization, or machine learning that feeds into the optimization process. So we think we're seeing more hybrid applications emerging as people take a look at the algorithms and decide what solves the use case that they have in hand, he said.

Satoshi Matsuoka, director of RIKEN Center for Computational Science, provided a quick overview of Fugaku plans for incorporating quantum computing as well as touching on the status of the ABCI-Q project. He, of course, has been instrumental with both systems. Both efforts emphasize creating a hybrid HPC-AI-Quantum infrastructure.

The ABCI-Q infrastructure (slide below) will include a variety of quantum-inspired and actual quantum hardware. Fujitsu will supply the former systems. Currently, quantum computers based on neutral atoms, superconducting qubits, and photonics are planned. Matsuoka noted this is well-funded, at a few hundred million dollars, with much of the work geared toward industry.

Rollout of the integrated quantum-HPC hybrid infrastructure at Fugaku is aimed at the 2024/25 timeframe. It's also an ambitious effort.

About the Fugaku effort, Matsuoka said, [This] project is funded by a different ministry, in which we have several real quantum computers, IBM's Heron (superconducting QPU), a Quantinuum system (trapped ion qubits), and quantum simulators. So real quantum computers and simulators are to be coupled with Fugaku.

The objective of the project [is to] come up with a comprehensive software stack, such that when more useful real quantum computers come online, we can move the entire infrastructure to any of those quantum computers, and to their successors, to be deployed to solve real problems. This will be one of the largest hybrid supercomputers.

The aggressive quantum-HPC integration sounds a lot like what's going on in Europe. (See HPCwire coverage, Europe's Race towards Quantum-HPC Integration and Quantum Advantage.)

The topic of benchmarking also came up during Q&A at one session. A single metric such as the Top500 is generally not preferred. But what then, even now during the so-called NISQ (noisy intermediate-scale quantum) computing era?

One questioner said, Let's say interesting algorithms and problems. Is there anything like, and I'm not talking about a Top500 list for quantum computers, an algorithm where we can compare systems? For example, Shor's algorithm. So who did it, and what is the best performance or the largest numbers you were able to factorize?

Hayes (Quantinuum) said, So we haven't attempted to run Shor's algorithm, and interesting implementations of Shor's algorithm are going to require fault tolerance to factor a number that a classical computer can't. But you know, that doesn't mean it can't be a nice benchmark to see which company can factor the largest one. I did show some data on the quantum Fourier transform. That's a primitive in Shor's algorithm. I would say that that'd be a great candidate for benchmarking the progress and fault tolerance.

More interesting benchmarks for the NISQ era are things like quantum volume, and there are some other ones that can be standardized, and you can make fair comparisons. So we try to do that. You know, they're not widely or universally adopted, but there are organizations out there trying to standardize them. It's difficult getting everybody marching in the same direction.
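For context on the primitive Hayes mentions: the quantum Fourier transform on n qubits is, written as a matrix, the unitary discrete Fourier transform acting on 2^n amplitudes. The numpy sketch below is purely a classical illustration of that matrix; on quantum hardware the same transform is implemented with O(n^2) one- and two-qubit gates, which is what makes it a natural benchmark target.

```python
import numpy as np

def qft_matrix(n: int) -> np.ndarray:
    """Unitary DFT matrix on 2^n amplitudes: F[j, k] = omega^(j*k) / sqrt(2^n)."""
    dim = 2 ** n
    omega = np.exp(2j * np.pi / dim)
    j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
    return omega ** (j * k) / np.sqrt(dim)

F = qft_matrix(3)  # QFT on 3 qubits: an 8 x 8 unitary

# Unitarity check: F applied after its conjugate transpose is the identity.
print(np.allclose(F @ F.conj().T, np.eye(8)))  # -> True
```

The exponential cost of even storing this matrix classically (2^n x 2^n entries) versus the polynomial gate count on a quantum device is the crux of why the QFT, and Shor's algorithm built on it, is interesting at all.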

Corcoles (IBM) added, I think benchmarking in quantum has an entire community around it, and they have been working on it for more than a decade. I read your question as focusing on application-oriented benchmarks versus system-oriented benchmarks. There are layers of subtlety there as well. If we think about Shor's algorithm, for example, there were recent works last year suggesting there's more than one way to run Shor's. Depending on the architecture, you might choose one or another way.

An architecture that is faster might choose to run many circuits in parallel that can capture Shor's algorithm and then do the post-processing, while an architecture that might take more time might just want to run one single circuit with a high probability of measuring the right answer. You could compare run times, but there's probably going to be differences that add to the uncertainty of what technology you will use, meaning that there might be a regime of factoring where you might want to choose one aspect or another, but then it depends on your particular physical implementation, he said.

Macri (QuEra) said, My point is we're not yet at the point where we can really [compare systems]. You know, we don't want to compete directly with our technologies. I would say that, especially for what concerns applications, we need to adopt a collaborative approach. So for example, there are certain areas where these benchmarks that you mentioned are not really applicable. One of them is quantum simulation, and we have seen really a lot of fantastic results from our technology, as well as from ion traps and superconducting qubits.

"It doesn't really make sense to compare the basic features of the technologies so that, you know, we can a priori identify the specific application or result that you want to achieve. I would say let's focus on advancing the technology. We already know that there are certain types of devices that outperform others for specific applications, and we will decide these things perhaps at a later stage. I agree for very complex tasks, such as the quantum Fourier transform, or perhaps Shor's algorithm, but I think, to be honest, it's still too preliminary [for effective system comparisons]."

As noted, this was a break-out year for quantum at ISC, which has long had quantum sessions, but never this many. Europe's aggressive funding, procurements, and HPC-quantum integration efforts make it clear it does not intend to be left behind in the quantum computing land rush, with, hopefully, a gold rush to follow.

Stay tuned.

Read the original here:
ISC 2024 A Few Quantum Gems and Slides from a Packed QC Agenda - HPCwire

NIST quantum-resistant algorithms to be published within weeks, top White House advisor says – The Record from Recorded Future News

Update, May 24: Includes correction from NIST about the number of algorithms to be released.

The U.S. National Institute of Standards and Technology (NIST) will release post-quantum cryptographic algorithms in the next few weeks, a senior White House official said on Monday.

Anne Neuberger, the White House's top cyber advisor, told an audience at the Royal United Services Institute (RUSI) in London that the release of the algorithms was a "momentous moment," as they marked a major step in the transition to the next generation of cryptography.

The transition is being made in apprehension of what is called a cryptographically relevant quantum computer (CRQC), a device theoretically capable of breaking the encryption "that's at the root of protecting both corporate and national security secrets," said Neuberger. NIST made a preliminary announcement of the algorithms in 2022.

Following publication, a spokesperson for NIST told Recorded Future News it was planning to release three finalized algorithms this summer and not four, as Neuberger had said in London.

Conrad Prince, a former official at GCHQ and now a distinguished fellow at RUSI, told Neuberger that during his previous career there had consistently been concern about hostile states gaining the capability to decrypt the plaintext of secure messages, although that capability was consistently estimated as being roughly a decade away, and had been for the last 20 years.

Neuberger said the U.S. intelligence community's estimate is similar, the early 2030s, for when a CRQC would be operational. But the timeframe is relevant, said the White House advisor, because there is national security data that is collected today and, even if decrypted eight years from now, can still be damaging.

Britain's NCSC has warned that contemporary threat actors could be collecting and storing intelligence data today for decryption at some point in the future.

"Given the cost of storing vast amounts of old data for decades, such an attack is only likely to be worthwhile for very high-value information," stated the NCSC. "As such, the possibility of a CRQC existing at some point in the next decade is a very relevant threat right now."

Neuberger added: "Certainly there's some data that's time sensitive, you know, a ship that looks to be transporting weapons to a sanctioned country, probably in eight years we don't care about that anymore."

Publishing the new NIST algorithms is a protection against adversaries collecting the most sensitive kinds of data today, Neuberger added.

A spokesperson for NIST told Recorded Future News: "The plan is to release the algorithms this summer. We don't have anything more specific to offer at this time."

But publishing the algorithms is not the last step in moving to a quantum-resistant computing world. The NCSC has warned it is actually just the second step in what will be a very complicated undertaking.

Even if any one of the algorithms proposed by NIST achieves universal acceptance as something that is unbreakable by a quantum computer, it would not be a simple matter of just swapping those algorithms in for the old-fashioned ones.

Part of the challenge is that most systems that currently depend on public-key cryptography for their security are not necessarily capable of running the resource-heavy software used in post-quantum cryptography.

Ultimately, the security of public-key cryptographic systems relies on the mathematical difficulty of factoring very large numbers into their prime factors, something that traditional computers find prohibitively difficult.

However, research by American mathematician Peter Shor, published in 1994, proposed an algorithm that could be run on a quantum computer to find these prime factors with far more ease, potentially undermining some of the key assumptions about what makes public-key cryptography secure.
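Concretely, Shor's algorithm reduces factoring N to finding the multiplicative order r of a chosen base a modulo N; only that order-finding step needs a quantum computer, and the rest is cheap classical arithmetic. A toy sketch (the brute-force `order` loop stands in for the quantum step, so this only works for tiny numbers):

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r % N == 1 (brute force; the quantum
    speedup in Shor's algorithm comes from finding r efficiently)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Classical pre/post-processing of Shor's algorithm for base a."""
    r = order(a, N)
    if r % 2:
        return None                 # odd order: retry with another a
    y = pow(a, r // 2, N)
    f = gcd(y - 1, N)
    if 1 < f < N:
        return sorted((f, N // f))
    return None

print(shor_classical(15, 7))  # [3, 5]
```

For 15 with base 7 the order is 4, giving 7² = 49 ≡ 4 (mod 15) and the factors gcd(3, 15) = 3 and 15/3 = 5. A CRQC threatens RSA precisely because it could run the order-finding step for 2048-bit N.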

The good news, according to the NCSC, is that while advances in quantum computing continue to be made, the machines that exist today "are still limited, and suffer from relatively high error rates in each operation they perform," the agency stated.

But the NCSC warned that in the future "it is possible that error rates can be lowered such that a large, general-purpose quantum computer could exist, but it is impossible to predict when this may happen."


Alexander Martin is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.

See original here:
NIST quantum-resistant algorithms to be published within weeks, top White House advisor says - The Record from Recorded Future News

Alice & Bob’s Cat Qubit Research Published in Nature – HPCwire

PARIS and BOSTON, May 23, 2024 Alice & Bob, a global leader in the race for fault-tolerant quantum computing, today announced the publication of its foundational research in Nature, showcasing significant advancements in cat qubit technology.

The study, "Quantum control of a cat-qubit with bit-flip times exceeding ten seconds," realized in collaboration with the QUANTIC Team (Mines Paris PSL, École Normale Supérieure and INRIA), demonstrates an unprecedented improvement in the stability of superconducting qubits, marking a critical milestone towards useful fault-tolerant quantum computing.

The researchers have significantly extended bit-flip times from milliseconds to tens of seconds, thousands of times better than any other superconducting qubit type.

Quantum computers face two types of errors: bit-flips and phase-flips. Cat qubits exponentially reduce bit-flips, which are analogous to classical bit flips in digital computing. As a result, the remaining phase-flips can be addressed more efficiently with simpler error correcting codes.
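The intuition behind "simpler error correcting codes" can be shown with the classical analogue of bit-flip correction, the three-bit repetition code: a logical error then requires at least two simultaneous flips, so the logical error rate falls well below the physical one. An illustrative sketch (not Alice & Bob's actual code, which protects against phase-flips on cat qubits):

```python
import random

def encode(bit):
    # Repetition code: one logical bit -> three physical copies.
    return [bit, bit, bit]

def noisy_channel(codeword, p_flip):
    # Flip each physical bit independently with probability p_flip.
    return [b ^ (random.random() < p_flip) for b in codeword]

def decode(codeword):
    # Majority vote corrects any single bit-flip.
    return int(sum(codeword) >= 2)

random.seed(0)
p, trials = 0.05, 10_000
errors = sum(decode(noisy_channel(encode(1), p)) != 1 for _ in range(trials))
# A logical error needs >= 2 flips (probability ~3p^2), so the observed
# rate lands well below the raw 5% physical flip rate.
print(errors / trials)
```

Cat qubits effectively get the bit-flip half of this protection from the hardware itself, leaving only phase-flips for the code to handle.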

The researchers used Alice & Bob's Boson 3 chipset for this record-breaking result, which features a cat qubit design named TomCat. TomCat employs an efficient quantum tomography (measurement) protocol that allows for the control of quantum states without the use of a transmon, a common circuit used by many quantum companies but one of the major sources of bit-flips for cat qubits. This design also minimizes the footprint of the qubit on the chip, removing drivelines, cables, and instruments, making this stable qubit scalable. Recently, Alice & Bob made publicly available their new Boson 4 chipset, which reaches over 7 minutes of bit-flip lifetime. The results from this Nature publication can therefore be reproduced by users on Boson 4 over Google Cloud.

Although Alice & Bob's latest Boson chips are getting closer to the company's bit-flip protection targets, Alice & Bob plans to further advance the technology. The next iterations will focus on boosting cat qubit phase-flip time and readout fidelity to reach the requirements of its latest architecture for delivering a 100-logical-qubit quantum computer.

Key advances highlighted in the research include:

About Alice & Bob

Alice & Bob is a quantum computing company based in Paris and Boston whose goal is to create the first universal, fault-tolerant quantum computer. Founded in 2020, Alice & Bob has already raised €30 million in funding, hired over 95 employees, and demonstrated experimental results surpassing those of technology giants such as Google and IBM. Alice & Bob specializes in cat qubits, a pioneering technology developed by the company's founders and later adopted by Amazon. Demonstrating the power of its cat architecture, Alice & Bob recently showed that it could reduce the hardware requirements for building a useful large-scale quantum computer by up to 200 times compared with competing approaches. Alice & Bob's cat qubit is available for anyone to test through cloud access.

Source: Alice & Bob

Read this article:
Alice & Bob's Cat Qubit Research Published in Nature - HPCwire

Glimpse of next-generation internet – Harvard Office of Technology Development

May 20th, 2024

By Anne Manning, Harvard Staff Writer Published in the Harvard Gazette

An up close photo of the diamond silicon vacancy center.

It's one thing to dream up a next-generation quantum internet capable of sending highly complex, hacker-proof information around the world at ultra-fast speeds. It's quite another to physically show it's possible.

That's exactly what Harvard physicists have done, using existing Boston-area telecommunication fiber, in a demonstration of the world's longest fiber distance between two quantum memory nodes. Think of it as a simple, closed internet carrying a signal encoded not by classical bits like the existing internet, but by perfectly secure, individual particles of light.

The groundbreaking work, published in Nature, was led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in the Department of Physics, in collaboration with Harvard professors Marko Lončar and Hongkun Park, who are all members of the Harvard Quantum Initiative. The Nature work was carried out with researchers at Amazon Web Services.

The Harvard team established the practical makings of the first quantum internet by entangling two quantum memory nodes separated by an optical fiber link deployed over a roughly 22-mile loop through Cambridge, Somerville, Watertown, and Boston. The two nodes were located a floor apart in Harvard's Laboratory for Integrated Science and Engineering.


Quantum memory, analogous to classical computer memory, is an important component of a quantum computing future because it allows for complex network operations and information storage and retrieval. While other quantum networks have been created in the past, the Harvard team's is the longest fiber network between devices that can store, process, and move information.

Each node is a very small quantum computer, made out of a sliver of diamond that has a defect in its atomic structure called a silicon-vacancy center. Inside the diamond, carved structures smaller than a hundredth the width of a human hair enhance the interaction between the silicon-vacancy center and light.

The silicon-vacancy center contains two qubits, or bits of quantum information: one in the form of an electron spin used for communication, and the other in a longer-lived nuclear spin used as a memory qubit to store entanglement, the quantum-mechanical property that allows information to be perfectly correlated across any distance.

(In classical computing, information is stored and transmitted as a series of discrete binary signals, say on/off, that form a kind of decision tree. Quantum computing is more fluid, as information can exist in stages between on and off, and is stored and transferred as shifting patterns of particle movement across two entangled points.)
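The perfect correlation the article describes can be illustrated with a minimal numpy sketch of a Bell state, the canonical two-qubit entangled state (illustrative only; the Harvard nodes use electron and nuclear spins in silicon-vacancy centers, not abstract vectors):

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2): a measurement of either
# qubit always agrees with the other, no matter how far apart they are.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)   # amplitudes for |00> and |11>

probs = np.abs(bell) ** 2            # Born rule: outcome probabilities
for label, p in zip(["00", "01", "10", "11"], probs):
    print(label, round(float(p), 2))
```

Half the time both qubits read 0, half the time both read 1, and the mixed outcomes 01 and 10 never occur, which is the "perfectly correlated" property that entanglement distribution over fiber preserves.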

Map showing path of two-node quantum network through Boston and Cambridge. Credit: Can Knaut via OpenStreetMap

Using silicon-vacancy centers as quantum memory devices for single photons has been a multiyear research program at Harvard. The technology solves a major problem in the theorized quantum internet: signal loss that can't be boosted in traditional ways.

A quantum network cannot use standard optical-fiber signal repeaters because simple copying of quantum information as discrete bits is impossible, which makes the information secure but also very hard to transport over long distances.

Silicon-vacancy-center-based network nodes can catch, store, and entangle bits of quantum information while correcting for signal loss. After cooling the nodes to close to absolute zero, light is sent through the first node and, by nature of the silicon-vacancy center's atomic structure, becomes entangled with it, and so able to carry the information.

"Since the light is already entangled with the first node, it can transfer this entanglement to the second node," explained first author Can Knaut, a Kenneth C. Griffin Graduate School of Arts and Sciences student in Lukin's lab. "We call this photon-mediated entanglement."

Over the last several years, the researchers have leased optical fiber from a company in Boston to run their experiments, fitting their demonstration network on top of the existing fiber to indicate that creating a quantum internet with similar network lines would be possible.

"Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step toward practical networking between quantum computers," Lukin said.

A two-node quantum network is only the beginning. The researchers are working diligently to extend the performance of their network by adding nodes and experimenting with more networking protocols.

The paper is titled "Entanglement of Nanophotonic Quantum Memory Nodes in a Telecom Network." The work was supported by the AWS Center for Quantum Networking's research alliance with the Harvard Quantum Initiative, the National Science Foundation, the Center for Ultracold Atoms (an NSF Physics Frontiers Center), the Center for Quantum Networks (an NSF Engineering Research Center), the Air Force Office of Scientific Research, and other sources.

Harvard Office of Technology Development enabled the strategic alliance between Harvard University and Amazon Web Services (AWS) to advance fundamental research and innovation in quantum networking.

Tags: Alliances, Collaborations, Quantum Physics, Internet, Publication

Press Contact: Kirsten Mabry | (617) 495-4157

See the rest here:
Glimpse of next-generation internet - Harvard Office of Technology Development

Exploring new frontiers with Fujitsu’s quantum computing research and development – Fujitsu

Fujitsu and RIKEN have already successfully developed a 64-qubit superconducting quantum computer at the RIKEN-RQC-Fujitsu Collaboration Center, which was jointly established by the two organizations (*1). Our interviewee, researcher Shingo Tokunaga, is currently participating in a joint research project with RIKEN. He majored in electronic engineering at university and worked on microwave-related research topics. After joining Fujitsu, he worked in a variety of software fields, including network firmware development as well as platform development for communication robots. Currently, he is applying his past experience in the Quantum Hardware Team at the Quantum Laboratory to embark on new challenges.

In what fields do you think quantum computing can be applied?

Shingo: Quantum computing has many potential applications, such as finance and healthcare, but especially in the quantum chemistry calculations used in drug development. If we can use it for these calculations, we can realize efficient, high-precision simulations in a short period of time. Complex calculations that traditionally take a long time to solve on conventional computers are expected to be solved quickly by quantum computers. One example is finding solutions to combinatorial optimization problems, such as molecular structure patterns. The spread of the novel coronavirus made the development of vaccines and therapeutics urgent, and in situations like these, where rapid responses are needed, I believe the time will come when quantum computers can be utilized.

Fujitsu is collaborating with world-leading research institutions to advance research and development in all technology areas, from quantum devices to foundational software and applications, with the aim of realizing practical quantum computers. Additionally, we are also advancing the development of hybrid technologies (*2) for quantum computers and high-performance computing technologies, represented by the supercomputer Fugaku, which will be necessary for large-scale calculations until the full practicality of quantum computers is achieved.

What themes are you researching? What are your challenges and goals?

Shingo: One of the achievements of our collaborative research with RIKEN is the construction of a 64-qubit superconducting quantum computer. Superconducting quantum computers operate by manipulating quantum bits on quantum chips cooled to under 20 mK in ultra-low-temperature refrigerators, driving them with microwave signals of around 8 GHz, and reading out the state of the bits. However, since both bit operations and readouts are analog operations, errors are inherent. Our goal is to achieve higher fidelity in the control and readout of quantum bits, providing an environment where quantum algorithms can be executed with high computational accuracy, ultimately solving our customers' challenges.

What role do you play in the team?

Shingo: The Quantum Hardware Team consists of many members responsible for tasks such as designing quantum chips, improving semiconductor manufacturing processes, designing and constructing components inside the refrigerators, and designing and constructing control devices outside them. I am responsible for building the control devices and controlling the quantum bits. While much attention is often given to the development of the main body of quantum computers or quantum chips, it is by controlling and reading quantum bits with high precision that we deliver the results of the development team to users, and that is my role.

How do you control quantum bits, and in what sequence or process?

Shingo: The first step is the basic evaluation of the quantum chip, followed by calibration for controlling the quantum bits. First, we receive the quantum chip from the manufacturing team and perform performance measurements. To evaluate the chip, it is placed inside the refrigerator, and after closing the refrigerator's cover, which is multilayered for insulation, the inside is vacuumed and cooling begins. It usually takes about two days to cool from room temperature to 20 mK. In the basic evaluation, we confirm parameters such as the resonance frequency of the quantum bits and a coherence time called T1 (the time it takes for a qubit to relax back to its initialized state). Then we perform calibration for quantum bit operations and readouts. Bit operations and readouts may not always yield the desired results because there are interactions between the bits. The bit being controlled may be affected by neighboring bits, so it is necessary to control it based on the overall situation of the bits. Therefore, we investigate why the results did not meet expectations, consult with researchers at RIKEN, and make further efforts to minimize errors.
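A T1 measurement of the kind Tokunaga describes amounts to exciting the qubit, waiting a variable delay, and fitting the exponential decay of the excited-state population. A simplified sketch with synthetic data (the 50 µs value and the noise level are assumptions for illustration, not Fujitsu figures; real calibration software fits the exponential directly and accounts for readout offsets):

```python
import numpy as np

rng = np.random.default_rng(42)
t1_true = 50e-6                      # assumed relaxation time: 50 us

# Simulated T1 experiment: excite, wait t, measure P(still excited).
t = np.linspace(0, 200e-6, 40)
p_excited = np.exp(-t / t1_true) + rng.normal(0, 0.01, t.size)

# Estimate T1 with a linear fit to the log of the decay.
mask = p_excited > 0.05              # avoid log of noisy near-zero points
slope, _ = np.polyfit(t[mask], np.log(p_excited[mask]), 1)
t1_est = -1 / slope
print(round(t1_est * 1e6, 1), "microseconds (estimated T1)")
```

Repeating this fit after each calibration change is one way a control team can tell whether a new pulse shape or wiring tweak actually improved coherence.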

How do you approach the challenge of insufficient accuracy in bit operations and readouts?

Shingo: There are various approaches we can try, such as improving semiconductor processes, implementing noise-reduction measures in the control electronics, and changing the method of microwave signal irradiation. Our team studies the waveform, intensity, phase, and irradiation timing of the microwave signals needed to improve the accuracy of quantum bit control. Initially, we try existing methods described in papers on our quantum chip and then work to improve accuracy further from there.

What other areas do you focus on or innovate in, outside of your main responsibilities? Can you also explain the reasons for this?

Shingo: I am actively advancing tasks that contribute to further improving the performance of quantum computer hardware. The performance of a newly created quantum chip can only be evaluated by cooling it in a refrigerator and conducting measurements. Based on these results, it is important to determine what is needed to improve the hardware's performance and provide feedback to the quantum chip design and manufacturing teams.

For Fujitsu, the development of quantum computers marks a first-time challenge. Do you have any concerns?

Shingo: I believe that venturing into unknown territories is precisely where the value of a challenge lies, presenting opportunities for new discoveries and growth. Fujitsu is tackling quantum computer research and development by combining various technologies it has cultivated over the years. I aim to address challenges one by one and work towards achieving stable operation. Once stable operation is achieved, I hope to conduct research on new control methods.

What kind of activities are you undertaking to accelerate your research on quantum computers?

Shingo: Quantum computing is an unknown field even for me, so I am advancing development while consulting with researchers at RIKEN, our collaborative research partner. I aim to build a relationship of give and take, so I actively cooperate wherever I can contribute to RIKEN's research.

What is your outlook for future research?

Shingo: Ultimately, our goal is to utilize quantum computers to solve societal issues, but quantum computing is still in its early stages of development. I believe it is the urgent responsibility of our Quantum Hardware Team to provide application development teams with large numbers of qubits and quantum gates at high fidelity. In particular, improving the fidelity of two-qubit gate operations is a challenge in the field of control, and I aim to work on it. Additionally, I want to explore the development of a quantum platform that allows customers to get the most out of quantum computers.

We use technology to make people's lives happier. As a result of this belief, we have created various technologies and contributed to the development of society and our customers. At the Fujitsu Technology Hall, located in the Fujitsu Technology Park, you can visit mock-ups of Fujitsu's quantum computers, as well as experience the latest technologies such as AI.

Mock-up of a quantum computer exhibited at the Fujitsu Technology Hall

Read more:
Exploring new frontiers with Fujitsu's quantum computing research and development - Fujitsu

22 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create – Livescience.com

The artificial intelligence (AI) revolution is here, and it's already changing our lives in a wide variety of ways. From chatbots to sat-nav, AI has revolutionized the technological space, but in doing so it may be set to take over a wide variety of jobs, particularly those involving labor-intensive manual tasks.

But it's not all bad news: as with most new technologies, the hypothetical advent of artificial general intelligence (AGI), where machines are smarter than humans and can apply what they learn across multiple disciplines, could also lead to new roles. So what might the job market of the near future look like, and could your job be at risk?

One of the most mind-numbing and tedious jobs around today, data entry will surely be one of the first roles supplanted by AI. Instead of a human laboring over endless data sets and fiddly forms for hours on end, AI systems will be able to input and manage large amounts of data quickly and seamlessly, hopefully freeing up human workers for much more productive tasks.

You might already have endured robotic calls asking if you have been the victim of an accident that wasn't your fault, or whether you're keen to upgrade your long-distance calling plan, but this could be just a taste of things to come. AI services could easily take over the work of a whole call center, automatically dialling hundreds, if not thousands, of unsuspecting victims to spread the word, whether you like it or not.

On the friendlier side, AI customer service agents are already a common sight on the websites of many major companies. Often in the form of chatbots, these agents offer a first line of support, before deferring to a human where needed. In the not too distant future, though, expect the AI to take over completely, walking customers through their complaints or queries from start to finish.

Restaurant bookings can be a hassle, as overworked staff or maîtres d' try to juggle existing reservations with no-shows and chancers who try their arm at getting a last-minute slot. Booking a table will soon be a whole lot easier, however, with an entirely computerized system able to allocate slots and spaces with ease, and even juggle late cancellations or alterations without the need for anyone to lose their spot.

Although image generation has grabbed most of the headlines, AI voice creation has become a growing presence in the entertainment and creative world. Offering potentially unlimited customization options, directors and producers can now create a voice with whatever tone, style or accent they require, which is then able to say whatever they desire, without the need for costly retakes and without ever getting tired.

Text generation has quickly become one of the most-used aspects of AI technology, with copilots and other tools able to quickly generate large amounts of text based on a simple prompt. Whether you're looking to fill your new website with business-focused copy or to offer more detail on your latest product launch, AI text generation provides a quick and easy way to do it.

In a similar vein, many of the leading website builder services today offer a fully AI-powered service, allowing you to create the page of your dreams simply by entering a few prompts. From start-ups to sole traders and all the way to big business, there's no need to fiddle around with templates: simply tell the platform what you're after, and a personalized website will be yours to customize or publish in moments.

This one may still sound a bit more like the realm of science fiction, but with cars getting smarter by the year, fully AI-powered driving is not too much of a pipe dream any more. Far from the basic autopilot tools on offer today, the cars of the future may well be able to not just operate independently, but provide their passengers with a fully-curated experience, from air conditioning at just the right level, to your favorite radio station.

Another position that is based around humans taking in huge amounts of data and creating reports, accounting is set for an AI revolution that could see many roles replaced. No need to spend hours collating receipts and entering numbers into a spreadsheet when AI can quickly scan, identify and upload all the information needed, taking the stress out of tax season and answering any queries or questions with ease.

The legal industry is another one dominated by large amounts of data and paperwork, and also by role-specific processes and even language. This makes it another prime candidate for AI, which will be able to automate the lengthy data analysis and entry tasks undertaken by paralegals and legal assistants today, although given the scale or importance of the case involved, it may still be wise to retain some kind of human element.

Signing in for an appointment or a meeting is another job that many believe can easily be done by AI platforms. Rather than needing to bother or distract a human from their job, simply check in on a display screen, with your visitor's badge or meeting confirmation registered in seconds, allowing you (and everyone else) to get on with your day.

Similar to AI drivers, autonomous vehicles and robots powered by AI systems could soon be taking the role of delivery people. After scanning the list of destinations for any given day, the vehicle or platform would be able to quickly calculate the most efficient route, ensuring no waiting around all day for your package, as well as being able to instantly flag any issues or missed deliveries.

In a boost to current spell-checking tools, it may be that AI systems eventually graduate from suggesting or writing content to helping check it for mistakes. Once trained on a style guide or content guidelines, an AI editor could quickly scan through articles, documents and filings to spot any issues before flagging possible problems to a human supervisor, a particularly handy speed boost in highly regulated industries such as banking, insurance or healthcare.

Away from the written word, AI-powered platforms could soon be helping compose the next great pieces of music. Taking inspiration from vast libraries of existing pieces, these futuristic musicians could quickly dream up everything from film soundtracks to radio jingles, once again meaning companies or organizations would no longer need to pay human performers for day-long sessions consisting of multiple takes.

Another area which relies on quickly spotting trends and patterns among huge tranches of data, the statistics field could be quickly swamped by AI platforms. Whether it is at a business level, where companies could look to spot potential growth opportunities or risky situations, all the way down to the sports stats used by commentators and fans alike, AI can quickly come up with the figures needed.

A job that has already declined in importance over the past few years thanks to the emergence and widespread adoption of centralized collaboration tools, the role of project manager is another sure-fire target for AI. Rather than having a designated manager trying to keep tabs on the work being done by a number of disparate teams, an AI-powered central solution could collate all the progress in a single location, allowing everyone to view the latest updates and stay on top of their work.

We're already seeing the beginning of AI taking over the image design and generation space, with animation set to be one of the first fields to feel the effect. As more and more advanced AI programs emerge, creating any kind of customized animation will soon be easier than ever, with production studios able to easily create the movies, TV shows and other media they require.

In a similar vein to the entertainment industry, creating designs for new products, advertising campaigns and more will doubtless soon be another field dominated by AI. With a simple prompt, companies will be able to create the graphics they need, with potentially endless customization options that can be carried out instantly, with no need for back-and-forth with human designers.

Keeping track of potential security risks is another task that could be easily handled by AI, which will be able to continuously monitor multiple data fields and sensors to spot issues or threats before they take hold. Once detected, the systems would hopefully be able to take proactive action to lock down valuable data or company platforms, while alerting human agents and managers to ensure everything remains protected.

Many of us are perfectly comfortable booking and scheduling our vacations independently, but sometimes you want all of the stress of planning taken off your hands. Rather than leaving it to a human agent, AI travel service platforms could gather all of your requirements and come up with a tailored solution or itinerary exactly sculpted to your needs, without endless back and forth, taking all of the hassle out of your vacation planning.

Assessing the viability of insurance applications can be a lengthy process, with agents needing to take into consideration a huge number of potential risks and other criteria, often via specific formulae or structures. Rather than a human needing to spend all this time, AI agents could quickly scan through all the information provided and reach a decision much faster and more reliably.

One final field built on analyzing huge amounts of data, drawing on past knowledge, and spotting trends and actions before they happen, stock trading could also quickly become dominated by AI. AI systems will be able to act speedily to make the best deals for financial firms in the blink of an eye, outpacing and outperforming human traders with ease, and possibly leading to even bigger profits.

First, and perhaps most obviously, will be an increase in roles for people looking to advise businesses on exactly what kind of AI they should be utilizing. Simply grabbing as many AI tools and services as possible may have a tremendously destabilizing effect on a business, so having an expert who is able to outline the exact benefits and risks of specific technologies will become increasingly important for companies of all sizes.

In a similar vein, getting the most out of your company's new AI tools will be vital, so having trainers skilled in the right services will be absolutely critical. The ability to suggest to workers at all levels what they can utilize AI for will be incredibly useful for businesses everywhere, walking employees through the various platforms and educating them about any possible ill effects.

With chatbots and virtual agents becoming the main entry point for people encountering AI, knowing just how to communicate with such systems is going to be vital to making the relationship productive. Having experts who know the best way to talk to models such as ChatGPT, especially when it comes to phrasing specific questions or prompts, will be increasingly important as our dependence on AI models increases.

Once we're happy with how we communicate with AI models, the next big obstacle might be understanding what keeps them happy (or at least, productive). We may soon see experts who, much like human therapists, work with AI models to try to understand what makes them tick (including why they might show bias or toxicity) in order to make our relationships with them more effective overall.

On the occasion that something does go wrong (whether that's a poorly worded corporate email or an advertising campaign featuring an embarrassing slip-up), there will be a need for crisis managers who can step in and quickly defuse the situation. This may become increasingly important in situations where AI may put sensitive data or even lives at risk, although hopefully such incidents will be rare.

The next step along from a crisis involving AI agents or systems may be lawyers or legal experts who specialize in dealing with non-human creators. The ability to represent a defendant who isn't physically present in a courtroom may become increasingly valuable as the role of AI in everyday life, and the risks it poses, becomes more prevalent, especially as business data or personal information gets involved.

With AI set to push the limits of what can be done with analysis and data processing, it may be that some companies looking to adopt new tools are simply not equipped to handle the new technology. Stress testers will be able to evaluate the status of your tech stack and network to make sure that any AI tools your business is set to use don't have the opposite effect and push everything to breaking point.

With content creation becoming an increasingly important role for AI, we're likely to see such images, audio, and video appearing more frequently in everyday life. But we're already seeing backlash against obviously AI-generated content littered with errors, like extra fingers on humans or nonsense alphabets in advertising. Having a human editor who is able to audit this content and ensure it is accurate and fit for human consumption could be a vital new role.

In a similar vein, AI-generated content may also need a human sense-checking it before it hits the public domain. Similar to the work currently being done by proofreaders and editors on human-produced content around the world, making sure that AI documents flow properly and sound legitimate will be another crucial consideration, and should lead to a growth in these sorts of roles.

Finally, despite the efficiency and effectiveness of AI-generated content, there will still always be room for the human touch. Much like we already have authentic artists, or artisans who specialize in handmade goods, it may soon be that we have creators and painters who strive for their work to be authentically human, setting them apart from the AI hordes.

22 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create - Livescience.com

Meta AI Head: ChatGPT Will Never Reach Human Intelligence – PYMNTS.com

Meta's chief AI scientist thinks large language models will never reach human intelligence.

Yann LeCun asserts that artificial intelligence (AI) large language models (LLMs) such as ChatGPT have a limited grasp on logic, the Financial Times (FT) reported Wednesday (May 21).

These models, LeCun told the FT, "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan ... hierarchically."

He argued against depending on LLMs to reach human-level intelligence, as these models need the right training data to answer prompts correctly, thus making them intrinsically unsafe.

LeCun is instead working on a totally new cohort of AI systems that aim to power machines with human-level intelligence, though this could take 10 years to achieve.

The report notes that this is a potentially risky gamble, as many investors are hoping for quick returns on their AI investments. Meta recently saw its value shrink by almost $200 billion after CEO Mark Zuckerberg pledged to up spending and turn the tech giant into the leading AI company in the world.

Meanwhile, other companies are moving forward with enhanced LLMs in hopes of creating artificial general intelligence (AGI), or machines whose cognition surpasses that of humans.

For example, this week saw AI firm Scale raise $1 billion in a Series F funding round that valued the startup at close to $14 billion, with founder Alexandr Wang discussing the company's AGI ambitions in the announcement.

Hours later, the French startup called H revealed it had raised $220 million, with CEO Charles Kantor telling Bloomberg News the company is working toward "full-AGI."

However, some experts question AI's ability to think like humans. Among them is Akli Adjaoute, who has spent 30 years in the AI field and recently authored the book Inside AI.

Rather than speculating about whether the technology will think and reason, he views AI's role as an effective tool, stressing the importance of understanding AI's roots in data and its limitations in replicating human intelligence.

"AI does not have the ability to understand the way that humans understand," Adjaoute told PYMNTS CEO Karen Webster.

"It follows patterns. As humans, we look for patterns. For example, when I recognize the number 8, I don't see two circles. I see one. I don't need any extra power or cognition. That's what AI is based on. It's the recognition of algorithms and that's why they're designed for specific tasks."


What is artificial general intelligence, and is it a useful concept? – New Scientist

If you take even a passing interest in artificial intelligence, you will inevitably have come across the notion of artificial general intelligence. AGI, as it is often known, has ascended to buzzword status over the past few years as AI has exploded into the public consciousness on the back of the success of large language models (LLMs), a form of AI that powers chatbots such as ChatGPT.

That is largely because AGI has become a lodestar for the companies at the vanguard of this type of technology. ChatGPT creator OpenAI, for example, states that its mission is "to ensure that artificial general intelligence benefits all of humanity." Governments, too, have become obsessed with the opportunities AGI might present, as well as possible existential threats, while the media (including this magazine, naturally) report on claims that we have already seen "sparks of AGI" in LLM systems.

Despite all this, it isn't always clear what AGI really means. Indeed, that is the subject of heated debate in the AI community, with some insisting it is a useful goal and others that it is a meaningless figment that betrays a misunderstanding of the nature of intelligence and our prospects for replicating it in machines. "It's not really a scientific concept," says Melanie Mitchell at the Santa Fe Institute in New Mexico.

Artificial human-like intelligence and superintelligent AI have been staples of science fiction for centuries. But the term AGI took off around 20 years ago when it was used by the computer scientist Ben Goertzel and Shane Legg, cofounder of


OpenAI departures: Why cant former employees talk, but the new ChatGPT release can? – Vox.com

Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.

It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you've seen a certain 2013 Spike Jonze film. "Her," tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

But the product release of ChatGPT 4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company's co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (whom we put on the Future Perfect 50 list last year).

The resignations didn't come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman's temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman's return, but he's been mostly absent from the company since, even as other members of OpenAI's policy, alignment, and safety teams have departed.

But what really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying, "I'm confident that OpenAI will build AGI that is both safe and beneficial ... I am excited for what comes next."

Leike ... didn't. His resignation message was simply: "I resigned." After several days of fervent speculation, he expanded on this on Friday morning, explaining that he was worried OpenAI had shifted away from a safety-focused culture.

Questions arose immediately: Were they forced out? Is this delayed fallout of Altmans brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there's a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI," has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

While nondisclosure agreements aren't unusual in highly competitive Silicon Valley, putting an employee's already-vested equity at risk for declining or violating one is. For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet.

OpenAI did not respond to a request for comment in time for initial publication. After publication, an OpenAI spokesperson sent me this statement: "We have never canceled any current or former employee's vested equity, nor will we if people do not sign a release or nondisparagement agreement when they exit."

Sources close to the company I spoke to told me that this represented a change in policy as they understood it. When I asked the OpenAI spokesperson if that statement represented a change, they replied, "This statement reflects reality."

On Saturday afternoon, a little more than a day after this article published, Altman acknowledged in a tweet that there had been a provision in the company's off-boarding documents about potential equity cancellation for departing employees, but said the company was in the process of changing that language.

All of this is highly ironic for a company that initially advertised itself as OpenAI (that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner).

OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns. But now it has shed the most senior and respected members of its safety team, which should inspire some skepticism about whether safety is really the reason why OpenAI has become so closed.

OpenAI has spent a long time occupying an unusual position in tech and policy circles. Their releases, from DALL-E to ChatGPT, are often very cool, but by themselves they would hardly attract the near-religious fervor with which the company is often discussed.

What sets OpenAI apart is the ambition of its mission: to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity. Many of its employees believe that this aim is within reach; that with perhaps one more decade (or even less) and a few trillion dollars the company will succeed at developing AI systems that make most human labor obsolete.

Which, as the company itself has long said, is as risky as it is exciting.

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems," a recruitment page for Leike and Sutskever's team at OpenAI states. "But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade."

Naturally, if artificial superintelligence in our lifetimes is possible (and experts are divided), it would have enormous implications for humanity. OpenAI has historically positioned itself as a responsible actor trying to transcend mere commercial incentives and bring AGI about for the benefit of all. And they've said they are willing to do that even if that requires slowing down development, missing out on profit opportunities, or allowing external oversight.

"We don't think that AGI should be just a Silicon Valley thing," OpenAI co-founder Greg Brockman told me in 2019, in the much calmer pre-ChatGPT days. "We're talking about world-altering technology. And so how do you get the right representation and governance in there? This is actually a really important focus for us and something we really want broad input on."

OpenAI's unique corporate structure (a capped-profit company ultimately controlled by a nonprofit) was supposed to increase accountability. "No one person should be trusted here. I don't have super-voting shares. I don't want them," Altman assured Bloomberg's Emily Chang in 2023. "The board can fire me. I think that's important." (As the board found out last November, it could fire Altman, but it couldn't make the move stick. After his firing, Altman made a deal to effectively take the company to Microsoft, before being ultimately reinstated with most of the board resigning.)

But there was no stronger sign of OpenAI's commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and an apparently genuine willingness to ask OpenAI to change course if needed. When I said to Brockman in that 2019 interview, "You guys are saying, 'We're going to build a general artificial intelligence,'" Sutskever cut in. "We're going to do everything that can be done in that direction while also making sure that we do it in a way that's safe," he told me.

Their departure doesn't herald a change in OpenAI's mission of building artificial general intelligence; that remains the goal. But it almost certainly heralds a change in OpenAI's interest in safety work; the company hasn't announced who, if anyone, will lead the superalignment team.

And it makes it clear that OpenAI's concern with external oversight and transparency couldn't have run all that deep. If you want external oversight and opportunities for the rest of the world to play a role in what you're doing, making former employees sign extremely restrictive NDAs doesn't exactly follow.

This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company's leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world's input into how to do it justly and wisely.

But when there's real money at stake (and there are astounding sums of real money at stake in the race to dominate AI), it becomes clear that they probably never intended for the world to get all that much input. Their process ensures that former employees (those who know the most about what's happening inside OpenAI) can't tell the rest of the world what's going on.

The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It's hard to exercise accountability over a company whose former employees are restricted to saying "I resigned."

ChatGPT's new cute voice may be charming, but I'm not feeling especially enamored.

Update, May 18, 7:30 pm ET: This story was published on May 17 and has been updated multiple times, most recently to include Sam Altman's response on social media.

A version of this story originally appeared in the Future Perfect newsletter.

